\documentclass[11pt,twoside,final,openright]{xenstyle}
\usepackage{a4,graphicx,setspace}
\setstretch{1.15}
\input{style.tex}

\begin{document}

% TITLE PAGE
\pagestyle{empty}
\begin{center}
\vspace*{\fill}
\includegraphics{eps/xenlogo.eps}
\vfill
\vfill
\vfill
\begin{tabular}{l}
{\Huge \bf Users' manual} \\[4mm]
{\huge Xen v2.0 for x86} \\[80mm]

{\Large Xen is Copyright (c) 2004, The Xen Team} \\[3mm]
{\Large University of Cambridge, UK} \\[20mm]
{\large Last updated on 12th October, 2004}
\end{tabular}
\vfill
\end{center}
\cleardoublepage

% TABLE OF CONTENTS
\pagestyle{plain}
\pagenumbering{roman}
{ \parskip 0pt plus 1pt
\tableofcontents }
\cleardoublepage

% PREPARE FOR MAIN TEXT
\pagenumbering{arabic}
\raggedbottom
\widowpenalty=10000
\clubpenalty=10000
\parindent=0pt
\renewcommand{\topfraction}{.8}
\renewcommand{\bottomfraction}{.8}
\renewcommand{\textfraction}{.2}
\renewcommand{\floatpagefraction}{.8}
\setstretch{1.15}

\newcommand{\path}[1]{{\tt #1}}
\part{Introduction and Tutorial}
\chapter{Introduction}

{\bf
DISCLAIMER: This documentation is currently under active development
and as such there may be mistakes and omissions --- watch out for
these and please report any you find to the developers' mailing list.
Contributions of material, suggestions and corrections are welcome.
}
Xen is a {\em paravirtualising} virtual machine monitor (VMM) or
``Hypervisor'' for the x86 processor architecture. Xen can securely
multiplex heterogeneous virtual machines on a single physical machine
with near-native performance. The virtual machine technology
facilitates enterprise-grade functionality, including:

\begin{itemize}
\item Virtual machines with close to native performance.
\item Live migration of running virtual machines.
\item Excellent hardware support (use unmodified Linux device drivers).
\item Suspend to disk / resume from disk of running virtual machines.
\item Transparent copy-on-write disks.
\item Sandboxed, restartable device drivers.
\item Pervasive debugging --- debug whole OSes, from kernel to applications.
\end{itemize}
Xen support is available for an increasing number of operating
systems. The following OSs have either been ported already or a port
is in progress:

\begin{itemize}
\item Dragonfly BSD
\item FreeBSD 5.3
\item Linux 2.4
\item Linux 2.6
\item NetBSD 2.0
\item Plan 9
\item Windows XP
\end{itemize}

Right now, Linux 2.4 and 2.6 are available for Xen 2.0. The NetBSD
port will be updated to run on Xen 2.0, hopefully in time for the
NetBSD 2.0 release. It is intended that Xen support be integrated
into the official releases of Linux 2.6, NetBSD 2.0, FreeBSD and
Dragonfly BSD.
Even running multiple copies of Linux can be very useful: it provides
a means of containing faults to one OS image, gives performance
isolation between the various OS instances, and makes it easy to try
out multiple distros.

The Windows XP port is only available to those who have signed the
Microsoft Academic Source License.
Possible usage scenarios for Xen include:

\begin{description}
\item [Kernel development] Test and debug kernel modifications in a
      sandboxed virtual machine --- no need for a separate test
      machine.
\item [Multiple OS configurations] Run multiple operating systems
      simultaneously, for instance for compatibility or QA purposes.
\item [Server consolidation] Move multiple servers onto one box,
      providing performance and fault isolation at virtual machine
      boundaries.
\item [Cluster computing] Improve manageability and efficiency by
      running services in virtual machines, isolated from machine
      specifics, and load balance using live migration.
\item [High availability computing] Run device drivers in sandboxed
      domains for increased robustness.
\item [Hardware support for custom OSes] Export drivers from a
      mainstream OS (e.g. Linux) with good hardware support to your
      custom OS, avoiding the need for you to port existing drivers
      to achieve good hardware support.
\end{description}
\section{Structure}

\subsection{High level}

A Xen system has multiple layers. The lowest layer is Xen itself ---
the most privileged piece of code in the system. On top of Xen run
guest operating system kernels. These are scheduled pre-emptively by
Xen. On top of these run the applications of the guest OSs. Guest
OSs are responsible for scheduling their own applications within the
time allotted to them by Xen.

One of the domains --- {\em Domain 0} --- is privileged. It is
started by Xen at system boot and is responsible for initialising and
managing the whole machine. Domain 0 builds other domains and manages
their virtual devices. It also performs suspend, resume and
migration of other virtual machines. Where it is used, the X server
is also run in domain 0.

Within Domain 0, a process called ``Xend'' runs to manage the system.
Xend is responsible for managing virtual machines and providing access
to their consoles. Commands are issued to Xend over an HTTP
interface, either from a command-line tool or from a web browser.

XXX need diagram(s) here to make this make sense
\subsection{Paravirtualisation}

Paravirtualisation allows very high performance virtual machine
technology, even on architectures (like x86) which are traditionally
hard to virtualise.

Paravirtualisation requires guest operating systems to be {\em
ported} to run on the VMM. This process is similar to a port of an
operating system to a new hardware platform. Although operating
system kernels must explicitly support Xen in order to run in a
virtual machine, {\em user space applications and libraries do not
require modification}.
\section{Hardware Support}

Xen currently runs on the x86 architecture, but could in principle be
ported to others. In fact, it would have been rather easier to write
Xen for pretty much any other architecture, as x86 is particularly
tricky to handle. A good description of Xen's design, implementation
and performance is contained in the October 2003 SOSP paper, available
at:\\
{\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}\\
Work to port Xen to x86\_64 and IA64 is currently underway.

Xen is targeted at server-class machines, and the current list of
supported hardware very much reflects this, avoiding the need for us
to write drivers for ``legacy'' hardware. It is likely that some
desktop chipsets will fail to work properly with the default Xen
configuration: specifying {\tt noacpi} or {\tt ignorebiostables} when
booting Xen may help in these cases.
Xen requires a ``P6'' or newer processor (e.g. Pentium Pro, Celeron,
Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron).
Multiprocessor machines are supported, and we also have basic support
for HyperThreading (SMT), although this remains a topic for ongoing
research. We're also working on an x86\_64 port (though Xen should
already run on these systems just fine in 32-bit mode).

Xen can currently use up to 4GB of memory. It is possible for x86
machines to address up to 64GB of physical memory but (unless an
external developer volunteers) there are no plans to support these
systems. The x86\_64 port is the planned route to supporting more
than 4GB of memory.
In contrast to previous Xen versions, in Xen 2.0 device drivers run
within a privileged guest OS rather than within Xen itself. This means
that we should be compatible with the majority of device hardware
supported by Linux. The default XenLinux build contains support for
relatively modern server-class network and disk hardware, but you can
add support for other hardware by configuring your XenLinux kernel in
the normal way (e.g. \verb_# make ARCH=xen xconfig_).
\section{History}

``Xen'' is a Virtual Machine Monitor (VMM) originally developed by the
Systems Research Group of the University of Cambridge Computer
Laboratory, as part of the UK-EPSRC funded XenoServers project.

The XenoServers project aims to provide a ``public infrastructure for
global distributed computing'', and Xen plays a key part in that,
allowing us to efficiently partition a single machine to enable
multiple independent clients to run their operating systems and
applications in an environment providing protection, resource
isolation and accounting. The project web page contains further
information along with pointers to papers and technical reports:
{\tt http://www.cl.cam.ac.uk/xeno}

Xen has since grown into a project in its own right, enabling us to
investigate interesting research issues regarding the best techniques
for virtualising resources such as the CPU, memory, disk and network.
The project has been bolstered by support from Intel Research
Cambridge and HP Labs, who are now working closely with us. We're
also in receipt of support from Microsoft Research Cambridge to port
Windows XP to run on Xen.

Xen was first described in the 2003 SOSP paper \\
({\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}).
The first public release of Xen (1.0) was made in October 2003. Xen
was developed as a research project by the University of Cambridge
Computer Laboratory (UK). Xen was the first Virtual Machine Monitor
to make use of {\em paravirtualisation} to achieve near-native
performance virtualisation of commodity operating systems. Since
then, Xen has been extensively developed and is now used in production
scenarios on multiple sites.

Xen 2.0 is the latest release, featuring greatly enhanced hardware
support, configuration flexibility, usability and a larger complement
of supported operating systems. We think that Xen has the potential
to become {\em the} definitive open source virtualisation solution and
will work to conclusively achieve that position.
\chapter{Installation}

The Xen distribution includes three main components: Xen itself,
utilities to convert a standard Linux tree to run on Xen, and the
userspace tools required to operate a Xen-based system.

This manual describes how to install the Xen 2.0 distribution from
source. Alternatively, there may be packages available for your
operating system distribution.

\section{Prerequisites}
\label{sec:prerequisites}
\begin{itemize}
\item A working installation of your favourite Linux distribution.
\item A working installation of the GRUB bootloader.
\item An installation of Twisted v1.3 or above (see {\tt
http://www.twistedmatrix.com}). There may be a package available for
your distribution; alternatively it can be installed by running {\tt \#
make install-twisted} in the root of the Xen source tree.
\item The Linux bridge control tools (see {\tt
http://bridge.sourceforge.net}). There may be packages of these
tools available for your distribution.
\item The Linux IP routing tools.
\item make
\item python-dev
\item gcc
\item zlib-dev
\item libcurl
\item python2.3-pycurl
\item python2.3-twisted
\end{itemize}
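
As a quick sanity check, the following commands can confirm that the
bridge tools and Twisted are present. This is only a sketch --- the
exact package names and installation paths vary between distributions:

\begin{verbatim}
# brctl show                # bridge control tools installed?
# python -c 'import twisted; print twisted.__version__'
\end{verbatim}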
\section{Optional}
\begin{itemize}
\item The Python logging package (see {\tt http://www.red-dove.com/})
for additional Xend logging functionality.
\end{itemize}
\section{Install BitKeeper (Optional)}

To fetch a local copy of the source repository, first download the
BitKeeper tools. Download instructions can be obtained by filling out
the form at: \\
{\tt http://www.bitmover.com/cgi-bin/download.cgi}

The BitKeeper install program is designed to be run with X. If X is
not available, you can specify the install directory on the command
line.
\section{Download the Xen source code}

\subsection{Using BitKeeper}

The public master BK repository for the 2.0 release lives at: \\
{\tt bk://xen.bkbits.net/xen-2.0.bk}. You can use BitKeeper to
download it and keep it updated with the latest features and fixes.

Change to the directory in which you want to put the source code, then
run:
\begin{verbatim}
# bk clone bk://xen.bkbits.net/xen-2.0.bk
\end{verbatim}

Under your current directory, a new directory named `xen-2.0.bk'
has been created, which contains all the source code for the Xen
hypervisor and the Xen tools. The directory also contains `sparse'
Linux source trees, containing only the files that differ between
XenLinux and standard Linux.

Once you have cloned the repository, you can update to the newest
changes to the repository by running:
\begin{verbatim}
# cd xen-2.0.bk   # to change into the local repository
# bk pull         # to update the repository
\end{verbatim}
\subsection{Without BitKeeper}

The Xen source tree is also available in gzipped tarball form from the
Xen downloads page:\\
{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html}.
Prebuilt tarballs are also available but are very large.
\section{The distribution}

The Xen source code repository is structured as follows:

\begin{description}
\item[\path{tools/}] Xen node controller daemon (Xend), command line
      tools, control libraries.
\item[\path{xen/}] The Xen hypervisor itself.
\item[\path{linux-2.4.27-xen/}] Linux 2.4 support for Xen.
\item[\path{linux-2.6.8.1-xen/}] Linux 2.6 support for Xen.
\item[\path{docs/}] Various documentation files for users and
      developers.
\item[\path{extras/}] Currently this contains the Mini OS, aimed at
      developers.
\end{description}
\section{Build and install}

The Xen makefile includes a target ``world'' that will do the
following:

\begin{itemize}
\item Build Xen.
\item Build the control tools, including Xend.
\item Download (if necessary) and unpack the Linux 2.6 source code,
      and patch it for use with Xen.
\item Build a Linux kernel to use in domain 0 and a smaller
      unprivileged kernel, which can optionally be used for
      unprivileged virtual machines.
\end{itemize}
Inspect the Makefile if you want to see what goes on during a
build. Building Xen and the tools is straightforward, but XenLinux is
more complicated. The makefile needs a `pristine' Linux kernel tree
to which it will then add the Xen architecture files. You can tell the
makefile the location of the appropriate Linux compressed tar file by
setting the LINUX\_SRC environment variable, e.g. \\
\verb!# LINUX_SRC=/tmp/linux-2.6.8.1.tar.bz2 make world! \\
or by placing the tar file somewhere in the search path of {\tt
LINUX\_SRC\_PATH}, which defaults to ``{\tt .:..}''. If the makefile
can't find a suitable kernel tar file, it attempts to download it from
kernel.org (this won't work if you're behind a firewall).
After untarring the pristine kernel tree, the makefile uses the {\tt
mkbuildtree} script to add the Xen patches to the kernel. It then
builds two different XenLinux images: one with a ``-xen0'' extension,
which contains hardware device drivers and drivers for Xen's virtual
devices, and one with a ``-xenU'' extension that contains only the
virtual ones.
The procedure for building the Linux 2.4 port is similar: \\
\verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!

In both cases, if you have an SMP machine you may wish to give the
{\tt '-j4'} argument to make to get a parallel build.

XXX Insert details on customising the kernel to be built.
i.e. merging config files
The files produced by the build process are stored under the
\path{install/} directory. To install them in their default
locations, do: \\
\verb_# make install_\\

Alternatively, users with special installation requirements may wish
to install them manually by copying the files to their appropriate
destinations.
Take a look at the files in \path{install/boot/}:
\begin{itemize}
\item \path{install/boot/xen.gz} The Xen ``kernel''
\item \path{install/boot/vmlinuz-2.6.8.1-xen0} Domain 0 XenLinux kernel
\item \path{install/boot/vmlinuz-2.6.8.1-xenU} Unprivileged XenLinux kernel
\end{itemize}
The difference between the two Linux kernels that are built is due to
the configuration file used for each. The ``U'' suffixed unprivileged
version doesn't contain any of the physical hardware device drivers
--- it is 30\% smaller and hence may be preferred for your
non-privileged domains. The ``0'' suffixed privileged version can be
used to boot the system, as well as in driver domains and unprivileged
domains.
The \path{install/boot} directory will also contain the config files
used for building the XenLinux kernels, and also versions of Xen and
XenLinux kernels that contain debug symbols (\path{xen-syms} and
\path{vmlinux-syms-2.4.27-xen0}) which are essential for interpreting
crash dumps. Retain these files as the developers may wish to see
them if you post on the mailing list.
\section{Configuration}

\subsection{GRUB Configuration}

An entry should be added to \path{grub.conf} (often found under
\path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
This file is sometimes called \path{menu.lst}, depending on your
distribution. The entry should look something like the following:

\begin{verbatim}
title Xen 2.0 / XenLinux 2.6.8.1
  kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
  module /boot/vmlinuz-2.6.8.1-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0
\end{verbatim}
The kernel line tells GRUB where to find Xen itself and what boot
parameters should be passed to it. The module line describes the
location of the XenLinux kernel that Xen should start and the
parameters that should be passed to it.

As always when installing a new kernel, it is recommended that you do
not remove the original contents of \path{menu.lst} --- you may want
to boot up with your old Linux kernel in future, particularly if you
have problems.
XXX insert distro specific stuff in here (maybe)
Suse 9.1: no 'ro' option

\subsection{Serial Console}

In order to configure serial console output, it is necessary to add a
line to \path{/etc/inittab}. The XenLinux console driver is
designed to make this procedure the same as configuring a normal
serial console. Add the line:

{\tt c:2345:respawn:/sbin/mingetty ttyS0}

XXX insert distro specific stuff in here (maybe)
Suse 9.1: different boot scheme (/etc/init.d/)
\section{Test the new install}

It should now be possible to restart the system and use Xen. Reboot
as usual but choose the new Xen option when the GRUB screen appears.

What follows should look much like a conventional Linux boot. The
first portion of the output comes from Xen itself, supplying low level
information about itself and the machine it is running on. The
following portion of the output comes from XenLinux.

You may see some errors during the XenLinux boot. These are not
necessarily anything to worry about --- they may result from kernel
configuration differences between your XenLinux kernel and the one you
usually use.

When the boot completes, you should be able to log into your system as
usual. If you are unable to log in to your system running Xen, you
should still be able to reboot with your normal Linux kernel.
\chapter{Starting a domain}

The first step in creating a new domain is to prepare a root
filesystem for it to boot off. Typically, this might be stored in a
normal partition, a disk file, an LVM volume, or on an NFS server.

A simple way to do this is to boot from your standard OS install CD
and install the distribution into another partition on your hard
drive.

{\em N.B.} You can boot Xen and XenLinux without installing any
special userspace tools, but you will need to have the prerequisites
described in Section~\ref{sec:prerequisites} and the Xen control tools
installed before you proceed.
\section{From the web interface}

Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
more details) using the command: \\
\verb_# xensv start_ \\
This will also start Xend (see Chapter~\ref{cha:xend} for more
information).

The domain management interface will then be available at {\tt
http://your\_machine:8080/}. This provides a user friendly wizard for
starting domains and functions for managing running domains.
\section{From the command line}

Full details of the {\tt xm} tool are found in Chapter~\ref{cha:xm}.

This example explains how to use the \path{xmdefconfig} file. If you
require a more complex setup, you will want to write a custom
configuration file --- details of the configuration file formats are
included in Chapter~\ref{cha:config}.

The \path{xmdefconfig1} file is a simple template configuration file
for describing a single VM.

The \path{xmdefconfig2} file is a template description that is intended
to be reused for multiple virtual machines. Setting the value of the
{\tt vmid} variable on the {\tt xm} command line
fills in parts of this template.
\subsection{Editing \path{xmdefconfig}}

At minimum, you should edit the following variables in
\path{xmdefconfig}:

\begin{description}
\item[kernel] Set this to the path of the kernel you compiled for use
      with Xen. [e.g. {\tt kernel =
      '/root/xen-2.0.bk/install/boot/vmlinuz-2.4.27-xenU'}]
\item[memory] Set this to the size of the domain's memory in
      megabytes. [e.g. {\tt memory = 64}]
\item[disk] Set the first entry in this list to calculate the offset
      of the domain's root partition, based on the domain ID. Set the
      second to the location of \path{/usr} (if you are sharing it
      between domains). [i.e. {\tt disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
      (base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r']}]
\item[dhcp] Uncomment the dhcp variable, so that the domain will
      receive its IP address from a DHCP server. [i.e. {\tt dhcp="dhcp"}]
\end{description}
You may also want to edit the {\bf vif} variable in order to choose
the MAC address of the virtual ethernet interface yourself. For
example: \\
\verb_vif = ['mac=00:06:AA:F6:BB:B3']_ \\
If you do not set this variable, Xend will automatically generate a
random MAC address from an unused range.
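
Putting these together, a complete single-VM configuration file might
look something like the following. This is only a minimal sketch: the
partitions, kernel path and MAC address are placeholders that must
match your own system, and we assume the template also defines a {\tt
name} variable for the domain's descriptive name:

\begin{verbatim}
kernel = "/root/xen-2.0.bk/install/boot/vmlinuz-2.4.27-xenU"
memory = 64
name   = "ExampleDomain"
disk   = ['phy:sda7,sda1,w']
vif    = ['mac=00:06:AA:F6:BB:B3']
dhcp   = "dhcp"
\end{verbatim}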
\subsection{Starting the domain}

The {\tt xm} tool provides a variety of commands for managing domains.
Use the {\tt create} command to start new domains. To start the
virtual machine with virtual machine ID 1, type:

\begin{verbatim}
# xm create -c vmid=1
\end{verbatim}

The {\tt -c} switch causes {\tt xm} to turn into the domain's console
after creation. The {\tt vmid=1} sets the {\tt vmid} variable used in
the {\tt xmdefconfig} file. The tool uses the
\path{/etc/xen/xmdefconfig} file, since no custom configuration file
was specified on the command line.
\chapter{Domain management tasks}

The previous chapter described a simple example of how to configure
and start a domain. This chapter summarises the tools available to
manage running domains.

\section{Command line management}

Command line management tasks are also performed using the {\tt xm}
tool. For online help for the commands available, type:\\
\verb_# xm help_

\subsection{Basic management commands}

The most important {\tt xm} commands are: \\
\verb_# xm list_ : Lists all running domains. \\
\verb_# xm consoles_ : Gives information about the domain consoles. \\
\verb_# xm console_ : Opens a console to a domain,
e.g. \verb_# xm console 1_ (opens a console to domain 1).
\subsection{\tt xm list}

The output of {\tt xm list} is in rows of the following format:\\
\verb_domid name memory cpu state cputime_

\begin{description}
\item[domid] The domain ID this virtual machine is running in.
\item[name] The descriptive name of the virtual machine.
\item[memory] Memory size in megabytes.
\item[cpu] The CPU this domain is running on.
\item[state] Domain state consists of 5 fields:
  \begin{description}
  \item[r] running
  \item[b] blocked
  \item[p] paused
  \item[s] shutdown
  \item[c] crashed
  \end{description}
\item[cputime] How much CPU time (in seconds) the domain has used so far.
\end{description}
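
For instance, on a host running only domain 0, the output might look
something like this (illustrative values only; the exact column
rendering may differ):

\begin{verbatim}
# xm list
domid name      memory cpu state cputime
0     Domain-0     251   0 r----   208.6
\end{verbatim}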
The {\tt xm list} command also supports a long output format when the
{\tt -l} switch is used. This outputs the full details of the
running domains in Xend's SXP configuration format.
\chapter{Other kinds of storage}

It is possible to use any Linux block device to store virtual machine
disk images. This chapter covers some of the possibilities; note that
it is also possible to use network-based block devices and other
unconventional block devices.

\section{File-backed virtual block devices}

It is possible to use a file in Domain 0 as the primary storage for a
virtual machine. As well as being convenient, this also has the
advantage that the virtual block device will be {\em sparse} --- space
will only really be allocated as parts of the file are used. So if a
virtual machine uses only half its disk space, then the file really
takes up only half of the size allocated.
For example, to create a 2GB sparse file-backed virtual block device
(which actually only consumes 1KB of disk):

\verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_

Choose a free loopback device, and attach the file: \\
\verb_# losetup /dev/loop0 vm1disk_ \\
Make a file system on the loopback device: \\
\verb_# mkfs -t ext3 /dev/loop0_

Populate the file system, e.g. by copying from the current root:
\begin{verbatim}
# mount /dev/loop0 /mnt
# cp -ax / /mnt
\end{verbatim}
Tailor the file system by editing \path{/etc/fstab},
\path{/etc/hostname}, etc. Don't forget to edit the files in the
mounted file system, rather than your domain 0 filesystem, e.g. you
would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}. For
this example, set the root device in fstab to \path{/dev/sda1}.
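The root entry in \path{/mnt/etc/fstab} would then look something like
the following (a sketch --- mount options vary between distributions):

\begin{verbatim}
/dev/sda1  /  ext3  defaults  1  1
\end{verbatim}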
Now unmount (this is important!):\\
\verb_# umount /dev/loop0_

In the configuration file set:\\
\verb_disk = ['phy:loop0,sda1,w']_

As the virtual machine writes to its `disk', the sparse file will be
filled in and consume more space, up to the original 2GB.
{\em NB.} You will need to use {\tt losetup} to bind the file to
\path{/dev/loop0} (or whatever loopback device you chose) each time
you reboot domain 0. In the near future, Xend will track which loop
devices are currently free and do the binding itself, making this
manual effort unnecessary.
\section{LVM-backed virtual block devices}

XXX Put some simple examples here - would be nice if an LVM user could
contribute some, although obviously users would have to read the LVM
docs to do advanced stuff.
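
Pending contributed examples, the following minimal sketch shows the
general shape of the procedure, by analogy with the file-backed case
above. The volume group name {\tt vg0} and the size are placeholders,
and you should consult the LVM documentation for details:

\begin{verbatim}
# lvcreate -L 2G -n vm1disk vg0    # create a 2GB logical volume
# mkfs -t ext3 /dev/vg0/vm1disk    # make a filesystem on it
# mount /dev/vg0/vm1disk /mnt      # populate it as before
# cp -ax / /mnt
# umount /mnt
\end{verbatim}

The corresponding entry in the domain configuration file would then be
something like: \\
\verb_disk = ['phy:vg0/vm1disk,sda1,w']_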
\part{Quick Reference}

\chapter{Domain Configuration Files}
\label{cha:config}

XXX Could use a little explanation about possible values

Xen configuration files contain the following standard variables:

\begin{description}
\item[kernel] Path to the kernel image (on the server).
\item[ramdisk] Path to a ramdisk image (optional).
\item[builder] The name of the domain build function (e.g. {\tt
      'linux'} or {\tt 'netbsd'}).
\item[memory] Memory size in megabytes.
\item[cpu] CPU to assign this domain to.
\item[nics] Number of virtual network interfaces.
\item[vif] List of MAC addresses (random addresses are assigned if not
      given).
\item[disk] Regions of disk to export to the domain.
\item[dhcp] Set to {\tt 'dhcp'} if you want to DHCP allocate the IP
      address.
\item[netmask] IP netmask.
\item[gateway] IP address for the gateway (if any).
\item[hostname] Set the hostname for the virtual machine.
\item[root] Set the root device.
\item[nfs\_server] IP address for the NFS server.
\item[nfs\_root] Path of the root filesystem on the NFS server.
\item[extra] Extra string to append to the kernel command line.
\item[restart] Three possible options:
  \begin{description}
  \item[always] Always restart the domain, no matter what its exit
        code is.
  \item[never] Never restart the domain.
  \item[onreboot] Restart the domain if it requests reboot.
  \end{description}
\end{description}
It is also possible to include Python scripting commands in
configuration files. This is done in the \path{xmdefconfig} file in
order to handle the {\tt vmid} variable.
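
For example, a template along the lines of \path{xmdefconfig2} might
use {\tt vmid} roughly as follows. This is a hypothetical sketch ---
consult the real file for the exact variables it defines:

\begin{verbatim}
# vmid is set on the xm command line, e.g.  xm create vmid=3
vmid = int(vmid)

name   = "VM%d" % vmid
memory = 64
disk   = ['phy:sda%d,sda1,w' % (7 + vmid)]
\end{verbatim}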
\chapter{Xend (Node control daemon)}
\label{cha:xend}

The Xen Daemon (Xend) performs system management functions related to
virtual machines. It forms a central point of control for a machine
and can be controlled using an HTTP-based protocol. Xend must be
running in order to start and manage virtual machines.

Xend must be run as root because it needs access to privileged system
management functions. A small set of commands may be issued on the
Xend command line:
\begin{tabular}{ll}
\verb_# xend start_ & start Xend, if not already running \\
\verb_# xend stop_ & stop Xend if already running \\
\verb_# xend restart_ & restart Xend if running, otherwise start it \\
\end{tabular}
A SysV init script called {\tt xend} is provided to start Xend at
boot time. Running {\tt make install} installs this script in
\path{/etc/init.d} automatically. To enable it, you can make
symbolic links in the appropriate runlevel directories or use the
{\tt chkconfig} tool, where available.
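
For instance, on a SysV-style distribution the links might be created
like this (the runlevels and priority number are illustrative ---
follow your distribution's conventions):

\begin{verbatim}
# ln -s /etc/init.d/xend /etc/rc3.d/S98xend
# ln -s /etc/init.d/xend /etc/rc5.d/S98xend
\end{verbatim}

Where {\tt chkconfig} is available, something like
\verb_# chkconfig xend on_ achieves the same effect.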
Once Xend is running, more sophisticated administration can be done
using the Xensv web interface (see Chapter~\ref{cha:xensv}).
\chapter{Xensv (Web interface server)}
\label{cha:xensv}

Xensv is the server for the web control interface. It can be started
using:\\
\verb_# xensv start_ \\
and stopped using:\\
\verb_# xensv stop_ \\
It will automatically start Xend if it is not already running.

By default, Xensv will serve out the web interface on port 8080. This
can be changed by editing {\tt
/usr/lib/python2.2/site-packages/xen/sv/params.py}.

Once Xensv is running, the web interface can be used to manage running
domains and provides a user friendly domain creation wizard.
\chapter{The xm tool}
\label{cha:xm}

XXX Add description of arguments and switches for all the options

The xm tool is the primary tool for managing Xen from the console.
The general format of an xm command line is:

\begin{verbatim}
# xm command [switches] [arguments] [variables]
\end{verbatim}

The available {\em switches} and {\em arguments} are dependent on the
{\em command} chosen. The {\em variables} may be set using
declarations of the form {\tt variable=value} and may be used to set /
override any of the values in the configuration file being used,
including the standard variables described above and any custom
variables (for instance, the \path{xmdefconfig} file uses a {\tt vmid}
variable).
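
For example, assuming a configuration file that defines {\tt vmid} and
{\tt memory}, both can be overridden on the command line (the values
here are purely illustrative):

\begin{verbatim}
# xm create -c vmid=3 memory=128
\end{verbatim}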
The available commands are as follows:

\begin{description}
\item[create] Create a new domain.
\item[destroy] Kill a domain immediately.
\item[list] List running domains.
\item[shutdown] Ask a domain to shut down.
\item[dmesg] Fetch the Xen (not Linux!) boot output.
\item[consoles] List the available consoles.
\item[console] Connect to the console for a domain.
\item[help] Get help on xm commands.
\item[save] Suspend a domain to disk.
\item[restore] Restore a domain from disk.
\item[pause] Pause a domain's execution.
\item[unpause] Unpause a domain.
\item[pincpu] Pin a domain to a CPU.
\item[bvt] Set BVT scheduler parameters for a domain.
\item[bvt\_ctxallow] Set the BVT context switching allowance for the
      system.
\item[fbvt] Set the FBVT scheduler parameters for a domain.
\item[fbvt\_ctxallow] Set the FBVT context switching allowance for the
      system.
\item[atropos] Set the Atropos parameters for a domain.
\item[rrobin] Set the round robin time slice for the system.
\item[info] Get information about the Xen host.
\item[call] Call a Xend HTTP API function directly.
\end{description}
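
As a brief illustration of {\tt save} and {\tt restore}, a domain can
be checkpointed to a file and later revived. This is a sketch only ---
check {\tt xm help} for the exact syntax:

\begin{verbatim}
# xm save 1 /var/xen/vm1.chkpt      # suspend domain 1 to disk
# xm restore /var/xen/vm1.chkpt     # revive it later
\end{verbatim}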
\chapter{Glossary}

\begin{description}
\item[Atropos] One of the CPU schedulers provided by Xen. Atropos
      provides domains with absolute shares of the CPU, with
      timeliness guarantees and a mechanism for sharing out ``slack
      time''.

\item[BVT] The BVT scheduler is used to give proportional fair shares
      of the CPU to domains.

\item[Exokernel] A minimal piece of privileged code, similar to a {\bf
      microkernel} but providing a more `hardware-like' interface to
      the tasks it manages. This is similar to a paravirtualising VMM
      like {\bf Xen} but was designed as a new operating system
      structure, rather than specifically to run multiple conventional
      OSs.

\item[FBVT] A derivative of the {\bf BVT} scheduler that aims to give
      better fairness performance to IO intensive domains in
      competition with CPU intensive domains.
\item[Domain] A domain is the execution context that contains a
      running {\bf virtual machine}. The relationship between virtual
      machines and domains on Xen is similar to that between programs
      and processes in an operating system: a virtual machine is a
      persistent entity that resides on disk (somewhat like a
      program). When it is loaded for execution, it runs in a domain.
      Each domain has a {\bf domain ID}.

\item[Domain 0] The first domain to be started on a Xen machine.
      Domain 0 is responsible for managing the system.

\item[Domain ID] A unique identifier for a {\bf domain}, analogous to
      a process ID in an operating system.
\item[Full virtualisation] An approach to virtualisation which
      requires no modifications to the hosted operating system,
      providing the illusion of a complete system of real hardware
      devices.

\item[Hypervisor] An alternative term for {\bf VMM}, used because it
      means ``beyond supervisor'', since it is responsible for
      managing multiple ``supervisor'' kernels.

\item[Microkernel] A small base of code running at the highest
      hardware privilege level. A microkernel is responsible for
      sharing CPU and memory (and sometimes other devices) between
      less privileged tasks running on the system. This is similar to
      a VMM, particularly a {\bf paravirtualising} VMM, but typically
      addressing a different problem space and providing a different
      kind of interface.
\item[NetBSD/Xen] A port of NetBSD to the Xen architecture.

\item[Paravirtualisation] An approach to virtualisation which requires
      modifications to the operating system in order to run in a
      virtual machine. Xen uses paravirtualisation but preserves
      binary compatibility for user space applications.

\item[Virtual Machine] The environment in which a hosted operating
      system runs, providing the abstraction of a dedicated machine.
      A virtual machine may be identical to the underlying hardware
      (as in {\bf full virtualisation}), or it may differ (as in {\bf
      paravirtualisation}).

\item[VMM] Virtual Machine Monitor --- the software that allows
      multiple virtual machines to be multiplexed on a single physical
      machine.

\item[Xen] Xen is a paravirtualising virtual machine monitor,
      developed primarily by the Systems Research Group at the
      University of Cambridge Computer Laboratory.

\item[XenLinux] Official name for the port of the Linux kernel that
      runs on Xen.

\end{description}
\part{Advanced Topics}

XXX More to add here, including config file format

\chapter{Advanced Network Configuration}

For simple systems with a single ethernet interface with a simple
configuration, the default installation should work ``out of the
box''. More complicated network setups, for instance with multiple
ethernet interfaces and/or existing bridging setups, will require
some special configuration.

The purpose of this chapter is to describe the mechanisms provided by
Xend to allow a flexible configuration for Xen's virtual networking.
\section{Xen networking scripts}

Xen's virtual networking is configured by three shell scripts. These
are called automatically by Xend when certain events occur, with
arguments to the scripts providing further contextual information.
These scripts are found by default in \path{/etc/xen}. The names and
locations of the scripts can be configured in \path{xend-config.sxp}.
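
For instance, the relevant entries in \path{xend-config.sxp} might
look something like the following (a sketch --- check the exact key
names against the \path{xend-config.sxp} shipped with your version):

\begin{verbatim}
(network-script network)
(vif-script    vif-bridge)
\end{verbatim}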
\subsection{\path{network}}

This script is called once when Xend is started and once when Xend is
stopped. Its job is to do any advance preparation required for the
Xen virtual network when Xend starts and to do any corresponding
cleanup when Xend exits.

In the default configuration, this script creates the bridge
``xen-br0'' and moves eth0 onto that bridge, modifying the routing
accordingly.

In configurations where the bridge already exists, this script could
be replaced with a link to \path{/bin/true} (for instance).

When Xend exits, this script is called with the {\tt stop} argument,
which causes it to delete the Xen bridge and remove {\tt eth0} from
it, restoring the normal IP and routing configuration.
\subsection{\path{vif-bridge}}

This script is called for every domain virtual interface. It should
do things like configuring firewalling rules for that interface and
adding it to the appropriate bridge.

By default, this script adds and removes VIFs on the default Xen
bridge. It can be customised to deal properly with more complicated
bridging setups.
\chapter{Advanced Scheduling Configuration}

\section{Scheduler selection}

Xen offers a boot time choice between multiple schedulers. To select
a scheduler, pass the boot parameter {\tt sched=sched\_name} to Xen,
substituting the appropriate scheduler name. Details of the schedulers
and their parameters are included below; future versions of the tools
will provide a higher-level interface for configuring them.
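
For example, to select the Atropos scheduler, the Xen ({\tt kernel})
line from the GRUB entry shown earlier would gain a {\tt sched=}
parameter:

\begin{verbatim}
kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 sched=atropos
\end{verbatim}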
\section{Borrowed Virtual Time}

BVT provides proportional fair shares of the CPU time. It has been
observed to penalise domains that block frequently (e.g. IO intensive
domains), so the FBVT derivative has been included as an alternative.

\subsection{Global Parameters}

\begin{description}
\item[ctx\_allow]
      The context switch allowance is similar to the ``quantum'' in
      traditional schedulers. It is the minimum time that a scheduled
      domain will be allowed to run before being pre-empted. This
      prevents thrashing of the CPU.
\end{description}
\subsection{Per-domain parameters}

\begin{description}
\item[mcuadv]
      The MCU (Minimum Charging Unit) advance determines the
      proportional share of the CPU that a domain receives. It is set
      inversely proportionally to a domain's sharing weight.
\item[warp]
      The amount of ``virtual time'' the domain is allowed to warp
      backwards.
\item[warpl]
      The warp limit is the maximum time a domain can run warped for.
\item[warpu]
      The unwarp requirement is the minimum time a domain must run
      unwarped for before it can warp again.
\end{description}
\section{Fair Borrowed Virtual Time}

This is a derivative of BVT that aims to provide better fairness for
IO intensive domains as well as for CPU intensive domains.

\subsection{Global Parameters}

Same as for BVT.

\subsection{Per-domain parameters}

Same as for BVT.
\section{Atropos}

Atropos is a soft real time scheduler. It provides guarantees about
absolute shares of the CPU (with a method for optionally sharing out
slack CPU time on a best-effort basis) and can provide timeliness
guarantees for latency-sensitive domains.

\subsection{Per-domain parameters}

\begin{description}
\item[slice]
      The length of time per period that a domain is guaranteed.
\item[period]
      The period over which a domain is guaranteed to receive its
      slice of CPU time.
\item[latency]
      The latency hint is used to control how soon after waking up a
      domain should be scheduled.
\item[xtratime]
      This is a true (1) / false (0) flag that specifies whether a
      domain should be allowed a share of the system slack time.
\end{description}
\section{Round Robin}

The Round Robin scheduler is included as a simple demonstration of
Xen's internal scheduler API. It is not intended for production use
--- the other schedulers included are all more general and should give
higher throughput.

\subsection{Global parameters}

\begin{description}
\item[rr\_slice]
      The maximum time each domain runs before the next scheduling
      decision is made.
\end{description}
\chapter{Privileged domains}

There are two possible types of privileges: IO privileges and
administration privileges.

\section{Driver domains (IO Privileges)}

IO privileges can be assigned to allow a domain to drive PCI devices
itself. This is used to support driver domains.

Setting backend privileges is currently only supported in SXP format
config files (??? is this true - there's nothing in xmdefconfig,
anyhow). To allow a domain to function as a backend for others,
somewhere within the {\tt vm} element of its configuration file must
be a {\tt backend} element of the form {\tt (backend ({\em type}))}
where {\tt \em type} may be either {\tt netif} or {\tt blkif},
according to the type of virtual device this domain will service.
After this domain has been built, Xend will connect all new and
existing {\em virtual} devices (of the appropriate type) to that
backend.
Note that:
\begin{itemize}
\item a block backend cannot import virtual block devices from other
      domains
\item a network backend cannot import virtual network devices from
      other domains
\end{itemize}

Thus (particularly in the case of block backends, which cannot import
a virtual block device as their root filesystem), you may need to boot
a backend domain from a ramdisk or a network device.

The privilege to drive PCI devices may also be specified on a
per-device basis. Xen will assign the minimal set of hardware
privileges to a domain that are required to control its devices. This
can be configured in either format of configuration file:
\begin{itemize}
\item SXP Format:
      Include {\tt device} elements of the form \\
      {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z})))} \\
      inside the top-level {\tt vm} element. Each one specifies the
      address of a device this domain is allowed to drive --- the
      numbers {\em x}, {\em y} and {\em z} may be in either decimal or
      hexadecimal format.
\item Flat Format: Include a list of PCI device addresses of the
      format: \\ {\tt pci = ['x,y,z', ...]} \\ where each element in
      the list is a string specifying the components of the PCI device
      address, separated by commas. The components ({\tt \em x}, {\tt
      \em y} and {\tt \em z}) of the list may be formatted as either
      decimal or hexadecimal.
\end{itemize}
\section{Administration Domains}

Administration privileges allow a domain to use the ``dom0
operations'' (so called because they are usually available only to
domain 0). A privileged domain can build other domains, set scheduling
parameters, etc.

% Support for other administrative domains is not yet available...
\chapter{Xen build options}

For most users, the default build of Xen will be adequate. For some
advanced uses, Xen provides a number of build-time options.

At build time, these options should be set as environment variables or
passed on make's command-line. For example:

\begin{verbatim}
export option=y; make
option=y make
make option1=y option2=y
\end{verbatim}
\section{List of options}

{\bf debug=y }\\
Enable debug assertions and console output.
(Primarily useful for tracing bugs in Xen.) \\
{\bf debugger=y }\\
Enable the in-Xen pervasive debugger (PDB).
This can be used to debug Xen, guest OSes, and
applications. For more information see the
XenDebugger-HOWTO. \\
{\bf perfc=y }\\
Enable performance counters for significant events
within Xen. The counts can be reset or displayed
on Xen's console via console control keys. \\
{\bf trace=y }\\
Enable per-cpu trace buffers which log a range of
events within Xen for collection by control
software. For more information see the chapter on debugging,
in the Xen Interface Manual.
\chapter{Xen boot options}

These options are used to configure Xen's behaviour at runtime. They
should be appended to Xen's command line, either manually or by
editing \path{grub.conf}.
\section{List of options}

{\bf ignorebiostables }\\
Disable parsing of BIOS-supplied tables. This may help with some
chipsets that aren't fully supported by Xen. If you specify this
option then ACPI tables are also ignored, and SMP support is
disabled. \\

{\bf noreboot } \\
Don't reboot the machine automatically on errors. This is
useful to catch debug output if you aren't catching console messages
via the serial line. \\

{\bf nosmp } \\
Disable SMP support.
This option is implied by `ignorebiostables'. \\

{\bf noacpi } \\
Disable ACPI tables, which confuse Xen on some chipsets.
This option is implied by `ignorebiostables'. \\

{\bf watchdog } \\
Enable the NMI watchdog, which can report certain failures. \\

{\bf noht } \\
Disable HyperThreading. \\

{\bf badpage=$<$page number$>$[,$<$page number$>$] } \\
Specify a list of pages not to be allocated for use
because they contain bad bytes. For example, if your
memory tester says that byte 0x12345678 is bad, you would
place `badpage=0x12345' on Xen's command line (i.e., the
last three digits of the byte address are not
included!). \\

{\bf com1=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] \\
com2=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] } \\
Xen supports up to two 16550-compatible serial ports.
For example: `com1=9600,8n1,0x408,5' maps COM1 to a
9600-baud port, 8 data bits, no parity, 1 stop bit,
I/O port base 0x408, IRQ 5.
If the I/O base and IRQ are standard (com1:0x3f8,4;
com2:0x2f8,3) then they need not be specified. \\
{\bf console=$<$specifier list$>$ } \\
Specify the destination for Xen console I/O.
This is a comma-separated list of, for example:
\begin{description}
\item[vga] Use VGA console and allow keyboard input.
\item[com1] Use serial port com1.
\item[com2H] Use serial port com2. Transmitted chars will
      have the MSB set. Received chars must have
      the MSB set.
\item[com2L] Use serial port com2. Transmitted chars will
      have the MSB cleared. Received chars must
      have the MSB cleared.
\end{description}
The latter two examples allow a single port to be
shared by two subsystems (e.g. console and
debugger). Sharing is controlled by the MSB of each
transmitted/received character.
[NB. Default for this option is `com1,tty'] \\
{\bf conswitch=$<$switch-char$><$auto-switch-char$>$ } \\
Specify how to switch serial-console input between
Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
pressed three times. Specifying {\tt `} disables switching.
The $<$auto-switch-char$>$ specifies whether Xen should
auto-switch input to DOM0 when it boots --- if it is `x'
then auto-switching is disabled. Any other value, or
omitting the character, enables auto-switching.
[NB. Default for this option is `a'] \\

{\bf nmi=xxx } \\
Specify what to do with an NMI parity or I/O error. \\
`nmi=fatal': Xen prints a diagnostic and then hangs. \\
`nmi=dom0': Inform DOM0 of the NMI. \\
`nmi=ignore': Ignore the NMI. \\
{\bf dom0\_mem=xxx } \\
Set the maximum amount of memory for domain 0. \\

{\bf tbuf\_size=xxx } \\
Set the size of the per-cpu trace buffers, in pages
(default 1). Note that the trace buffers are only
enabled in debug builds. Most users can ignore
this feature completely. \\

{\bf sched=xxx } \\
Select the CPU scheduler Xen should use. The current
possibilities are `bvt', `atropos' and `rrobin'. The
default is `bvt'. For more information see
Sched-HOWTO.txt. \\

{\bf pci\_dom0\_hide=(xx.xx.x)(yy.yy.y)... } \\
Hide selected PCI devices from domain 0 (for instance, to stop it
taking ownership of them so that they can be driven by another
domain). Device IDs should be given in hex format. Bridge devices do
not need to be hidden --- they are hidden implicitly, since guest OSes
do not need to configure them.
\chapter{Further Support}

If you have questions that are not answered by this manual, the
sources of information listed below may be of interest to you. Note
that bug reports, suggestions and contributions related to the
software (or the documentation) should be sent to the Xen developers'
mailing list (address below).

\section{Other documentation}

For developers interested in porting operating systems to Xen, the
{\em Xen Interface Manual} is distributed in the \path{docs/}
directory of the Xen source distribution. Various HOWTOs are
available in \path{docs/HOWTOS}, but this content is being integrated
into this manual.
\section{Online references}

The official Xen web site is found at: \\
{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/}.

Links to other documentation sources are listed at: \\
{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/documentation.html}.
\section{Mailing lists}

There are currently two official Xen mailing lists:

\begin{description}
\item[xen-devel@lists.sourceforge.net] Used for development
      discussions and requests for help. Subscribe at: \\
      {\tt http://lists.sourceforge.net/mailman/listinfo/xen-devel}
\item[xen-announce@lists.sourceforge.net] Used for announcements only.
      Subscribe at: \\
      {\tt http://lists.sourceforge.net/mailman/listinfo/xen-announce}
\end{description}

Although there is no specific user support list, the developers try to
assist users who post on xen-devel. As the volume of traffic on this
list increases, a dedicated user support list may be introduced.

\end{document}