1 \documentclass[11pt,twoside,final,openright]{report}
2 \usepackage{a4,graphicx,html,parskip,setspace,times,xspace,url}
3 \setstretch{1.15}
5 \renewcommand{\ttdefault}{pcr}
7 \def\Xend{{Xend}\xspace}
8 \def\xend{{xend}\xspace}
10 \latexhtml{\renewcommand{\path}[1]{{\small {\tt #1}}}}{\renewcommand{\path}[1]{{\tt #1}}}
13 \begin{document}
16 \pagestyle{empty}
17 \begin{center}
18 \vspace*{\fill}
19 \includegraphics{figs/xenlogo.eps}
20 \vfill
21 \vfill
22 \vfill
23 \begin{tabular}{l}
24 {\Huge \bf Users' Manual} \\[4mm]
25 {\huge Xen v3.0} \\[80mm]
26 \end{tabular}
27 \end{center}
29 {\bf DISCLAIMER: This documentation is always under active development
30 and as such there may be mistakes and omissions --- watch out for
31 these and please report any you find to the developers' mailing list,
32 xen-devel@lists.xensource.com. The latest version is always available
33 on-line. Contributions of material, suggestions and corrections are
34 welcome.}
36 \vfill
37 \clearpage
41 \pagestyle{empty}
43 \vspace*{\fill}
45 Xen is Copyright \copyright 2002-2005, University of Cambridge, UK, XenSource
46 Inc., IBM Corp., Hewlett-Packard Co., Intel Corp., AMD Inc., and others. All
47 rights reserved.
49 Xen is an open-source project. Most portions of Xen are licensed for copying
50 under the terms of the GNU General Public License, version 2. Other portions
51 are licensed under the terms of the GNU Lesser General Public License, the
52 Zope Public License 2.0, or under ``BSD-style'' licenses. Please refer to the
53 COPYING file for details.
55 \cleardoublepage
59 \pagestyle{plain}
60 \pagenumbering{roman}
61 { \parskip 0pt plus 1pt
62 \tableofcontents }
63 \cleardoublepage
67 \pagenumbering{arabic}
68 \raggedbottom
69 \widowpenalty=10000
70 \clubpenalty=10000
71 \parindent=0pt
72 \parskip=5pt
73 \renewcommand{\topfraction}{.8}
74 \renewcommand{\bottomfraction}{.8}
75 \renewcommand{\textfraction}{.2}
76 \renewcommand{\floatpagefraction}{.8}
77 \setstretch{1.1}
80 %% Chapter Introduction moved to introduction.tex
81 \chapter{Introduction}
84 Xen is an open-source \emph{para-virtualizing} virtual machine monitor
85 (VMM), or ``hypervisor'', for the x86 processor architecture. Xen can
86 securely execute multiple virtual machines on a single physical system
87 with close-to-native performance. Xen facilitates enterprise-grade
88 functionality, including:
90 \begin{itemize}
91 \item Virtual machines with performance close to native hardware.
92 \item Live migration of running virtual machines between physical hosts.
93 \item Up to 32 virtual CPUs per guest virtual machine, with VCPU hotplug.
94 \item x86/32, x86/32 with PAE, and x86/64 platform support.
95 \item Intel Virtualization Technology (VT-x) for unmodified guest operating systems (including Microsoft Windows).
96 \item Excellent hardware support (supports almost all Linux device
97 drivers).
98 \end{itemize}
101 \section{Usage Scenarios}
103 Usage scenarios for Xen include:
105 \begin{description}
106 \item [Server Consolidation.] Move multiple servers onto a single
107 physical host with performance and fault isolation provided at the
108 virtual machine boundaries.
109 \item [Hardware Independence.] Allow legacy applications and operating
110 systems to exploit new hardware.
111 \item [Multiple OS configurations.] Run multiple operating systems
112 simultaneously, for development or testing purposes.
113 \item [Kernel Development.] Test and debug kernel modifications in a
114 sand-boxed virtual machine --- no need for a separate test machine.
115 \item [Cluster Computing.] Management at VM granularity provides more
116 flexibility than separately managing each physical host, but better
117 control and isolation than single-system image solutions,
118 particularly by using live migration for load balancing.
119 \item [Hardware support for custom OSes.] Allow development of new
120 OSes while benefiting from the wide-ranging hardware support of
121 existing OSes such as Linux.
122 \end{description}
125 \section{Operating System Support}
127 Para-virtualization permits very high performance virtualization, even
128 on architectures like x86 that are traditionally very hard to
129 virtualize.
131 This approach requires operating systems to be \emph{ported} to run on
132 Xen. Porting an OS to run on Xen is similar to supporting a new
hardware platform; however, the process is simplified because the
134 para-virtual machine architecture is very similar to the underlying
135 native hardware. Even though operating system kernels must explicitly
136 support Xen, a key feature is that user space applications and
137 libraries \emph{do not} require modification.
Hardware CPU virtualization, as provided by Intel VT and AMD Pacifica
technology, makes it possible to run an unmodified guest OS kernel.
No porting of the OS is required, although some additional driver
support is necessary within Xen itself. Unlike traditional full
virtualization hypervisors, which suffer a tremendous performance
overhead, the combination of Xen with VT or Pacifica technology offers
superb performance for para-virtualized guest operating systems
alongside full support for unmodified guests running natively on the
processor. Full support for VT and Pacifica chipsets will appear in
early 2006.
150 Paravirtualized Xen support is available for increasingly many
151 operating systems: currently, mature Linux support is available and
152 included in the standard distribution. Other OS ports---including
153 NetBSD, FreeBSD and Solaris x86 v10---are nearing completion.
156 \section{Hardware Support}
158 Xen currently runs on the x86 architecture, requiring a ``P6'' or
159 newer processor (e.g.\ Pentium Pro, Celeron, Pentium~II, Pentium~III,
160 Pentium~IV, Xeon, AMD~Athlon, AMD~Duron). Multiprocessor machines are
161 supported, and there is support for HyperThreading (SMT). In
162 addition, ports to IA64 and Power architectures are in progress.
The default 32-bit Xen supports up to 4GB of memory. However, Xen 3.0
adds support for Intel's Physical Address Extension (PAE), which
enables x86/32 machines to address up to 64GB of physical memory. Xen
3.0 also supports x86/64 platforms such as Intel EM64T and AMD
Opteron, which can currently address up to 1TB of physical memory.
170 Xen offloads most of the hardware support issues to the guest OS
171 running in the \emph{Domain~0} management virtual machine. Xen itself
172 contains only the code required to detect and start secondary
173 processors, set up interrupt routing, and perform PCI bus
174 enumeration. Device drivers run within a privileged guest OS rather
175 than within Xen itself. This approach provides compatibility with the
176 majority of device hardware supported by Linux. The default XenLinux
177 build contains support for most server-class network and disk
178 hardware, but you can add support for other hardware by configuring
179 your XenLinux kernel in the normal way.
182 \section{Structure of a Xen-Based System}
184 A Xen system has multiple layers, the lowest and most privileged of
185 which is Xen itself.
187 Xen may host multiple \emph{guest} operating systems, each of which is
executed within a secure virtual machine (in Xen terminology, a
\emph{domain}). Domains are scheduled by Xen to make effective use of the
190 available physical CPUs. Each guest OS manages its own applications.
191 This management includes the responsibility of scheduling each
192 application within the time allotted to the VM by Xen.
194 The first domain, \emph{domain~0}, is created automatically when the
195 system boots and has special management privileges. Domain~0 builds
196 other domains and manages their virtual devices. It also performs
197 administrative tasks such as suspending, resuming and migrating other
198 virtual machines.
200 Within domain~0, a process called \emph{xend} runs to manage the system.
201 \Xend\ is responsible for managing virtual machines and providing access
202 to their consoles. Commands are issued to \xend\ over an HTTP interface,
203 via a command-line tool.
206 \section{History}
208 Xen was originally developed by the Systems Research Group at the
209 University of Cambridge Computer Laboratory as part of the XenoServers
210 project, funded by the UK-EPSRC\@.
The XenoServers project aims to provide a ``public infrastructure for
global distributed computing''. Xen plays a key part in that, allowing
one to efficiently partition a single machine so that multiple
independent clients can run their operating systems and applications
in an environment that provides protection, resource isolation and
accounting. The project web page contains further information along
218 with pointers to papers and technical reports:
219 \path{http://www.cl.cam.ac.uk/xeno}
221 Xen has grown into a fully-fledged project in its own right, enabling us
222 to investigate interesting research issues regarding the best techniques
223 for virtualizing resources such as the CPU, memory, disk and network.
Project contributors now include XenSource, Intel, IBM, HP, AMD,
Novell, and Red Hat.
227 Xen was first described in a paper presented at SOSP in
228 2003\footnote{\tt
229 http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
230 public release (1.0) was made that October. Since then, Xen has
231 significantly matured and is now used in production scenarios on many
232 sites.
234 \section{What's New}
236 Xen 3.0.0 offers:
238 \begin{itemize}
239 \item Support for up to 32-way SMP guest operating systems
\item Intel Physical Address Extension (PAE) support for 32-bit
servers with more than 4GB of physical memory
242 \item x86/64 support (Intel EM64T, AMD Opteron)
243 \item Intel VT-x support to enable the running of unmodified guest
244 operating systems (Windows XP/2003, Legacy Linux)
245 \item Enhanced control tools
246 \item Improved ACPI support
247 \item AGP/DRM graphics
248 \end{itemize}
251 Xen 3.0 features greatly enhanced hardware support, configuration
252 flexibility, usability and a larger complement of supported operating
253 systems. This latest release takes Xen a step closer to being the
254 definitive open source solution for virtualization.
258 \part{Installation}
260 %% Chapter Basic Installation
261 \chapter{Basic Installation}
263 The Xen distribution includes three main components: Xen itself, ports
264 of Linux and NetBSD to run on Xen, and the userspace tools required to
265 manage a Xen-based system. This chapter describes how to install the
266 Xen~3.0 distribution from source. Alternatively, there may be pre-built
267 packages available as part of your operating system distribution.
270 \section{Prerequisites}
271 \label{sec:prerequisites}
273 The following is a full list of prerequisites. Items marked `$\dag$' are
274 required by the \xend\ control tools, and hence required if you want to
275 run more than one virtual machine; items marked `$*$' are only required
276 if you wish to build from source.
277 \begin{itemize}
278 \item A working Linux distribution using the GRUB bootloader and running
279 on a P6-class or newer CPU\@.
280 \item [$\dag$] The \path{iproute2} package.
281 \item [$\dag$] The Linux bridge-utils\footnote{Available from {\tt
282 http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl})
283 \item [$\dag$] The Linux hotplug system\footnote{Available from {\tt
284 http://linux-hotplug.sourceforge.net/}} (e.g.,
285 \path{/sbin/hotplug} and related scripts). On newer distributions,
286 this is included alongside the Linux udev system\footnote{See {\tt
287 http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html/}}.
288 \item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
289 \item [$*$] Development installation of zlib (e.g.,\ zlib-dev).
290 \item [$*$] Development installation of Python v2.2 or later (e.g.,\
291 python-dev).
292 \item [$*$] \LaTeX\ and transfig are required to build the
293 documentation.
294 \end{itemize}
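For example, on a Debian-style system the build prerequisites might be
satisfied with something like the following (a sketch only; package
names vary between distributions and releases, so consult your
distribution's package lists):
\begin{quote}
\begin{verbatim}
# apt-get install gcc binutils make zlib1g-dev python-dev \
      iproute bridge-utils hotplug
\end{verbatim}
\end{quote}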
Once you have satisfied these prerequisites, you can install either
297 a binary or source distribution of Xen.
299 \section{Installing from Binary Tarball}
301 Pre-built tarballs are available for download from the XenSource downloads
302 page:
303 \begin{quote} {\tt http://www.xensource.com/downloads/}
304 \end{quote}
306 Once you've downloaded the tarball, simply unpack and install:
307 \begin{verbatim}
308 # tar zxvf xen-3.0-install.tgz
309 # cd xen-3.0-install
310 # sh ./install.sh
311 \end{verbatim}
313 Once you've installed the binaries you need to configure your system as
314 described in Section~\ref{s:configure}.
316 \section{Installing from RPMs}
317 Pre-built RPMs are available for download from the XenSource downloads
318 page:
319 \begin{quote} {\tt http://www.xensource.com/downloads/}
320 \end{quote}
322 Once you've downloaded the RPMs, you typically install them via the
323 RPM commands:
325 \verb|# rpm -iv rpmname|
327 See the instructions and the Release Notes for each RPM set referenced at:
328 \begin{quote}
329 {\tt http://www.xensource.com/downloads/}.
330 \end{quote}
332 \section{Installing from Source}
334 This section describes how to obtain, build and install Xen from source.
336 \subsection{Obtaining the Source}
338 The Xen source tree is available as either a compressed source tarball
339 or as a clone of our master Mercurial repository.
341 \begin{description}
342 \item[Obtaining the Source Tarball]\mbox{} \\
343 Stable versions and daily snapshots of the Xen source tree are
344 available from the Xen download page:
\begin{quote} {\tt http://www.xensource.com/downloads/}
346 \end{quote}
347 \item[Obtaining the source via Mercurial]\mbox{} \\
348 The source tree may also be obtained via the public Mercurial
349 repository at:
350 \begin{quote}{\tt http://xenbits.xensource.com}
351 \end{quote} See the instructions and the Getting Started Guide
352 referenced at:
353 \begin{quote}
354 {\tt http://www.xensource.com/downloads/}
355 \end{quote}
356 \end{description}
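For example, a Mercurial checkout might look like the following (the
repository name here is illustrative; the download page lists the
current repository paths):
\begin{quote}
\begin{verbatim}
# hg clone http://xenbits.xensource.com/xen-unstable.hg
# cd xen-unstable.hg
\end{verbatim}
\end{quote}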
358 % \section{The distribution}
359 %
360 % The Xen source code repository is structured as follows:
361 %
362 % \begin{description}
363 % \item[\path{tools/}] Xen node controller daemon (Xend), command line
364 % tools, control libraries
365 % \item[\path{xen/}] The Xen VMM.
366 % \item[\path{buildconfigs/}] Build configuration files
367 % \item[\path{linux-*-xen-sparse/}] Xen support for Linux.
368 % \item[\path{patches/}] Experimental patches for Linux.
369 % \item[\path{docs/}] Various documentation files for users and
370 % developers.
371 % \item[\path{extras/}] Bonus extras.
372 % \end{description}
374 \subsection{Building from Source}
376 The top-level Xen Makefile includes a target ``world'' that will do the
377 following:
379 \begin{itemize}
380 \item Build Xen.
381 \item Build the control tools, including \xend.
382 \item Download (if necessary) and unpack the Linux 2.6 source code, and
383 patch it for use with Xen.
384 \item Build a Linux kernel to use in domain~0 and a smaller unprivileged
385 kernel, which can be used for unprivileged virtual machines.
386 \end{itemize}
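For example, from the top of the source tree (a sketch; a \texttt{-j}
flag can be added for a parallel build on multiprocessor machines):
\begin{quote}
\begin{verbatim}
# make world
\end{verbatim}
\end{quote}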
388 After the build has completed you should have a top-level directory
389 called \path{dist/} in which all resulting targets will be placed. Of
390 particular interest are the two XenLinux kernel images, one with a
391 ``-xen0'' extension which contains hardware device drivers and drivers
392 for Xen's virtual devices, and one with a ``-xenU'' extension that
393 just contains the virtual ones. These are found in
394 \path{dist/install/boot/} along with the image for Xen itself and the
395 configuration files used during the build.
397 %The NetBSD port can be built using:
398 %\begin{quote}
399 %\begin{verbatim}
400 %# make netbsd20
401 %\end{verbatim}\end{quote}
402 %NetBSD port is built using a snapshot of the netbsd-2-0 cvs branch.
403 %The snapshot is downloaded as part of the build process if it is not
404 %yet present in the \path{NETBSD\_SRC\_PATH} search path. The build
405 %process also downloads a toolchain which includes all of the tools
406 %necessary to build the NetBSD kernel under Linux.
408 To customize the set of kernels built you need to edit the top-level
409 Makefile. Look for the line:
410 \begin{quote}
411 \begin{verbatim}
412 KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
413 \end{verbatim}
414 \end{quote}
416 You can edit this line to include any set of operating system kernels
417 which have configurations in the top-level \path{buildconfigs/}
418 directory.
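For example, to build only the domain~0 kernel you might change the
line to read (each name must correspond to a configuration under
\path{buildconfigs/}):
\begin{quote}
\begin{verbatim}
KERNELS ?= linux-2.6-xen0
\end{verbatim}
\end{quote}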
420 %% Inspect the Makefile if you want to see what goes on during a
421 %% build. Building Xen and the tools is straightforward, but XenLinux
422 %% is more complicated. The makefile needs a `pristine' Linux kernel
423 %% tree to which it will then add the Xen architecture files. You can
424 %% tell the makefile the location of the appropriate Linux compressed
425 %% tar file by
426 %% setting the LINUX\_SRC environment variable, e.g. \\
427 %% \verb!# LINUX_SRC=/tmp/linux-2.6.11.tar.bz2 make world! \\ or by
428 %% placing the tar file somewhere in the search path of {\tt
429 %% LINUX\_SRC\_PATH} which defaults to `{\tt .:..}'. If the
430 %% makefile can't find a suitable kernel tar file it attempts to
431 %% download it from kernel.org (this won't work if you're behind a
432 %% firewall).
434 %% After untaring the pristine kernel tree, the makefile uses the {\tt
435 %% mkbuildtree} script to add the Xen patches to the kernel.
437 %% \framebox{\parbox{5in}{
438 %% {\bf Distro specific:} \\
439 %% {\it Gentoo} --- if not using udev (most installations,
440 %% currently), you'll need to enable devfs and devfs mount at boot
441 %% time in the xen0 config. }}
443 \subsection{Custom Kernels}
445 % If you have an SMP machine you may wish to give the {\tt '-j4'}
446 % argument to make to get a parallel build.
448 If you wish to build a customized XenLinux kernel (e.g.\ to support
449 additional devices or enable distribution-required features), you can
450 use the standard Linux configuration mechanisms, specifying that the
architecture being built for is \path{xen}, e.g.:
452 \begin{quote}
453 \begin{verbatim}
454 # cd linux-2.6.12-xen0
455 # make ARCH=xen xconfig
456 # cd ..
457 # make
458 \end{verbatim}
459 \end{quote}
461 You can also copy an existing Linux configuration (\path{.config}) into
462 e.g.\ \path{linux-2.6.12-xen0} and execute:
463 \begin{quote}
464 \begin{verbatim}
465 # make ARCH=xen oldconfig
466 \end{verbatim}
467 \end{quote}
469 You may be prompted with some Xen-specific options. We advise accepting
470 the defaults for these options.
472 Note that the only difference between the two types of Linux kernels
473 that are built is the configuration file used for each. The ``U''
474 suffixed (unprivileged) versions don't contain any of the physical
475 hardware device drivers, leading to a 30\% reduction in size; hence you
476 may prefer these for your non-privileged domains. The ``0'' suffixed
477 privileged versions can be used to boot the system, as well as in driver
478 domains and unprivileged domains.
480 \subsection{Installing Generated Binaries}
482 The files produced by the build process are stored under the
483 \path{dist/install/} directory. To install them in their default
484 locations, do:
485 \begin{quote}
486 \begin{verbatim}
487 # make install
488 \end{verbatim}
489 \end{quote}
491 Alternatively, users with special installation requirements may wish to
492 install them manually by copying the files to their appropriate
493 destinations.
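As a rough sketch, the \path{dist/install/} tree mirrors the layout of
the root filesystem, so a manual installation might look like the
following (adjust the paths and the set of directories to your
requirements):
\begin{quote}
\begin{verbatim}
# cp -a dist/install/boot/* /boot/
# cp -a dist/install/etc/*  /etc/
# cp -a dist/install/lib/*  /lib/
# cp -a dist/install/usr/*  /usr/
\end{verbatim}
\end{quote}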
495 %% Files in \path{install/boot/} include:
496 %% \begin{itemize}
497 %% \item \path{install/boot/xen-3.0.gz} Link to the Xen 'kernel'
498 %% \item \path{install/boot/vmlinuz-2.6-xen0} Link to domain 0
499 %% XenLinux kernel
500 %% \item \path{install/boot/vmlinuz-2.6-xenU} Link to unprivileged
501 %% XenLinux kernel
502 %% \end{itemize}
The \path{dist/install/boot} directory will also contain the config
files used for building the XenLinux kernels, as well as versions of
the Xen and XenLinux kernels that contain debug symbols
(\path{xen-syms-3.0.0} and \path{vmlinux-syms-}), which are
essential for interpreting crash dumps. Retain these files, as the
developers may wish to see them if you post on the mailing list.
512 \section{Configuration}
513 \label{s:configure}
515 Once you have built and installed the Xen distribution, it is simple to
516 prepare the machine for booting and running Xen.
518 \subsection{GRUB Configuration}
520 An entry should be added to \path{grub.conf} (often found under
521 \path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
522 This file is sometimes called \path{menu.lst}, depending on your
523 distribution. The entry should look something like the following:
525 %% KMSelf Thu Dec 1 19:06:13 PST 2005 262144 is useful for RHEL/RH and
526 %% related Dom0s.
527 {\small
528 \begin{verbatim}
529 title Xen 3.0 / XenLinux 2.6
530 kernel /boot/xen-3.0.gz dom0_mem=262144
531 module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0
532 \end{verbatim}
533 }
The kernel line tells GRUB where to find Xen itself and what boot
parameters should be passed to it (in this case, setting the domain~0
memory allocation in kilobytes). For more details on the various Xen
boot parameters see Section~\ref{s:xboot}.
541 The module line of the configuration describes the location of the
542 XenLinux kernel that Xen should start and the parameters that should be
543 passed to it. These are standard Linux parameters, identifying the root
device, specifying that it should initially be mounted read-only, and instructing
545 that console output be sent to the screen. Some distributions such as
546 SuSE do not require the \path{ro} parameter.
548 %% \framebox{\parbox{5in}{
549 %% {\bf Distro specific:} \\
550 %% {\it SuSE} --- Omit the {\tt ro} option from the XenLinux
551 %% kernel command line, since the partition won't be remounted rw
552 %% during boot. }}
554 To use an initrd, add another \path{module} line to the configuration,
555 like: {\small
556 \begin{verbatim}
557 module /boot/my_initrd.gz
558 \end{verbatim}
559 }
561 %% KMSelf Thu Dec 1 19:05:30 PST 2005 Other configs as an appendix?
563 When installing a new kernel, it is recommended that you do not delete
564 existing menu options from \path{menu.lst}, as you may wish to boot your
565 old Linux kernel in future, particularly if you have problems.
567 \subsection{Serial Console (optional)}
569 Serial console access allows you to manage, monitor, and interact with
570 your system over a serial console. This can allow access from another
571 nearby system via a null-modem (``LapLink'') cable or remotely via a serial
572 concentrator.
Your system's BIOS, bootloader (GRUB), Xen, Linux, and login access must
575 each be individually configured for serial console access. It is
576 \emph{not} strictly necessary to have each component fully functional,
577 but it can be quite useful.
579 For general information on serial console configuration under Linux,
580 refer to the ``Remote Serial Console HOWTO'' at The Linux Documentation
581 Project: \url{http://www.tldp.org}
583 \subsubsection{Serial Console BIOS configuration}
585 Enabling system serial console output neither enables nor disables
586 serial capabilities in GRUB, Xen, or Linux, but may make remote
587 management of your system more convenient by displaying POST and other
588 boot messages over serial port and allowing remote BIOS configuration.
590 Refer to your hardware vendor's documentation for capabilities and
591 procedures to enable BIOS serial redirection.
594 \subsubsection{Serial Console GRUB configuration}
596 Enabling GRUB serial console output neither enables nor disables Xen or
Linux serial capabilities, but may make remote management of your system
598 more convenient by displaying GRUB prompts, menus, and actions over
599 serial port and allowing remote GRUB management.
601 Adding the following two lines to your GRUB configuration file,
602 typically either \path{/boot/grub/menu.lst} or \path{/boot/grub/grub.conf}
603 depending on your distro, will enable GRUB serial output.
605 \begin{quote}
606 {\small \begin{verbatim}
607 serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
608 terminal --timeout=10 serial console
609 \end{verbatim}}
610 \end{quote}
612 Note that when both the serial port and the local monitor and keyboard
613 are enabled, the text ``\emph{Press any key to continue}'' will appear
614 at both. Pressing a key on one device will cause GRUB to display to
615 that device. The other device will see no output. If no key is
616 pressed before the timeout period expires, the system will boot to the
617 default GRUB boot entry.
619 Please refer to the GRUB documentation for further information.
622 \subsubsection{Serial Console Xen configuration}
624 Enabling Xen serial console output neither enables nor disables Linux
625 kernel output or logging in to Linux over serial port. It does however
626 allow you to monitor and log the Xen boot process via serial console and
627 can be very useful in debugging.
629 %% kernel /boot/xen-2.0.gz dom0_mem=131072 com1=115200,8n1
630 %% module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro
632 In order to configure Xen serial console output, it is necessary to
633 add a boot option to your GRUB config; e.g.\ replace the previous
634 example kernel line with:
635 \begin{quote} {\small \begin{verbatim}
636 kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
637 \end{verbatim}}
638 \end{quote}
640 This configures Xen to output on COM1 at 115,200 baud, 8 data bits, 1
641 stop bit and no parity. Modify these parameters for your environment.
643 One can also configure XenLinux to share the serial console; to achieve
644 this append ``\path{console=ttyS0}'' to your module line.
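Putting these settings together, a complete GRUB entry with Xen and
XenLinux sharing the serial console might look like the following (a
sketch; adjust the paths, memory size and root device for your system):
\begin{quote}
{\small \begin{verbatim}
title Xen 3.0 / XenLinux 2.6 (serial console)
   kernel /boot/xen-3.0.gz dom0_mem=262144 com1=115200,8n1
   module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0
\end{verbatim}}
\end{quote}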
647 \subsubsection{Serial Console Linux configuration}
649 Enabling Linux serial console output at boot neither enables nor
650 disables logging in to Linux over serial port. It does however allow
651 you to monitor and log the Linux boot process via serial console and can be
652 very useful in debugging.
654 To enable Linux output at boot time, add the parameter
655 \path{console=ttyS0} (or ttyS1, ttyS2, etc.) to your kernel GRUB line.
656 Under Xen, this might be:
657 \begin{quote}
658 {\footnotesize \begin{verbatim}
659 module /vmlinuz-2.6-xen0 ro root=/dev/VolGroup00/LogVol00 \
    console=ttyS0,115200
661 \end{verbatim}}
662 \end{quote}
663 to enable output over ttyS0 at 115200 baud.
667 \subsubsection{Serial Console Login configuration}
669 Logging in to Linux via serial console, under Xen or otherwise, requires
670 specifying a login prompt be started on the serial port. To permit root
671 logins over serial console, the serial port must be added to
672 \path{/etc/securetty}.
674 \newpage
675 To automatically start a login prompt over the serial port,
676 add the line: \begin{quote} {\small {\tt c:2345:respawn:/sbin/mingetty
677 ttyS0}} \end{quote} to \path{/etc/inittab}. Run \path{init q} to force
a reload of your inittab and start getty.
680 To enable root logins, add \path{ttyS0} to \path{/etc/securetty} if not
681 already present.
683 Your distribution may use an alternate getty; options include getty,
684 mgetty and agetty. Consult your distribution's documentation
685 for further information.
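Putting these steps together, a minimal sketch might look like the
following (assuming \path{mingetty} is installed as
\path{/sbin/mingetty}; substitute your distribution's getty):
\begin{quote}
{\small \begin{verbatim}
# echo "c:2345:respawn:/sbin/mingetty ttyS0" >> /etc/inittab
# init q
# echo "ttyS0" >> /etc/securetty
\end{verbatim}}
\end{quote}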
688 \subsection{TLS Libraries}
690 Users of the XenLinux 2.6 kernel should disable Thread Local Storage
691 (TLS) (e.g.\ by doing a \path{mv /lib/tls /lib/tls.disabled}) before
692 attempting to boot a XenLinux kernel\footnote{If you boot without first
693 disabling TLS, you will get a warning message during the boot process.
694 In this case, simply perform the rename after the machine is up and
695 then run \path{/sbin/ldconfig} to make it take effect.}. You can
696 always reenable TLS by restoring the directory to its original location
697 (i.e.\ \path{mv /lib/tls.disabled /lib/tls}).
699 The reason for this is that the current TLS implementation uses
700 segmentation in a way that is not permissible under Xen. If TLS is not
701 disabled, an emulation mode is used within Xen which reduces performance
702 substantially. To ensure full performance you should install a
703 `Xen-friendly' (nosegneg) version of the library.
706 \section{Booting Xen}
708 It should now be possible to restart the system and use Xen. Reboot and
709 choose the new Xen option when the Grub screen appears.
711 What follows should look much like a conventional Linux boot. The first
712 portion of the output comes from Xen itself, supplying low level
713 information about itself and the underlying hardware. The last portion
714 of the output comes from XenLinux.
716 You may see some error messages during the XenLinux boot. These are not
717 necessarily anything to worry about---they may result from kernel
718 configuration differences between your XenLinux kernel and the one you
719 usually use.
721 When the boot completes, you should be able to log into your system as
722 usual. If you are unable to log in, you should still be able to reboot
723 with your normal Linux kernel by selecting it at the GRUB prompt.
726 % Booting Xen
727 \chapter{Booting a Xen System}
729 Booting the system into Xen will bring you up into the privileged
730 management domain, Domain0. At that point you are ready to create
731 guest domains and ``boot'' them using the \texttt{xm create} command.
733 \section{Booting Domain0}
735 After installation and configuration is complete, reboot the system
and choose the new Xen option when the Grub screen appears.
738 What follows should look much like a conventional Linux boot. The
739 first portion of the output comes from Xen itself, supplying low level
740 information about itself and the underlying hardware. The last
741 portion of the output comes from XenLinux.
743 %% KMSelf Wed Nov 30 18:09:37 PST 2005: We should specify what these are.
745 When the boot completes, you should be able to log into your system as
746 usual. If you are unable to log in, you should still be able to
747 reboot with your normal Linux kernel by selecting it at the GRUB prompt.
749 The first step in creating a new domain is to prepare a root
750 filesystem for it to boot. Typically, this might be stored in a normal
751 partition, an LVM or other volume manager partition, a disk file or on
an NFS server. A simple way to do this is to boot from your
753 standard OS install CD and install the distribution into another
754 partition on your hard drive.
756 To start the \xend\ control daemon, type
757 \begin{quote}
758 \verb!# xend start!
759 \end{quote}
761 If you wish the daemon to start automatically, see the instructions in
762 Section~\ref{s:xend}. Once the daemon is running, you can use the
763 \path{xm} tool to monitor and maintain the domains running on your
764 system. This chapter provides only a brief tutorial. We provide full
765 details of the \path{xm} tool in the next chapter.
767 % \section{From the web interface}
768 %
769 % Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv}
770 % for more details) using the command: \\
771 % \verb_# xensv start_ \\
772 % This will also start Xend (see Chapter~\ref{cha:xend} for more
773 % information).
774 %
775 % The domain management interface will then be available at {\tt
776 % http://your\_machine:8080/}. This provides a user friendly wizard
777 % for starting domains and functions for managing running domains.
778 %
779 % \section{From the command line}
780 \section{Booting Guest Domains}
782 \subsection{Creating a Domain Configuration File}
784 Before you can start an additional domain, you must create a
785 configuration file. We provide two example files which you can use as
786 a starting point:
787 \begin{itemize}
788 \item \path{/etc/xen/xmexample1} is a simple template configuration
789 file for describing a single VM\@.
\item \path{/etc/xen/xmexample2} is a template description that
791 is intended to be reused for multiple virtual machines. Setting the
792 value of the \path{vmid} variable on the \path{xm} command line
793 fills in parts of this template.
794 \end{itemize}
796 There are also a number of other examples which you may find useful.
797 Copy one of these files and edit it as appropriate. Typical values
798 you may wish to edit include:
800 \begin{quote}
801 \begin{description}
802 \item[kernel] Set this to the path of the kernel you compiled for use
803 with Xen (e.g.\ \path{kernel = ``/boot/vmlinuz-2.6-xenU''})
804 \item[memory] Set this to the size of the domain's memory in megabytes
805 (e.g.\ \path{memory = 64})
806 \item[disk] Set the first entry in this list to calculate the offset
807 of the domain's root partition, based on the domain ID\@. Set the
808 second to the location of \path{/usr} if you are sharing it between
809 domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
810 (base\_partition\_number + vmid),
811 'phy:your\_usr\_partition,sda6,r' ]}
812 \item[dhcp] Uncomment the dhcp variable, so that the domain will
813 receive its IP address from a DHCP server (e.g.\ \path{dhcp=``dhcp''})
814 \end{description}
815 \end{quote}
817 You may also want to edit the {\bf vif} variable in order to choose
818 the MAC address of the virtual ethernet interface yourself. For
819 example:
821 \begin{quote}
822 \verb_vif = ['mac=00:16:3E:F6:BB:B3']_
823 \end{quote}
824 If you do not set this variable, \xend\ will automatically generate a
825 random MAC address from the range 00:16:3E:xx:xx:xx, assigned by IEEE to
826 XenSource as an OUI (organizationally unique identifier). XenSource
Inc. gives permission for anyone to use addresses randomly allocated
from this range for their Xen domains.
830 For a list of IEEE OUI assignments, see
831 \url{http://standards.ieee.org/regauth/oui/oui.txt}
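Putting the values discussed above together, a minimal configuration
file might look like the following (a sketch only; the paths, sizes,
partition names and MAC address are illustrative and should be adapted
to your system):
\begin{quote}
\begin{verbatim}
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 64
name   = "ExampleDomain"
disk   = ['phy:hda3,sda1,w']
root   = "/dev/sda1 ro"
vif    = ['mac=00:16:3E:F6:BB:B3']
dhcp   = "dhcp"
\end{verbatim}
\end{quote}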
834 \subsection{Booting the Guest Domain}
836 The \path{xm} tool provides a variety of commands for managing
837 domains. Use the \path{create} command to start new domains. Assuming
838 you've created a configuration file \path{myvmconf} based around
839 \path{/etc/xen/xmexample2}, to start a domain with virtual machine
840 ID~1 you should type:
842 \begin{quote}
843 \begin{verbatim}
844 # xm create -c myvmconf vmid=1
845 \end{verbatim}
846 \end{quote}
848 The \path{-c} switch causes \path{xm} to turn into the domain's
849 console after creation. The \path{vmid=1} sets the \path{vmid}
850 variable used in the \path{myvmconf} file.
852 You should see the console boot messages from the new domain appearing
853 in the terminal in which you typed the command, culminating in a login
854 prompt.
857 \section{Starting / Stopping Domains Automatically}
859 It is possible to have certain domains start automatically at boot
time and to have dom0 wait for all running domains to shut down before
861 it shuts down the system.
To specify that a domain should start at boot time, place its
configuration file (or a link to it) under \path{/etc/xen/auto/}.
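For example, assuming a configuration file \path{/etc/xen/myvmconf} (a
hypothetical name), you might use:
\begin{quote}
\begin{verbatim}
# ln -s /etc/xen/myvmconf /etc/xen/auto/myvmconf
\end{verbatim}
\end{quote}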
866 A Sys-V style init script for Red Hat and LSB-compliant systems is
867 provided and will be automatically copied to \path{/etc/init.d/}
868 during install. You can then enable it in the appropriate way for
869 your distribution.
871 For instance, on Red Hat:
873 \begin{quote}
874 \verb_# chkconfig --add xendomains_
875 \end{quote}
877 By default, this will start the boot-time domains in runlevels 3, 4
878 and 5.
880 You can also use the \path{service} command to run this script
881 manually, e.g:
883 \begin{quote}
884 \verb_# service xendomains start_
886 Starts all the domains with config files under /etc/xen/auto/.
887 \end{quote}
889 \begin{quote}
890 \verb_# service xendomains stop_
892 Shuts down all running Xen domains.
893 \end{quote}
897 \part{Configuration and Management}
899 %% Chapter Domain Management Tools and Daemons
900 \chapter{Domain Management Tools}
902 This chapter summarizes the management software and tools available.
905 \section{\Xend\ }
906 \label{s:xend}
909 The \Xend\ node control daemon performs system management functions
910 related to virtual machines. It forms a central point of control of
911 virtualized resources, and must be running in order to start and manage
912 virtual machines. \Xend\ must be run as root because it needs access to
913 privileged system management functions.
An initialization script named \texttt{/etc/init.d/xend} is provided to
start \Xend\ at boot time. Use the appropriate tool for your Linux
distribution (e.g.\ \texttt{chkconfig}) to specify the runlevels at which
this script should be executed, or manually create symbolic links in the
correct runlevel directories.
921 \Xend\ can be started on the command line as well, and supports the
922 following set of parameters:
924 \begin{tabular}{ll}
925 \verb!# xend start! & start \xend, if not already running \\
926 \verb!# xend stop! & stop \xend\ if already running \\
927 \verb!# xend restart! & restart \xend\ if running, otherwise start it \\
928 % \verb!# xend trace_start! & start \xend, with very detailed debug logging \\
929 \verb!# xend status! & indicates \xend\ status by its return code
930 \end{tabular}
932 A SysV init script called {\tt xend} is provided to start \xend\ at
933 boot time. {\tt make install} installs this script in
934 \path{/etc/init.d}. To enable it, you have to make symbolic links in
935 the appropriate runlevel directories or use the {\tt chkconfig} tool,
936 where available. Once \xend\ is running, administration can be done
937 using the \texttt{xm} tool.
939 \subsection{Logging}
941 As \xend\ runs, events will be logged to \path{/var/log/xend.log} and
942 (less frequently) to \path{/var/log/xend-debug.log}. These, along with
943 the standard syslog files, are useful when troubleshooting problems.
945 \subsection{Configuring \Xend\ }
947 \Xend\ is written in Python. At startup, it reads its configuration
948 information from the file \path{/etc/xen/xend-config.sxp}. The Xen
949 installation places an example \texttt{xend-config.sxp} file in the
950 \texttt{/etc/xen} subdirectory which should work for most installations.
952 See the example configuration file \texttt{xend-debug.sxp} and the
953 section 5 man page \texttt{xend-config.sxp} for a full list of
954 parameters and more detailed information. Some of the most important
955 parameters are discussed below.
957 An HTTP interface and a Unix domain socket API are available to
958 communicate with \Xend. This allows remote users to pass commands to the
959 daemon. By default, \Xend does not start an HTTP server. It does start a
960 Unix domain socket management server, as the low level utility
961 \texttt{xm} requires it. For support of cross-machine migration, \Xend\
962 can start a relocation server. This support is not enabled by default
963 for security reasons.
965 Note: the example \texttt{xend} configuration file modifies the defaults and
966 starts up \Xend\ as an HTTP server as well as a relocation server.
968 From the file:
970 \begin{verbatim}
971 #(xend-http-server no)
972 (xend-http-server yes)
973 #(xend-unix-server yes)
974 #(xend-relocation-server no)
975 (xend-relocation-server yes)
976 \end{verbatim}
978 Comment or uncomment lines in that file to disable or enable features
979 that you require.
981 Connections from remote hosts are disabled by default:
983 \begin{verbatim}
984 # Address xend should listen on for HTTP connections, if xend-http-server is
985 # set.
986 # Specifying 'localhost' prevents remote connections.
987 # Specifying the empty string '' (the default) allows all connections.
988 #(xend-address '')
989 (xend-address localhost)
990 \end{verbatim}
992 It is recommended that if migration support is not needed, the
993 \texttt{xend-relocation-server} parameter value be changed to
994 ``\texttt{no}'' or commented out.
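For example, to run without migration support you might change the
relevant line in \path{/etc/xen/xend-config.sxp} as follows (a sketch;
the exact set of lines in your file may differ):
\begin{quote}
\begin{verbatim}
(xend-relocation-server no)
\end{verbatim}
\end{quote}
and then run \verb!# xend restart! to pick up the change.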
996 \section{Xm}
997 \label{s:xm}
999 The xm tool is the primary tool for managing Xen from the console. The
1000 general format of an xm command line is:
1002 \begin{verbatim}
1003 # xm command [switches] [arguments] [variables]
1004 \end{verbatim}
1006 The available \emph{switches} and \emph{arguments} are dependent on the
1007 \emph{command} chosen. The \emph{variables} may be set using
1008 declarations of the form {\tt variable=value} and command line
1009 declarations override any of the values in the configuration file being
1010 used, including the standard variables described above and any custom
1011 variables (for instance, the \path{xmdefconfig} file uses a {\tt vmid}
1012 variable).
1014 For online help for the commands available, type:
1016 \begin{quote}
1017 \begin{verbatim}
1018 # xm help
1019 \end{verbatim}
1020 \end{quote}
1022 This will list the most commonly used commands. The full list can be obtained
1023 using \verb_xm help --long_. You can also type \path{xm help $<$command$>$}
1024 for more information on a given command.
1026 \subsection{Basic Management Commands}
1028 One useful command is \verb_# xm list_ which lists all domains running in rows
1029 of the following format:
1030 \begin{center} {\tt name domid memory vcpus state cputime}
1031 \end{center}
1033 The meaning of each field is as follows:
1034 \begin{quote}
1035 \begin{description}
1036 \item[name] The descriptive name of the virtual machine.
1037 \item[domid] The number of the domain ID this virtual machine is
1038 running in.
1039 \item[memory] Memory size in megabytes.
1040 \item[vcpus] The number of virtual CPUs this domain has.
1041 \item[state] Domain state consists of 5 fields:
1042 \begin{description}
1043 \item[r] running
1044 \item[b] blocked
1045 \item[p] paused
1046 \item[s] shutdown
1047 \item[c] crashed
1048 \end{description}
1049 \item[cputime] How much CPU time (in seconds) the domain has used so
1050 far.
1051 \end{description}
1052 \end{quote}
1054 The \path{xm list} command also supports a long output format when the
1055 \path{-l} switch is used. This outputs the full details of the
1056 running domains in \xend's SXP configuration format.
1059 You can get access to the console of a particular domain using
1060 the \verb_# xm console_ command (e.g.\ \verb_# xm console myVM_).
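For example, a short management session might look like the following
(\path{myVM} is a hypothetical domain name):
\begin{quote}
\begin{verbatim}
# xm list
# xm console myVM
# xm shutdown myVM
\end{verbatim}
\end{quote}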
1064 %% Chapter Domain Configuration
1065 \chapter{Domain Configuration}
1066 \label{cha:config}
The following contains the syntax of the domain configuration files
and a description of how to further specify networking, driver domains,
and general scheduling behavior.
1073 \section{Configuration Files}
1074 \label{s:cfiles}
1076 Xen configuration files contain the following standard variables.
1077 Unless otherwise stated, configuration items should be enclosed in
1078 quotes: see the configuration scripts in \path{/etc/xen/}
1079 for concrete examples.
1081 \begin{description}
1082 \item[kernel] Path to the kernel image.
1083 \item[ramdisk] Path to a ramdisk image (optional).
1084 % \item[builder] The name of the domain build function (e.g.
1085 % {\tt'linux'} or {\tt'netbsd'}.
1086 \item[memory] Memory size in megabytes.
1087 \item[vcpus] The number of virtual CPUs.
1088 \item[console] Port to export the domain console on (default 9600 +
1089 domain ID).
1090 \item[nics] Number of virtual network interfaces.
1091 \item[vif] List of MAC addresses (random addresses are assigned if not
1092 given) and bridges to use for the domain's network interfaces, e.g.\
1093 \begin{verbatim}
1094 vif = [ 'mac=00:16:3E:00:00:11, bridge=xen-br0',
1095 'bridge=xen-br1' ]
1096 \end{verbatim}
1097 to assign a MAC address and bridge to the first interface and assign
1098 a different bridge to the second interface, leaving \xend\ to choose
1099 the MAC address.
1100 \item[disk] List of block devices to export to the domain e.g.
1101 \verb_disk = [ 'phy:hda1,sda1,r' ]_
1102 exports physical device \path{/dev/hda1} to the domain as
1103 \path{/dev/sda1} with read-only access. Exporting a disk read-write
1104 which is currently mounted is dangerous -- if you are \emph{certain}
1105 you wish to do this, you can specify \path{w!} as the mode.
1106 \item[dhcp] Set to {\tt `dhcp'} if you want to use DHCP to configure
1107 networking.
1108 \item[netmask] Manually configured IP netmask.
1109 \item[gateway] Manually configured IP gateway.
1110 \item[hostname] Set the hostname for the virtual machine.
1111 \item[root] Specify the root device parameter on the kernel command
1112 line.
1113 \item[nfs\_server] IP address for the NFS server (if any).
1114 \item[nfs\_root] Path of the root filesystem on the NFS server (if
1115 any).
1116 \item[extra] Extra string to append to the kernel command line (if
1117 any)
1118 \end{description}
1120 Additional fields are documented in the example configuration files
1121 (e.g. to configure virtual TPM functionality).
1123 For additional flexibility, it is also possible to include Python
1124 scripting commands in configuration files. An example of this is the
1125 \path{xmexample2} file, which uses Python code to handle the
1126 \path{vmid} variable.
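As a minimal sketch in the spirit of \path{xmexample2} (not a copy of
that file), a configuration might derive per-domain values from a
\path{vmid} variable passed on the \path{xm} command line:
\begin{quote}
\begin{verbatim}
vmid = int(vmid)               # set with 'xm create ... vmid=N'
name = "VM%d" % vmid           # give each domain a distinct name
disk = ['phy:sda%d,sda1,w' % (7 + vmid)]   # illustrative partition
\end{verbatim}
\end{quote}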
1129 %\part{Advanced Topics}
1132 \section{Network Configuration}
1134 For many users, the default installation should work ``out of the
1135 box''. More complicated network setups, for instance with multiple
Ethernet interfaces and/or existing bridging setups, will require some
special configuration.
1139 The purpose of this section is to describe the mechanisms provided by
1140 \xend\ to allow a flexible configuration for Xen's virtual networking.
1142 \subsection{Xen virtual network topology}
1144 Each domain network interface is connected to a virtual network
1145 interface in dom0 by a point to point link (effectively a ``virtual
1146 crossover cable''). These devices are named {\tt
1147 vif$<$domid$>$.$<$vifid$>$} (e.g.\ {\tt vif1.0} for the first
1148 interface in domain~1, {\tt vif3.1} for the second interface in
1149 domain~3).
1151 Traffic on these virtual interfaces is handled in domain~0 using
1152 standard Linux mechanisms for bridging, routing, rate limiting, etc.
1153 Xend calls on two shell scripts to perform initial configuration of
1154 the network and configuration of new virtual interfaces. By default,
1155 these scripts configure a single bridge for all the virtual
1156 interfaces. Arbitrary routing / bridging configurations can be
1157 configured by customizing the scripts, as described in the following
1158 section.
1160 \subsection{Xen networking scripts}
1162 Xen's virtual networking is configured by two shell scripts (by
1163 default \path{network-bridge} and \path{vif-bridge}). These are called
1164 automatically by \xend\ when certain events occur, with arguments to
1165 the scripts providing further contextual information. These scripts
1166 are found by default in \path{/etc/xen/scripts}. The names and
1167 locations of the scripts can be configured in
1168 \path{/etc/xen/xend-config.sxp}.
1170 \begin{description}
1171 \item[network-bridge:] This script is called whenever \xend\ is started or
1172 stopped to respectively initialize or tear down the Xen virtual
1173 network. In the default configuration initialization creates the
1174 bridge `xen-br0' and moves eth0 onto that bridge, modifying the
1175 routing accordingly. When \xend\ exits, it deletes the Xen bridge
1176 and removes eth0, restoring the normal IP and routing configuration.
1178 %% In configurations where the bridge already exists, this script
1179 %% could be replaced with a link to \path{/bin/true} (for instance).
1181 \item[vif-bridge:] This script is called for every domain virtual
1182 interface and can configure firewalling rules and add the vif to the
1183 appropriate bridge. By default, this adds and removes VIFs on the
1184 default Xen bridge.
1185 \end{description}
1187 Other example scripts are available (\path{network-route} and
1188 \path{vif-route}, \path{network-nat} and \path{vif-nat}).
For more complex network setups (e.g.\ where routing is required or
integration with existing bridges is needed) these scripts may be
replaced with customized variants for your site's preferred
configuration.
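For example, to select the routed rather than bridged scripts you
would point \xend\ at them in \path{/etc/xen/xend-config.sxp} (a
sketch; compare with the commented examples already present in that
file):
\begin{quote}
\begin{verbatim}
(network-script network-route)
(vif-script     vif-route)
\end{verbatim}
\end{quote}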
1193 %% There are two possible types of privileges: IO privileges and
1194 %% administration privileges.
1199 % Chapter Storage and FileSytem Management
1200 \chapter{Storage and File System Management}
1202 Storage can be made available to virtual machines in a number of
1203 different ways. This chapter covers some possible configurations.
1205 The most straightforward method is to export a physical block device (a
1206 hard drive or partition) from dom0 directly to the guest domain as a
1207 virtual block device (VBD).
1209 Storage may also be exported from a filesystem image or a partitioned
1210 filesystem image as a \emph{file-backed VBD}.
1212 Finally, standard network storage protocols such as NBD, iSCSI, NFS,
1213 etc., can be used to provide storage to virtual machines.
1216 \section{Exporting Physical Devices as VBDs}
1217 \label{s:exporting-physical-devices-as-vbds}
1219 One of the simplest configurations is to directly export individual
1220 partitions from domain~0 to other domains. To achieve this use the
1221 \path{phy:} specifier in your domain configuration file. For example a
1222 line like
1223 \begin{quote}
1224 \verb_disk = ['phy:hda3,sda1,w']_
1225 \end{quote}
1226 specifies that the partition \path{/dev/hda3} in domain~0 should be
1227 exported read-write to the new domain as \path{/dev/sda1}; one could
1228 equally well export it as \path{/dev/hda} or \path{/dev/sdb5} should
1229 one wish.
1231 In addition to local disks and partitions, it is possible to export
1232 any device that Linux considers to be ``a disk'' in the same manner.
1233 For example, if you have iSCSI disks or GNBD volumes imported into
1234 domain~0 you can export these to other domains using the \path{phy:}
1235 disk syntax. E.g.:
1236 \begin{quote}
1237 \verb_disk = ['phy:vg/lvm1,sda2,w']_
1238 \end{quote}
1240 \begin{center}
1241 \framebox{\bf Warning: Block device sharing}
1242 \end{center}
1243 \begin{quote}
1244 Block devices should typically only be shared between domains in a
1245 read-only fashion otherwise the Linux kernel's file systems will get
1246 very confused as the file system structure may change underneath
1247 them (having the same ext3 partition mounted \path{rw} twice is a
1248 sure fire way to cause irreparable damage)! \Xend\ will attempt to
1249 prevent you from doing this by checking that the device is not
1250 mounted read-write in domain~0, and hasn't already been exported
1251 read-write to another domain. If you want read-write sharing,
1252 export the directory to other domains via NFS from domain~0 (or use
1253 a cluster file system such as GFS or ocfs2).
1254 \end{quote}
1257 \section{Using File-backed VBDs}
1259 It is also possible to use a file in Domain~0 as the primary storage
1260 for a virtual machine. As well as being convenient, this also has the
1261 advantage that the virtual block device will be \emph{sparse} ---
1262 space will only really be allocated as parts of the file are used. So
1263 if a virtual machine uses only half of its disk space then the file
1264 really takes up half of the size allocated.
1266 For example, to create a 2GB sparse file-backed virtual block device
1267 (actually only consumes 1KB of disk):
1268 \begin{quote}
1269 \verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
1270 \end{quote}
1272 Make a file system in the disk file:
1273 \begin{quote}
1274 \verb_# mkfs -t ext3 vm1disk_
1275 \end{quote}
1277 (when the tool asks for confirmation, answer `y')
1279 Populate the file system e.g.\ by copying from the current root:
1280 \begin{quote}
1281 \begin{verbatim}
1282 # mount -o loop vm1disk /mnt
1283 # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
1284 # mkdir /mnt/{proc,sys,home,tmp}
1285 \end{verbatim}
1286 \end{quote}
1288 Tailor the file system by editing \path{/etc/fstab},
1289 \path{/etc/hostname}, etc.\ Don't forget to edit the files in the
1290 mounted file system, instead of your domain~0 filesystem, e.g.\ you
1291 would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}. For
this example, use \path{/dev/sda1} as the root device in fstab.
1294 Now unmount (this is important!):
1295 \begin{quote}
1296 \verb_# umount /mnt_
1297 \end{quote}
1299 In the configuration file set:
1300 \begin{quote}
1301 \verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
1302 \end{quote}
1304 As the virtual machine writes to its `disk', the sparse file will be
1305 filled in and consume more space up to the original 2GB.
1307 {\bf Note that file-backed VBDs may not be appropriate for backing
1308 I/O-intensive domains.} File-backed VBDs are known to experience
1309 substantial slowdowns under heavy I/O workloads, due to the I/O
1310 handling by the loopback block device used to support file-backed VBDs
1311 in dom0. Better I/O performance can be achieved by using either
1312 LVM-backed VBDs (Section~\ref{s:using-lvm-backed-vbds}) or physical
1313 devices as VBDs (Section~\ref{s:exporting-physical-devices-as-vbds}).
1315 Linux supports a maximum of eight file-backed VBDs across all domains
1316 by default. This limit can be statically increased by using the
1317 \emph{max\_loop} module parameter if CONFIG\_BLK\_DEV\_LOOP is
1318 compiled as a module in the dom0 kernel, or by using the
1319 \emph{max\_loop=n} boot option if CONFIG\_BLK\_DEV\_LOOP is compiled
1320 directly into the dom0 kernel.
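For example (a sketch; the file used for module options varies by
distribution):
\begin{quote}
\begin{verbatim}
# loop driver built as a module: e.g. in /etc/modprobe.conf
options loop max_loop=64

# loop driver built in: append to the dom0 module line in grub.conf
module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro max_loop=64
\end{verbatim}
\end{quote}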
1323 \section{Using LVM-backed VBDs}
1324 \label{s:using-lvm-backed-vbds}
1326 A particularly appealing solution is to use LVM volumes as backing for
1327 domain file-systems since this allows dynamic growing/shrinking of
volumes as well as snapshots and other features.
1330 To initialize a partition to support LVM volumes:
1331 \begin{quote}
1332 \begin{verbatim}
1333 # pvcreate /dev/sda10
1334 \end{verbatim}
1335 \end{quote}
1337 Create a volume group named `vg' on the physical partition:
1338 \begin{quote}
1339 \begin{verbatim}
1340 # vgcreate vg /dev/sda10
1341 \end{verbatim}
1342 \end{quote}
1344 Create a logical volume of size 4GB named `myvmdisk1':
1345 \begin{quote}
1346 \begin{verbatim}
1347 # lvcreate -L4096M -n myvmdisk1 vg
1348 \end{verbatim}
1349 \end{quote}
You should now see that you have a \path{/dev/vg/myvmdisk1} device.
Make a filesystem, mount it and populate it, e.g.:
1353 \begin{quote}
1354 \begin{verbatim}
1355 # mkfs -t ext3 /dev/vg/myvmdisk1
1356 # mount /dev/vg/myvmdisk1 /mnt
1357 # cp -ax / /mnt
1358 # umount /mnt
1359 \end{verbatim}
1360 \end{quote}
1362 Now configure your VM with the following disk configuration:
1363 \begin{quote}
1364 \begin{verbatim}
1365 disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
1366 \end{verbatim}
1367 \end{quote}
1369 LVM enables you to grow the size of logical volumes, but you'll need
1370 to resize the corresponding file system to make use of the new space.
1371 Some file systems (e.g.\ ext3) now support online resize. See the LVM
1372 manuals for more details.
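For example, growing the volume created above and then its file system
might look like the following (a sketch; online resizing requires
kernel and tool support, otherwise unmount the volume first):
\begin{quote}
\begin{verbatim}
# lvextend -L+2G /dev/vg/myvmdisk1
# resize2fs /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}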
1374 You can also use LVM for creating copy-on-write (CoW) clones of LVM
1375 volumes (known as writable persistent snapshots in LVM terminology).
1376 This facility is new in Linux 2.6.8, so isn't as stable as one might
1377 hope. In particular, using lots of CoW LVM disks consumes a lot of
1378 dom0 memory, and error conditions such as running out of disk space
1379 are not handled well. Hopefully this will improve in future.
1381 To create two copy-on-write clones of the above file system you would
1382 use the following commands:
1384 \begin{quote}
1385 \begin{verbatim}
1386 # lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
1387 # lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
1388 \end{verbatim}
1389 \end{quote}
1391 Each of these can grow to have 1GB of differences from the master
1392 volume. You can grow the amount of space for storing the differences
1393 using the lvextend command, e.g.:
1394 \begin{quote}
1395 \begin{verbatim}
# lvextend -L+100M /dev/vg/myclonedisk1
1397 \end{verbatim}
1398 \end{quote}
Don't let the `differences volume' ever fill up, otherwise LVM gets
1401 rather confused. It may be possible to automate the growing process by
1402 using \path{dmsetup wait} to spot the volume getting full and then
1403 issue an \path{lvextend}.
In principle, it is possible to continue writing to the volume that
has been cloned (the changes will not be visible to the clones), but
we wouldn't recommend this: keep the cloned volume as a `pristine'
file system install that isn't mounted directly by any of the virtual
machines.
1412 \section{Using NFS Root}
1414 First, populate a root filesystem in a directory on the server
1415 machine. This can be on a distinct physical machine, or simply run
1416 within a virtual machine on the same node.
1418 Now configure the NFS server to export this filesystem over the
1419 network by adding a line to \path{/etc/exports}, for instance:
1421 \begin{quote}
1422 \begin{small}
1423 \begin{verbatim}
1424 /export/vm1root (rw,sync,no_root_squash)
1425 \end{verbatim}
1426 \end{small}
1427 \end{quote}
1429 Finally, configure the domain to use NFS root. In addition to the
1430 normal variables, you should make sure to set the following values in
1431 the domain's configuration file:
1433 \begin{quote}
1434 \begin{small}
1435 \begin{verbatim}
1436 root = '/dev/nfs'
1437 nfs_server = '' # substitute IP address of server
1438 nfs_root = '/path/to/root' # path to root FS on the server
1439 \end{verbatim}
1440 \end{small}
1441 \end{quote}
1443 The domain will need network access at boot time, so either statically
1444 configure an IP address using the config variables \path{ip},
1445 \path{netmask}, \path{gateway}, \path{hostname}; or enable DHCP
1446 (\path{dhcp='dhcp'}).
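For instance, a static network configuration might look like the
following (the addresses are placeholders; substitute values for your
own network):
\begin{quote}
\begin{small}
\begin{verbatim}
ip       = '10.0.0.2'
netmask  = '255.255.255.0'
gateway  = '10.0.0.1'
hostname = 'vm1'
\end{verbatim}
\end{small}
\end{quote}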
1448 Note that the Linux NFS root implementation is known to have stability
1449 problems under high load (this is not a Xen-specific problem), so this
1450 configuration may not be appropriate for critical servers.
1453 \chapter{CPU Management}
1455 %% KMS Something sage about CPU / processor management.
1457 Xen allows a domain's virtual CPU(s) to be associated with one or more
1458 host CPUs. This can be used to allocate real resources among one or
1459 more guests, or to make optimal use of processor resources when
1460 utilizing dual-core, hyperthreading, or other advanced CPU technologies.
1462 Xen enumerates physical CPUs in a `depth first' fashion. For a system
1463 with both hyperthreading and multiple cores, this would be all the
1464 hyperthreads on a given core, then all the cores on a given socket,
and then all sockets. For example, on a two-socket, dual-core,
hyperthreaded Xeon the CPU order would be:
1469 \begin{center}
1470 \begin{tabular}{l|l|l|l|l|l|l|r}
1471 \multicolumn{4}{c|}{socket0} & \multicolumn{4}{c}{socket1} \\ \hline
1472 \multicolumn{2}{c|}{core0} & \multicolumn{2}{c|}{core1} &
1473 \multicolumn{2}{c|}{core0} & \multicolumn{2}{c}{core1} \\ \hline
1474 ht0 & ht1 & ht0 & ht1 & ht0 & ht1 & ht0 & ht1 \\
1475 \#0 & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 \\
1476 \end{tabular}
1477 \end{center}
1480 Having multiple vcpus belonging to the same domain mapped to the same
1481 physical CPU is very likely to lead to poor performance. It's better to
1482 use `vcpus-set' to hot-unplug one of the vcpus and ensure the others are
1483 pinned on different CPUs.
If you are running IO-intensive tasks, it's typically better to dedicate
1486 either a hyperthread or whole core to running domain 0, and hence pin
1487 other domains so that they can't use CPU 0. If your workload is mostly
1488 compute intensive, you may want to pin vcpus such that all physical CPU
1489 threads are available for guest domains.
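As a sketch (assuming the \path{xm vcpu-pin} subcommand and the
\path{cpus} domain configuration option provided by your release), you
could keep domain~0 on CPU 0 and a guest's vcpus elsewhere like this:
\begin{quote}
\begin{verbatim}
# pin vcpu 0 of domain `mydomain' to physical CPU 2:
xm vcpu-pin mydomain 0 2

# or, in the domain's configuration file, keep its vcpus off CPU 0:
cpus = "1-3"
\end{verbatim}
\end{quote}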
1491 \chapter{Migrating Domains}
1493 \section{Domain Save and Restore}
1495 The administrator of a Xen system may suspend a virtual machine's
1496 current state into a disk file in domain~0, allowing it to be resumed at
1497 a later time.
1499 For example you can suspend a domain called ``VM1'' to disk using the
1500 command:
1501 \begin{verbatim}
1502 # xm save VM1 VM1.chk
1503 \end{verbatim}
1505 This will stop the domain named ``VM1'' and save its current state
1506 into a file called \path{VM1.chk}.
1508 To resume execution of this domain, use the \path{xm restore} command:
1509 \begin{verbatim}
1510 # xm restore VM1.chk
1511 \end{verbatim}
1513 This will restore the state of the domain and resume its execution.
1514 The domain will carry on as before and the console may be reconnected
1515 using the \path{xm console} command, as described earlier.
1517 \section{Migration and Live Migration}
1519 Migration is used to transfer a domain between physical hosts. There
1520 are two varieties: regular and live migration. The former moves a
1521 virtual machine from one host to another by pausing it, copying its
1522 memory contents, and then resuming it on the destination. The latter
1523 performs the same logical functionality but without needing to pause
1524 the domain for the duration. In general when performing live migration
1525 the domain continues its usual activities and---from the user's
1526 perspective---the migration should be imperceptible.
1528 To perform a live migration, both hosts must be running Xen / \xend\ and
1529 the destination host must have sufficient resources (e.g.\ memory
1530 capacity) to accommodate the domain after the move. Furthermore we
1531 currently require both source and destination machines to be on the same
1532 L2 subnet.
1534 Currently, there is no support for providing automatic remote access
1535 to filesystems stored on local disk when a domain is migrated.
1536 Administrators should choose an appropriate storage solution (i.e.\
1537 SAN, NAS, etc.) to ensure that domain filesystems are also available
1538 on their destination node. GNBD is a good method for exporting a
1539 volume from one machine to another. iSCSI can do a similar job, but is
1540 more complex to set up.
When a domain migrates, its MAC and IP address move with it, so it is
only possible to migrate VMs within the same layer-2 network and IP
1544 subnet. If the destination node is on a different subnet, the
1545 administrator would need to manually configure a suitable etherip or IP
1546 tunnel in the domain~0 of the remote node.
1548 A domain may be migrated using the \path{xm migrate} command. To live
1549 migrate a domain to another machine, we would use the command:
1551 \begin{verbatim}
1552 # xm migrate --live mydomain destination.ournetwork.com
1553 \end{verbatim}
1555 Without the \path{--live} flag, \xend\ simply stops the domain and
1556 copies the memory image over to the new node and restarts it. Since
1557 domains can have large allocations this can be quite time consuming,
1558 even on a Gigabit network. With the \path{--live} flag \xend\ attempts
1559 to keep the domain running while the migration is in progress, resulting
1560 in typical down times of just 60--300ms.
1562 For now it will be necessary to reconnect to the domain's console on the
1563 new machine using the \path{xm console} command. If a migrated domain
1564 has any open network connections then they will be preserved, so SSH
1565 connections do not have this limitation.
1568 %% Chapter Securing Xen
1569 \chapter{Securing Xen}
1571 This chapter describes how to secure a Xen system. It describes a number
1572 of scenarios and provides a corresponding set of best practices. It
1573 begins with a section devoted to understanding the security implications
1574 of a Xen system.
1577 \section{Xen Security Considerations}
1579 When deploying a Xen system, one must be sure to secure the management
1580 domain (Domain-0) as much as possible. If the management domain is
1581 compromised, all other domains are also vulnerable. The following are a
1582 set of best practices for Domain-0:
1584 \begin{enumerate}
\item \textbf{Run the smallest number of necessary services.} The fewer
  things present in the management partition, the better.
1587 Remember, a service running as root in the management domain has full
1588 access to all other domains on the system.
1589 \item \textbf{Use a firewall to restrict the traffic to the management
1590 domain.} A firewall with default-reject rules will help prevent
1591 attacks on the management domain.
1592 \item \textbf{Do not allow users to access Domain-0.} The Linux kernel
1593 has been known to have local-user root exploits. If you allow normal
1594 users to access Domain-0 (even as unprivileged users) you run the risk
1595 of a kernel exploit making all of your domains vulnerable.
1596 \end{enumerate}
1598 \section{Security Scenarios}
1601 \subsection{The Isolated Management Network}
1603 In this scenario, each node has two network cards in the cluster. One
1604 network card is connected to the outside world and one network card is a
1605 physically isolated management network specifically for Xen instances to
1606 use.
1608 As long as all of the management partitions are trusted equally, this is
1609 the most secure scenario. No additional configuration is needed other
1610 than forcing Xend to bind to the management interface for relocation.
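As a sketch, binding the relocation service to the management interface
might involve entries like the following in \path{xend-config.sxp}
(option names vary between \xend\ releases, and the address is a
placeholder for your management interface; check the comments in your
own configuration file):
\begin{quote}
\begin{small}
\begin{verbatim}
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '10.0.0.1')
\end{verbatim}
\end{small}
\end{quote}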
1613 \subsection{A Subnet Behind a Firewall}
1615 In this scenario, each node has only one network card but the entire
1616 cluster sits behind a firewall. This firewall should do at least the
1617 following:
1619 \begin{enumerate}
1620 \item Prevent IP spoofing from outside of the subnet.
1621 \item Prevent access to the relocation port of any of the nodes in the
1622 cluster except from within the cluster.
1623 \end{enumerate}
1625 The following iptables rules can be used on each node to prevent
1626 migrations to that node from outside the subnet assuming the main
1627 firewall does not do this for you:
1629 \begin{verbatim}
1630 # this command disables all access to the Xen relocation
1631 # port:
1632 iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
1634 # this command enables Xen relocations only from the specific
1635 # subnet:
iptables -I INPUT -p tcp --source <subnet> \
--destination-port 8002 -j ACCEPT
1638 \end{verbatim}
1640 \subsection{Nodes on an Untrusted Subnet}
1642 Migration on an untrusted subnet is not safe in current versions of Xen.
It may be possible to perform migrations through a secure tunnel via a
VPN or SSH. The only safe option in the absence of a secure tunnel is to
1645 disable migration completely. The easiest way to do this is with
1646 iptables:
1648 \begin{verbatim}
1649 # this command disables all access to the Xen relocation port
iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
1651 \end{verbatim}
1653 \part{Reference}
1655 %% Chapter Build and Boot Options
1656 \chapter{Build and Boot Options}
1658 This chapter describes the build- and boot-time options which may be
1659 used to tailor your Xen system.
1661 \section{Top-level Configuration Options}
1663 Top-level configuration is achieved by editing one of two
1664 files: \path{Config.mk} and \path{Makefile}.
1666 The former allows the overall build target architecture to be
1667 specified. You will typically not need to modify this unless
1668 you are cross-compiling or if you wish to build a PAE-enabled
1669 Xen system. Additional configuration options are documented
1670 in the \path{Config.mk} file.
1672 The top-level \path{Makefile} is chiefly used to customize the set of
1673 kernels built. Look for the line:
1674 \begin{quote}
1675 \begin{verbatim}
1676 KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
1677 \end{verbatim}
1678 \end{quote}
1680 Allowable options here are any kernels which have a corresponding
1681 build configuration file in the \path{buildconfigs/} directory.
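For example, to build only the domain~0 kernel you could edit this line
accordingly, or override the variable on make's command line (a sketch;
substitute whichever top-level target you normally build):
\begin{quote}
\begin{verbatim}
# make KERNELS="linux-2.6-xen0" dist
\end{verbatim}
\end{quote}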
1685 \section{Xen Build Options}
1687 Xen provides a number of build-time options which should be set as
1688 environment variables or passed on make's command-line.
1690 \begin{description}
1691 \item[verbose=y] Enable debugging messages when Xen detects an
1692 unexpected condition. Also enables console output from all domains.
1693 \item[debug=y] Enable debug assertions. Implies {\bf verbose=y}.
1694 (Primarily useful for tracing bugs in Xen).
1695 \item[debugger=y] Enable the in-Xen debugger. This can be used to
1696 debug Xen, guest OSes, and applications.
1697 \item[perfc=y] Enable performance counters for significant events
1698 within Xen. The counts can be reset or displayed on Xen's console
1699 via console control keys.
1700 \end{description}
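For example, a debugging hypervisor might be built with (a sketch,
assuming the usual \path{xen} make target):
\begin{quote}
\begin{verbatim}
# make debug=y verbose=y xen
\end{verbatim}
\end{quote}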
1703 \section{Xen Boot Options}
1704 \label{s:xboot}
1706 These options are used to configure Xen's behaviour at runtime. They
1707 should be appended to Xen's command line, either manually or by
1708 editing \path{grub.conf}.
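For example, a \path{grub.conf} entry passing some of the options
described below might look like this (the kernel and module file names
are illustrative and should match your own installation):
\begin{quote}
\begin{small}
\begin{verbatim}
title Xen 3.0
  kernel /boot/xen-3.0.gz dom0_mem=262144 console=vga noreboot
  module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro
\end{verbatim}
\end{small}
\end{quote}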
1710 \begin{description}
1711 \item [ noreboot ] Don't reboot the machine automatically on errors.
1712 This is useful to catch debug output if you aren't catching console
1713 messages via the serial line.
1714 \item [ nosmp ] Disable SMP support. This option is implied by
1715 `ignorebiostables'.
1716 \item [ watchdog ] Enable NMI watchdog which can report certain
1717 failures.
1718 \item [ noirqbalance ] Disable software IRQ balancing and affinity.
1719 This can be used on systems such as Dell 1850/2850 that have
1720 workarounds in hardware for IRQ-routing issues.
1721 \item [ badpage=$<$page number$>$,$<$page number$>$, \ldots ] Specify
1722 a list of pages not to be allocated for use because they contain bad
1723 bytes. For example, if your memory tester says that byte 0x12345678
1724 is bad, you would place `badpage=0x12345' on Xen's command line.
1725 \item [ com1=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$
1726 com2=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$ ] \mbox{}\\
1727 Xen supports up to two 16550-compatible serial ports. For example:
1728 `com1=9600, 8n1, 0x408, 5' maps COM1 to a 9600-baud port, 8 data
1729 bits, no parity, 1 stop bit, I/O port base 0x408, IRQ 5. If some
1730 configuration options are standard (e.g., I/O base and IRQ), then
1731 only a prefix of the full configuration string need be specified. If
1732 the baud rate is pre-configured (e.g., by the bootloader) then you
1733 can specify `auto' in place of a numeric baud rate.
1734 \item [ console=$<$specifier list$>$ ] Specify the destination for Xen
1735 console I/O. This is a comma-separated list of, for example:
1736 \begin{description}
1737 \item[ vga ] Use VGA console and allow keyboard input.
1738 \item[ com1 ] Use serial port com1.
1739 \item[ com2H ] Use serial port com2. Transmitted chars will have the
1740 MSB set. Received chars must have MSB set.
1741 \item[ com2L] Use serial port com2. Transmitted chars will have the
1742 MSB cleared. Received chars must have MSB cleared.
1743 \end{description}
1744 The latter two examples allow a single port to be shared by two
1745 subsystems (e.g.\ console and debugger). Sharing is controlled by
1746 MSB of each transmitted/received character. [NB. Default for this
1747 option is `com1,vga']
1748 \item [ sync\_console ] Force synchronous console output. This is
useful if your system fails unexpectedly before it has sent all
1750 available output to the console. In most cases Xen will
1751 automatically enter synchronous mode when an exceptional event
1752 occurs, but this option provides a manual fallback.
1753 \item [ conswitch=$<$switch-char$><$auto-switch-char$>$ ] Specify how
1754 to switch serial-console input between Xen and DOM0. The required
1755 sequence is CTRL-$<$switch-char$>$ pressed three times. Specifying
1756 the backtick character disables switching. The
1757 $<$auto-switch-char$>$ specifies whether Xen should auto-switch
1758 input to DOM0 when it boots --- if it is `x' then auto-switching is
1759 disabled. Any other value, or omitting the character, enables
1760 auto-switching. [NB. Default switch-char is `a'.]
1761 \item [ nmi=xxx ]
1762 Specify what to do with an NMI parity or I/O error. \\
1763 `nmi=fatal': Xen prints a diagnostic and then hangs. \\
1764 `nmi=dom0': Inform DOM0 of the NMI. \\
1765 `nmi=ignore': Ignore the NMI.
1766 \item [ mem=xxx ] Set the physical RAM address limit. Any RAM
1767 appearing beyond this physical address in the memory map will be
1768 ignored. This parameter may be specified with a B, K, M or G suffix,
1769 representing bytes, kilobytes, megabytes and gigabytes respectively.
1770 The default unit, if no suffix is specified, is kilobytes.
1771 \item [ dom0\_mem=xxx ] Set the amount of memory to be allocated to
1772 domain0. In Xen 3.x the parameter may be specified with a B, K, M or
1773 G suffix, representing bytes, kilobytes, megabytes and gigabytes
1774 respectively; if no suffix is specified, the parameter defaults to
kilobytes. In previous versions of Xen, suffixes were not supported
and the value was always interpreted as kilobytes.
1777 \item [ tbuf\_size=xxx ] Set the size of the per-cpu trace buffers, in
1778 pages (default 1). Note that the trace buffers are only enabled in
1779 debug builds. Most users can ignore this feature completely.
1780 \item [ sched=xxx ] Select the CPU scheduler Xen should use. The
1781 current possibilities are `sedf' (default) and `bvt'.
1782 \item [ apic\_verbosity=debug,verbose ] Print more detailed
1783 information about local APIC and IOAPIC configuration.
1784 \item [ lapic ] Force use of local APIC even when left disabled by
1785 uniprocessor BIOS.
1786 \item [ nolapic ] Ignore local APIC in a uniprocessor system, even if
1787 enabled by the BIOS.
1788 \item [ apic=bigsmp,default,es7000,summit ] Specify NUMA platform.
1789 This can usually be probed automatically.
1790 \end{description}
1792 In addition, the following options may be specified on the Xen command
1793 line. Since domain 0 shares responsibility for booting the platform,
1794 Xen will automatically propagate these options to its command line.
1795 These options are taken from Linux's command-line syntax with
1796 unchanged semantics.
1798 \begin{description}
1799 \item [ acpi=off,force,strict,ht,noirq,\ldots ] Modify how Xen (and
1800 domain 0) parses the BIOS ACPI tables.
1801 \item [ acpi\_skip\_timer\_override ] Instruct Xen (and domain~0) to
1802 ignore timer-interrupt override instructions specified by the BIOS
1803 ACPI tables.
1804 \item [ noapic ] Instruct Xen (and domain~0) to ignore any IOAPICs
1805 that are present in the system, and instead continue to use the
1806 legacy PIC.
1807 \end{description}
1810 \section{XenLinux Boot Options}
1812 In addition to the standard Linux kernel boot options, we support:
1813 \begin{description}
1814 \item[ xencons=xxx ] Specify the device node to which the Xen virtual
1815 console driver is attached. The following options are supported:
1816 \begin{center}
1817 \begin{tabular}{l}
1818 `xencons=off': disable virtual console \\
1819 `xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
1820 `xencons=ttyS': attach console to /dev/ttyS0
1821 \end{tabular}
1822 \end{center}
The default is ttyS for dom0 and tty for all other domains (see the
example below).
1824 \end{description}
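For example (a sketch, assuming the \path{extra} configuration variable
is used to append arguments to the guest kernel's command line), a
guest could be switched to \path{/dev/tty1} with:
\begin{quote}
\begin{verbatim}
extra = "xencons=tty"
\end{verbatim}
\end{quote}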
1827 %% Chapter Further Support
1828 \chapter{Further Support}
1830 If you have questions that are not answered by this manual, the
1831 sources of information listed below may be of interest to you. Note
1832 that bug reports, suggestions and contributions related to the
1833 software (or the documentation) should be sent to the Xen developers'
1834 mailing list (address below).
1837 \section{Other Documentation}
1839 For developers interested in porting operating systems to Xen, the
1840 \emph{Xen Interface Manual} is distributed in the \path{docs/}
1841 directory of the Xen source distribution.
1844 \section{Online References}
1846 The official Xen web site can be found at:
1847 \begin{quote} {\tt http://www.xensource.com}
1848 \end{quote}
1850 This contains links to the latest versions of all online
1851 documentation, including the latest version of the FAQ.
1853 Information regarding Xen is also available at the Xen Wiki at
1854 \begin{quote} {\tt http://wiki.xensource.com/xenwiki/}\end{quote}
The Xen project uses Bugzilla as its bug tracking system. You'll find
the Xen Bugzilla at {\tt http://bugzilla.xensource.com/bugzilla/}.
1859 \section{Mailing Lists}
1861 There are several mailing lists that are used to discuss Xen related
1862 topics. The most widely relevant are listed below. An official page of
1863 mailing lists and subscription information can be found at \begin{quote}
1864 {\tt http://lists.xensource.com/} \end{quote}
1866 \begin{description}
1867 \item[xen-devel@lists.xensource.com] Used for development
1868 discussions and bug reports. Subscribe at: \\
1869 {\small {\tt http://lists.xensource.com/xen-devel}}
1870 \item[xen-users@lists.xensource.com] Used for installation and usage
1871 discussions and requests for help. Subscribe at: \\
1872 {\small {\tt http://lists.xensource.com/xen-users}}
1873 \item[xen-announce@lists.xensource.com] Used for announcements only.
1874 Subscribe at: \\
1875 {\small {\tt http://lists.xensource.com/xen-announce}}
\item[xen-changelog@lists.xensource.com] Developer-oriented changelog
  feed from the unstable and 2.0 trees. Subscribe at: \\
1878 {\small {\tt http://lists.xensource.com/xen-changelog}}
1879 \end{description}
1883 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1885 \appendix
1887 %% Chapter Glossary of Terms moved to glossary.tex
1888 \chapter{Glossary of Terms}
1890 \begin{description}
1892 \item[BVT] The BVT scheduler is used to give proportional fair shares
1893 of the CPU to domains.
1895 \item[Domain] A domain is the execution context that contains a
1896 running {\bf virtual machine}. The relationship between virtual
1897 machines and domains on Xen is similar to that between programs and
1898 processes in an operating system: a virtual machine is a persistent
1899 entity that resides on disk (somewhat like a program). When it is
1900 loaded for execution, it runs in a domain. Each domain has a {\bf
1901 domain ID}.
1903 \item[Domain 0] The first domain to be started on a Xen machine.
1904 Domain 0 is responsible for managing the system.
1906 \item[Domain ID] A unique identifier for a {\bf domain}, analogous to
1907 a process ID in an operating system.
1909 \item[Full virtualization] An approach to virtualization which
1910 requires no modifications to the hosted operating system, providing
1911 the illusion of a complete system of real hardware devices.
1913 \item[Hypervisor] An alternative term for {\bf VMM}, used because it
1914 means `beyond supervisor', since it is responsible for managing
1915 multiple `supervisor' kernels.
1917 \item[Live migration] A technique for moving a running virtual machine
1918 to another physical host, without stopping it or the services
1919 running on it.
1921 \item[Paravirtualization] An approach to virtualization which requires
1922 modifications to the operating system in order to run in a virtual
1923 machine. Xen uses paravirtualization but preserves binary
1924 compatibility for user space applications.
1926 \item[Shadow pagetables] A technique for hiding the layout of machine
memory from a virtual machine's operating system. Used in some {\bf
VMMs} to provide the illusion of contiguous physical memory; in
Xen this is used during {\bf live migration}.
\item[Virtual Block Device] Persistent storage available to a virtual
1932 machine, providing the abstraction of an actual block storage device.
1933 {\bf VBD}s may be actual block devices, filesystem images, or
1934 remote/network storage.
1936 \item[Virtual Machine] The environment in which a hosted operating
1937 system runs, providing the abstraction of a dedicated machine. A
virtual machine may be identical to the underlying hardware (as in
{\bf full virtualization}), or it may differ (as in {\bf
paravirtualization}).
1942 \item[VMM] Virtual Machine Monitor - the software that allows multiple
1943 virtual machines to be multiplexed on a single physical machine.
1945 \item[Xen] Xen is a paravirtualizing virtual machine monitor,
1946 developed primarily by the Systems Research Group at the University
1947 of Cambridge Computer Laboratory.
1949 \item[XenLinux] A name for the port of the Linux kernel that
1950 runs on Xen.
1952 \end{description}
1955 \end{document}
1958 %% Other stuff without a home
1960 %% Instructions Re Python API
1962 %% Other Control Tasks using Python
1963 %% ================================
1965 %% A Python module 'Xc' is installed as part of the tools-install
1966 %% process. This can be imported, and an 'xc object' instantiated, to
1967 %% provide access to privileged command operations:
1969 %% # import Xc
1970 %% # xc = Xc.new()
1971 %% # dir(xc)
1972 %% # help(xc.domain_create)
1974 %% In this way you can see that the class 'xc' contains useful
1975 %% documentation for you to consult.
1977 %% A further package of useful routines (xenctl) is also installed:
1979 %% # import xenctl.utils
1980 %% # help(xenctl.utils)
1982 %% You can use these modules to write your own custom scripts or you
1983 %% can customise the scripts supplied in the Xen distribution.
1987 % Explain about AGP GART
1990 %% If you're not intending to configure the new domain with an IP
1991 %% address on your LAN, then you'll probably want to use NAT. The
1992 %% 'xen_nat_enable' installs a few useful iptables rules into domain0
1993 %% to enable NAT. [NB: We plan to support RSIP in future]
1997 %% Installing the file systems from the CD
1998 %% =======================================
2000 %% If you haven't got an existing Linux installation onto which you
2001 %% can just drop down the Xen and Xenlinux images, then the file
2002 %% systems on the CD provide a quick way of doing an install. However,
2003 %% you would be better off in the long run doing a proper install of
2004 %% your preferred distro and installing Xen onto that, rather than
2005 %% just doing the hack described below:
2007 %% Choose one or two partitions, depending on whether you want a
2008 %% separate /usr or not. Make file systems on it/them e.g.:
2009 %% mkfs -t ext3 /dev/hda3
2010 %% [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
2011 %% version of mkfs]
2013 %% Next, mount the file system(s) e.g.:
2014 %% mkdir /mnt/root && mount /dev/hda3 /mnt/root
2015 %% [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
2017 %% To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
2018 %% cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
2020 %% You'll need to edit /mnt/root/etc/fstab to reflect your file system
2021 %% configuration. Changing the password file (etc/shadow) is probably a
2022 %% good idea too.
2024 %% To install the usr file system, copy the file system from CD on
2025 %% /usr, though leaving out the "XenDemoCD" and "boot" directories:
2026 %% cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos
2027 %% local sbin tmp doc include lib man share /mnt/usr
2029 %% If you intend to boot off these file systems (i.e. use them for
2030 %% domain 0), then you probably want to copy the /usr/boot
2031 %% directory on the cd over the top of the current symlink to /boot
2032 %% on your root filesystem (after deleting the current symlink)
2033 %% i.e.:
2034 %% cd /mnt/root ; rm boot ; cp -a /usr/boot .