
view docs/src/user.tex @ 8741:da6297243495

There is a known "xm console" issue related to VMX. When "serial" is
enabled in the script and no one uses "xm console" to read the console,
VMX booting will hang because the buffer is full.
I added a "select" before the "write". If the data cannot be written,
unix_write will return immediately and will not block VMX
booting. With this fix, we can enable VMX's serial console by default.

Signed-off-by: Yu Ping <ping.y.yu@intel.com>

Modified to patch xmexample.hvm. Put through xenrt on a VMX box.

Signed-off-by: James Bulpin <james@xensource.com>
author jrb44@plym.cl.cam.ac.uk
date Thu Feb 02 19:15:22 2006 +0100 (2006-02-02)
parents f1b361b05bf3
children c63083610678
1 \documentclass[11pt,twoside,final,openright]{report}
2 \usepackage{a4,graphicx,html,parskip,setspace,times,xspace,url}
3 \setstretch{1.15}
5 \renewcommand{\ttdefault}{pcr}
7 \def\Xend{{Xend}\xspace}
8 \def\xend{{xend}\xspace}
10 \latexhtml{\renewcommand{\path}[1]{{\small {\tt #1}}}}{\renewcommand{\path}[1]{{\tt #1}}}
13 \begin{document}
15 % TITLE PAGE
16 \pagestyle{empty}
17 \begin{center}
18 \vspace*{\fill}
19 \includegraphics{figs/xenlogo.eps}
20 \vfill
21 \vfill
22 \vfill
23 \begin{tabular}{l}
24 {\Huge \bf Users' Manual} \\[4mm]
25 {\huge Xen v3.0} \\[80mm]
26 \end{tabular}
27 \end{center}
29 {\bf DISCLAIMER: This documentation is always under active development
30 and as such there may be mistakes and omissions --- watch out for
31 these and please report any you find to the developers' mailing list,
32 xen-devel@lists.xensource.com. The latest version is always available
33 on-line. Contributions of material, suggestions and corrections are
34 welcome.}
36 \vfill
37 \clearpage
40 % COPYRIGHT NOTICE
41 \pagestyle{empty}
43 \vspace*{\fill}
45 Xen is Copyright \copyright 2002-2005, University of Cambridge, UK, XenSource
46 Inc., IBM Corp., Hewlett-Packard Co., Intel Corp., AMD Inc., and others. All
47 rights reserved.
49 Xen is an open-source project. Most portions of Xen are licensed for copying
50 under the terms of the GNU General Public License, version 2. Other portions
51 are licensed under the terms of the GNU Lesser General Public License, the
52 Zope Public License 2.0, or under ``BSD-style'' licenses. Please refer to the
53 COPYING file for details.
55 \cleardoublepage
58 % TABLE OF CONTENTS
59 \pagestyle{plain}
60 \pagenumbering{roman}
61 { \parskip 0pt plus 1pt
62 \tableofcontents }
63 \cleardoublepage
66 % PREPARE FOR MAIN TEXT
67 \pagenumbering{arabic}
68 \raggedbottom
69 \widowpenalty=10000
70 \clubpenalty=10000
71 \parindent=0pt
72 \parskip=5pt
73 \renewcommand{\topfraction}{.8}
74 \renewcommand{\bottomfraction}{.8}
75 \renewcommand{\textfraction}{.2}
76 \renewcommand{\floatpagefraction}{.8}
77 \setstretch{1.1}
80 %% Chapter Introduction moved to introduction.tex
81 \chapter{Introduction}
84 Xen is an open-source \emph{para-virtualizing} virtual machine monitor
85 (VMM), or ``hypervisor'', for the x86 processor architecture. Xen can
86 securely execute multiple virtual machines on a single physical system
87 with close-to-native performance. Xen facilitates enterprise-grade
88 functionality, including:
90 \begin{itemize}
91 \item Virtual machines with performance close to native hardware.
92 \item Live migration of running virtual machines between physical hosts.
93 \item Up to 32 virtual CPUs per guest virtual machine, with VCPU hotplug.
94 \item x86/32, x86/32 with PAE, and x86/64 platform support.
95 \item Intel Virtualization Technology (VT-x) for unmodified guest operating systems (including Microsoft Windows).
96 \item Excellent hardware support (supports almost all Linux device
97 drivers).
98 \end{itemize}
101 \section{Usage Scenarios}
103 Usage scenarios for Xen include:
105 \begin{description}
106 \item [Server Consolidation.] Move multiple servers onto a single
107 physical host with performance and fault isolation provided at the
108 virtual machine boundaries.
109 \item [Hardware Independence.] Allow legacy applications and operating
110 systems to exploit new hardware.
111 \item [Multiple OS configurations.] Run multiple operating systems
112 simultaneously, for development or testing purposes.
113 \item [Kernel Development.] Test and debug kernel modifications in a
114 sand-boxed virtual machine --- no need for a separate test machine.
115 \item [Cluster Computing.] Management at VM granularity provides more
116 flexibility than separately managing each physical host, but better
117 control and isolation than single-system image solutions,
118 particularly by using live migration for load balancing.
119 \item [Hardware support for custom OSes.] Allow development of new
120 OSes while benefiting from the wide-ranging hardware support of
121 existing OSes such as Linux.
122 \end{description}
125 \section{Operating System Support}
127 Para-virtualization permits very high performance virtualization, even
128 on architectures like x86 that are traditionally very hard to
129 virtualize.
131 This approach requires operating systems to be \emph{ported} to run on
132 Xen. Porting an OS to run on Xen is similar to supporting a new
133 hardware platform; however, the process is simplified because the
134 para-virtual machine architecture is very similar to the underlying
135 native hardware. Even though operating system kernels must explicitly
136 support Xen, a key feature is that user space applications and
137 libraries \emph{do not} require modification.
139 With hardware CPU virtualization as provided by Intel VT and AMD
140 SVM technology, it is possible to run an unmodified guest OS kernel.
141 No porting of the OS is required, although some
142 additional driver support is necessary within Xen itself. Unlike
143 traditional full virtualization hypervisors, which suffer a tremendous
144 performance overhead, the combination of Xen with VT or with
145 Pacifica technology offers superb performance
146 for para-virtualized guest operating systems and full support for
147 unmodified guests running natively on the processor. Full support for
148 VT and Pacifica chipsets will appear in early 2006.
150 Paravirtualized Xen support is available for increasingly many
151 operating systems: currently, mature Linux support is available and
152 included in the standard distribution. Other OS ports---including
153 NetBSD, FreeBSD and Solaris x86 v10---are nearing completion.
156 \section{Hardware Support}
158 Xen currently runs on the x86 architecture, requiring a ``P6'' or
159 newer processor (e.g.\ Pentium Pro, Celeron, Pentium~II, Pentium~III,
160 Pentium~IV, Xeon, AMD~Athlon, AMD~Duron). Multiprocessor machines are
161 supported, and there is support for HyperThreading (SMT). In
162 addition, ports to IA64 and Power architectures are in progress.
164 The default 32-bit Xen supports up to 4GB of memory. However, Xen 3.0
165 adds support for Intel's Physical Address Extension (PAE), which
166 enable x86/32 machines to address up to 64 GB of physical memory. Xen
167 3.0 also supports x86/64 platforms such as Intel EM64T and AMD Opteron
168 which can currently address up to 1TB of physical memory.
170 Xen offloads most of the hardware support issues to the guest OS
171 running in the \emph{Domain~0} management virtual machine. Xen itself
172 contains only the code required to detect and start secondary
173 processors, set up interrupt routing, and perform PCI bus
174 enumeration. Device drivers run within a privileged guest OS rather
175 than within Xen itself. This approach provides compatibility with the
176 majority of device hardware supported by Linux. The default XenLinux
177 build contains support for most server-class network and disk
178 hardware, but you can add support for other hardware by configuring
179 your XenLinux kernel in the normal way.
182 \section{Structure of a Xen-Based System}
184 A Xen system has multiple layers, the lowest and most privileged of
185 which is Xen itself.
187 Xen may host multiple \emph{guest} operating systems, each of which is
188 executed within a secure virtual machine (in Xen terminology, a
189 \emph{domain}). Domains are scheduled by Xen to make effective use of the
190 available physical CPUs. Each guest OS manages its own applications.
191 This management includes the responsibility of scheduling each
192 application within the time allotted to the VM by Xen.
194 The first domain, \emph{domain~0}, is created automatically when the
195 system boots and has special management privileges. Domain~0 builds
196 other domains and manages their virtual devices. It also performs
197 administrative tasks such as suspending, resuming and migrating other
198 virtual machines.
200 Within domain~0, a process called \emph{xend} runs to manage the system.
201 \Xend\ is responsible for managing virtual machines and providing access
202 to their consoles. Commands are issued to \xend\ over an HTTP interface,
203 via a command-line tool.
206 \section{History}
208 Xen was originally developed by the Systems Research Group at the
209 University of Cambridge Computer Laboratory as part of the XenoServers
210 project, funded by the UK-EPSRC\@.
212 XenoServers aim to provide a ``public infrastructure for global
213 distributed computing''. Xen plays a key part in that, allowing one to
214 efficiently partition a single machine to enable multiple independent
215 clients to run their operating systems and applications in an
216 environment that provides protection, resource isolation
217 and accounting. The project web page contains further information along
218 with pointers to papers and technical reports:
219 \path{http://www.cl.cam.ac.uk/xeno}
221 Xen has grown into a fully-fledged project in its own right, enabling us
222 to investigate interesting research issues regarding the best techniques
223 for virtualizing resources such as the CPU, memory, disk and network.
224 Project contributors now include XenSource, Intel, IBM, HP, AMD, Novell,
225 and Red Hat.
227 Xen was first described in a paper presented at SOSP in
228 2003\footnote{\tt
229 http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
230 public release (1.0) was made that October. Since then, Xen has
231 significantly matured and is now used in production scenarios on many
232 sites.
234 \section{What's New}
236 Xen 3.0.0 offers:
238 \begin{itemize}
239 \item Support for up to 32-way SMP guest operating systems
240 \item Intel Physical Address Extension (PAE) support for 32-bit
241 servers with more than 4GB of physical memory
242 \item x86/64 support (Intel EM64T, AMD Opteron)
243 \item Intel VT-x support to enable the running of unmodified guest
244 operating systems (Windows XP/2003, Legacy Linux)
245 \item Enhanced control tools
246 \item Improved ACPI support
247 \item AGP/DRM graphics
248 \end{itemize}
251 Xen 3.0 features greatly enhanced hardware support, configuration
252 flexibility, usability and a larger complement of supported operating
253 systems. This latest release takes Xen a step closer to being the
254 definitive open source solution for virtualization.
258 \part{Installation}
260 %% Chapter Basic Installation
261 \chapter{Basic Installation}
263 The Xen distribution includes three main components: Xen itself, ports
264 of Linux and NetBSD to run on Xen, and the userspace tools required to
265 manage a Xen-based system. This chapter describes how to install the
266 Xen~3.0 distribution from source. Alternatively, there may be pre-built
267 packages available as part of your operating system distribution.
270 \section{Prerequisites}
271 \label{sec:prerequisites}
273 The following is a full list of prerequisites. Items marked `$\dag$' are
274 required by the \xend\ control tools, and hence required if you want to
275 run more than one virtual machine; items marked `$*$' are only required
276 if you wish to build from source.
277 \begin{itemize}
278 \item A working Linux distribution using the GRUB bootloader and running
279 on a P6-class or newer CPU\@.
280 \item [$\dag$] The \path{iproute2} package.
281 \item [$\dag$] The Linux bridge-utils\footnote{Available from {\tt
282 http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl})
283 \item [$\dag$] The Linux hotplug system\footnote{Available from {\tt
284 http://linux-hotplug.sourceforge.net/}} (e.g.,
285 \path{/sbin/hotplug} and related scripts). On newer distributions,
286 this is included alongside the Linux udev system\footnote{See {\tt
287 http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html/}}.
288 \item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
289 \item [$*$] Development installation of zlib (e.g.,\ zlib-dev).
290 \item [$*$] Development installation of Python v2.2 or later (e.g.,\
291 python-dev).
292 \item [$*$] \LaTeX\ and transfig are required to build the
293 documentation.
294 \end{itemize}
296 Once you have satisfied these prerequisites, you can now install either
297 a binary or source distribution of Xen.
299 \section{Installing from Binary Tarball}
301 Pre-built tarballs are available for download from the XenSource downloads
302 page:
303 \begin{quote} {\tt http://www.xensource.com/downloads/}
304 \end{quote}
306 Once you've downloaded the tarball, simply unpack and install:
307 \begin{verbatim}
308 # tar zxvf xen-3.0-install.tgz
309 # cd xen-3.0-install
310 # sh ./install.sh
311 \end{verbatim}
313 Once you've installed the binaries you need to configure your system as
314 described in Section~\ref{s:configure}.
316 \section{Installing from RPMs}
317 Pre-built RPMs are available for download from the XenSource downloads
318 page:
319 \begin{quote} {\tt http://www.xensource.com/downloads/}
320 \end{quote}
322 Once you've downloaded the RPMs, you typically install them via the
323 \texttt{rpm} command:
325 \verb|# rpm -iv rpmname|
327 See the instructions and the Release Notes for each RPM set referenced at:
328 \begin{quote}
329 {\tt http://www.xensource.com/downloads/}.
330 \end{quote}
332 \section{Installing from Source}
334 This section describes how to obtain, build and install Xen from source.
336 \subsection{Obtaining the Source}
338 The Xen source tree is available as either a compressed source tarball
339 or as a clone of our master Mercurial repository.
341 \begin{description}
342 \item[Obtaining the Source Tarball]\mbox{} \\
343 Stable versions and daily snapshots of the Xen source tree are
344 available from the Xen download page:
345 \begin{quote} {\tt http://www.xensource.com/downloads/}
346 \end{quote}
347 \item[Obtaining the source via Mercurial]\mbox{} \\
348 The source tree may also be obtained via the public Mercurial
349 repository at:
350 \begin{quote}{\tt http://xenbits.xensource.com}
351 \end{quote} See the instructions and the Getting Started Guide
352 referenced at:
353 \begin{quote}
354 {\tt http://www.xensource.com/downloads/}
355 \end{quote}
356 \end{description}
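For example, the development tree can be cloned with Mercurial (the
repository name, assumed here to be \path{xen-unstable.hg}, is listed on
the site):
\begin{quote}
\begin{verbatim}
# hg clone http://xenbits.xensource.com/xen-unstable.hg
\end{verbatim}
\end{quote}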
358 % \section{The distribution}
359 %
360 % The Xen source code repository is structured as follows:
361 %
362 % \begin{description}
363 % \item[\path{tools/}] Xen node controller daemon (Xend), command line
364 % tools, control libraries
365 % \item[\path{xen/}] The Xen VMM.
366 % \item[\path{buildconfigs/}] Build configuration files
367 % \item[\path{linux-*-xen-sparse/}] Xen support for Linux.
368 % \item[\path{patches/}] Experimental patches for Linux.
369 % \item[\path{docs/}] Various documentation files for users and
370 % developers.
371 % \item[\path{extras/}] Bonus extras.
372 % \end{description}
374 \subsection{Building from Source}
376 The top-level Xen Makefile includes a target ``world'' that will do the
377 following:
379 \begin{itemize}
380 \item Build Xen.
381 \item Build the control tools, including \xend.
382 \item Download (if necessary) and unpack the Linux 2.6 source code, and
383 patch it for use with Xen.
384 \item Build a Linux kernel to use in domain~0 and a smaller unprivileged
385 kernel, which can be used for unprivileged virtual machines.
386 \end{itemize}
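A full build is then a single command, run from the top of the source
tree:
\begin{quote}
\begin{verbatim}
# make world
\end{verbatim}
\end{quote}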
388 After the build has completed you should have a top-level directory
389 called \path{dist/} in which all resulting targets will be placed. Of
390 particular interest are the two XenLinux kernel images, one with a
391 ``-xen0'' extension which contains hardware device drivers and drivers
392 for Xen's virtual devices, and one with a ``-xenU'' extension that
393 just contains the virtual ones. These are found in
394 \path{dist/install/boot/} along with the image for Xen itself and the
395 configuration files used during the build.
397 %The NetBSD port can be built using:
398 %\begin{quote}
399 %\begin{verbatim}
400 %# make netbsd20
401 %\end{verbatim}\end{quote}
402 %NetBSD port is built using a snapshot of the netbsd-2-0 cvs branch.
403 %The snapshot is downloaded as part of the build process if it is not
404 %yet present in the \path{NETBSD\_SRC\_PATH} search path. The build
405 %process also downloads a toolchain which includes all of the tools
406 %necessary to build the NetBSD kernel under Linux.
408 To customize the set of kernels built you need to edit the top-level
409 Makefile. Look for the line:
410 \begin{quote}
411 \begin{verbatim}
412 KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
413 \end{verbatim}
414 \end{quote}
416 You can edit this line to include any set of operating system kernels
417 which have configurations in the top-level \path{buildconfigs/}
418 directory.
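For instance, to build only the domain~0 kernel, you might change the
line to read:
\begin{quote}
\begin{verbatim}
KERNELS ?= linux-2.6-xen0
\end{verbatim}
\end{quote}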
420 %% Inspect the Makefile if you want to see what goes on during a
421 %% build. Building Xen and the tools is straightforward, but XenLinux
422 %% is more complicated. The makefile needs a `pristine' Linux kernel
423 %% tree to which it will then add the Xen architecture files. You can
424 %% tell the makefile the location of the appropriate Linux compressed
425 %% tar file by
426 %% setting the LINUX\_SRC environment variable, e.g. \\
427 %% \verb!# LINUX_SRC=/tmp/linux-2.6.11.tar.bz2 make world! \\ or by
428 %% placing the tar file somewhere in the search path of {\tt
429 %% LINUX\_SRC\_PATH} which defaults to `{\tt .:..}'. If the
430 %% makefile can't find a suitable kernel tar file it attempts to
431 %% download it from kernel.org (this won't work if you're behind a
432 %% firewall).
434 %% After untaring the pristine kernel tree, the makefile uses the {\tt
435 %% mkbuildtree} script to add the Xen patches to the kernel.
437 %% \framebox{\parbox{5in}{
438 %% {\bf Distro specific:} \\
439 %% {\it Gentoo} --- if not using udev (most installations,
440 %% currently), you'll need to enable devfs and devfs mount at boot
441 %% time in the xen0 config. }}
443 \subsection{Custom Kernels}
445 % If you have an SMP machine you may wish to give the {\tt '-j4'}
446 % argument to make to get a parallel build.
448 If you wish to build a customized XenLinux kernel (e.g.\ to support
449 additional devices or enable distribution-required features), you can
450 use the standard Linux configuration mechanisms, specifying that the
451 architecture being built for is \path{xen}, e.g.:
452 \begin{quote}
453 \begin{verbatim}
454 # cd linux-2.6.12-xen0
455 # make ARCH=xen xconfig
456 # cd ..
457 # make
458 \end{verbatim}
459 \end{quote}
461 You can also copy an existing Linux configuration (\path{.config}) into
462 e.g.\ \path{linux-2.6.12-xen0} and execute:
463 \begin{quote}
464 \begin{verbatim}
465 # make ARCH=xen oldconfig
466 \end{verbatim}
467 \end{quote}
469 You may be prompted with some Xen-specific options. We advise accepting
470 the defaults for these options.
472 Note that the only difference between the two types of Linux kernels
473 that are built is the configuration file used for each. The ``U''
474 suffixed (unprivileged) versions don't contain any of the physical
475 hardware device drivers, leading to a 30\% reduction in size; hence you
476 may prefer these for your non-privileged domains. The ``0'' suffixed
477 privileged versions can be used to boot the system, as well as in driver
478 domains and unprivileged domains.
480 \subsection{Installing Generated Binaries}
482 The files produced by the build process are stored under the
483 \path{dist/install/} directory. To install them in their default
484 locations, do:
485 \begin{quote}
486 \begin{verbatim}
487 # make install
488 \end{verbatim}
489 \end{quote}
491 Alternatively, users with special installation requirements may wish to
492 install them manually by copying the files to their appropriate
493 destinations.
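As a sketch, a manual installation might copy the boot files into place
by hand (similar in spirit to the \path{install.sh} used for binary
tarballs; take care not to overwrite files you need):
\begin{quote}
\begin{verbatim}
# cd dist/install
# cp -a boot/* /boot/
\end{verbatim}
\end{quote}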
495 %% Files in \path{install/boot/} include:
496 %% \begin{itemize}
497 %% \item \path{install/boot/xen-3.0.gz} Link to the Xen 'kernel'
498 %% \item \path{install/boot/vmlinuz-2.6-xen0} Link to domain 0
499 %% XenLinux kernel
500 %% \item \path{install/boot/vmlinuz-2.6-xenU} Link to unprivileged
501 %% XenLinux kernel
502 %% \end{itemize}
504 The \path{dist/install/boot} directory will also contain the config
505 files used for building the XenLinux kernels, and also versions of Xen
506 and the XenLinux kernels that contain debug symbols
507 (such as \path{xen-syms-3.0.0} and \path{vmlinux-syms-2.6.12.6-xen0}), which are
508 essential for interpreting crash dumps. Retain these files, as the
509 developers may wish to see them if you post on the mailing list.
512 \section{Configuration}
513 \label{s:configure}
515 Once you have built and installed the Xen distribution, it is simple to
516 prepare the machine for booting and running Xen.
518 \subsection{GRUB Configuration}
520 An entry should be added to \path{grub.conf} (often found under
521 \path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
522 This file is sometimes called \path{menu.lst}, depending on your
523 distribution. The entry should look something like the following:
525 %% KMSelf Thu Dec 1 19:06:13 PST 2005 262144 is useful for RHEL/RH and
526 %% related Dom0s.
527 {\small
528 \begin{verbatim}
529 title Xen 3.0 / XenLinux 2.6
530 kernel /boot/xen-3.0.gz dom0_mem=262144
531 module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0
532 \end{verbatim}
533 }
535 The kernel line tells GRUB where to find Xen itself and what boot
536 parameters should be passed to it (in this case, setting the domain~0
537 memory allocation in kilobytes).
538 For more details on the various Xen boot parameters see
539 Section~\ref{s:xboot}.
541 The module line of the configuration describes the location of the
542 XenLinux kernel that Xen should start and the parameters that should be
543 passed to it. These are standard Linux parameters, identifying the root
544 device and specifying it be initially mounted read only and instructing
545 that console output be sent to the screen. Some distributions such as
546 SuSE do not require the \path{ro} parameter.
548 %% \framebox{\parbox{5in}{
549 %% {\bf Distro specific:} \\
550 %% {\it SuSE} --- Omit the {\tt ro} option from the XenLinux
551 %% kernel command line, since the partition won't be remounted rw
552 %% during boot. }}
554 To use an initrd, add another \path{module} line to the configuration,
555 like: {\small
556 \begin{verbatim}
557 module /boot/my_initrd.gz
558 \end{verbatim}
559 }
561 %% KMSelf Thu Dec 1 19:05:30 PST 2005 Other configs as an appendix?
563 When installing a new kernel, it is recommended that you do not delete
564 existing menu options from \path{menu.lst}, as you may wish to boot your
565 old Linux kernel in future, particularly if you have problems.
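For example, keeping a fallback entry such as the following (kernel
path and root device are illustrative) lets you boot natively if the
Xen entry gives trouble:
{\small
\begin{verbatim}
title Linux 2.6 (no Xen)
 kernel /boot/vmlinuz-2.6 root=/dev/sda4 ro
\end{verbatim}
}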
567 \subsection{Serial Console (optional)}
569 Serial console access allows you to manage, monitor, and interact with
570 your system over a serial console. This can allow access from another
571 nearby system via a null-modem (``LapLink'') cable or remotely via a serial
572 concentrator.
574 Your system's BIOS, bootloader (GRUB), Xen, Linux, and login access must
575 each be individually configured for serial console access. It is
576 \emph{not} strictly necessary to have each component fully functional,
577 but it can be quite useful.
579 For general information on serial console configuration under Linux,
580 refer to the ``Remote Serial Console HOWTO'' at The Linux Documentation
581 Project: \url{http://www.tldp.org}
583 \subsubsection{Serial Console BIOS configuration}
585 Enabling system serial console output neither enables nor disables
586 serial capabilities in GRUB, Xen, or Linux, but may make remote
587 management of your system more convenient by displaying POST and other
588 boot messages over serial port and allowing remote BIOS configuration.
590 Refer to your hardware vendor's documentation for capabilities and
591 procedures to enable BIOS serial redirection.
594 \subsubsection{Serial Console GRUB configuration}
596 Enabling GRUB serial console output neither enables nor disables Xen or
597 Linux serial capabilities, but may make remote management of your system
598 more convenient by displaying GRUB prompts, menus, and actions over
599 serial port and allowing remote GRUB management.
601 Adding the following two lines to your GRUB configuration file,
602 typically either \path{/boot/grub/menu.lst} or \path{/boot/grub/grub.conf}
603 depending on your distro, will enable GRUB serial output.
605 \begin{quote}
606 {\small \begin{verbatim}
607 serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
608 terminal --timeout=10 serial console
609 \end{verbatim}}
610 \end{quote}
612 Note that when both the serial port and the local monitor and keyboard
613 are enabled, the text ``\emph{Press any key to continue}'' will appear
614 at both. Pressing a key on one device will cause GRUB to display to
615 that device. The other device will see no output. If no key is
616 pressed before the timeout period expires, the system will boot to the
617 default GRUB boot entry.
619 Please refer to the GRUB documentation for further information.
622 \subsubsection{Serial Console Xen configuration}
624 Enabling Xen serial console output neither enables nor disables Linux
625 kernel output or logging in to Linux over serial port. It does however
626 allow you to monitor and log the Xen boot process via serial console and
627 can be very useful in debugging.
629 %% kernel /boot/xen-2.0.gz dom0_mem=131072 com1=115200,8n1
630 %% module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro
632 In order to configure Xen serial console output, it is necessary to
633 add a boot option to your GRUB config; e.g.\ replace the previous
634 example kernel line with:
635 \begin{quote} {\small \begin{verbatim}
636 kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
637 \end{verbatim}}
638 \end{quote}
640 This configures Xen to output on COM1 at 115,200 baud, 8 data bits, 1
641 stop bit and no parity. Modify these parameters for your environment.
643 One can also configure XenLinux to share the serial console; to achieve
644 this append ``\path{console=ttyS0}'' to your module line.
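Putting the pieces together, a GRUB entry in which both Xen and
XenLinux use the serial line might look like this (paths and memory
size are illustrative):
\begin{quote} {\small \begin{verbatim}
title Xen 3.0 / XenLinux 2.6 (serial console)
 kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
 module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=ttyS0
\end{verbatim}}
\end{quote}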
647 \subsubsection{Serial Console Linux configuration}
649 Enabling Linux serial console output at boot neither enables nor
650 disables logging in to Linux over serial port. It does however allow
651 you to monitor and log the Linux boot process via serial console and can be
652 very useful in debugging.
654 To enable Linux output at boot time, add the parameter
655 \path{console=ttyS0} (or ttyS1, ttyS2, etc.) to your kernel GRUB line.
656 Under Xen, this might be:
657 \begin{quote}
658 {\footnotesize \begin{verbatim}
659 module /vmlinuz-2.6-xen0 ro root=/dev/VolGroup00/LogVol00 \
660 console=ttyS0,115200
661 \end{verbatim}}
662 \end{quote}
663 to enable output over ttyS0 at 115200 baud.
667 \subsubsection{Serial Console Login configuration}
669 Logging in to Linux via serial console, under Xen or otherwise, requires
670 specifying a login prompt be started on the serial port. To permit root
671 logins over serial console, the serial port must be added to
672 \path{/etc/securetty}.
674 \newpage
675 To automatically start a login prompt over the serial port,
676 add the line: \begin{quote} {\small {\tt c:2345:respawn:/sbin/mingetty
677 ttyS0}} \end{quote} to \path{/etc/inittab}. Run \path{init q} to force
678 a reload of your inittab and start getty.
680 To enable root logins, add \path{ttyS0} to \path{/etc/securetty} if not
681 already present.
683 Your distribution may use an alternate getty; options include getty,
684 mgetty and agetty. Consult your distribution's documentation
685 for further information.
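For instance, with agetty the corresponding \path{/etc/inittab} line
might look like the following (an illustrative sketch; check your
distribution's getty syntax):
\begin{quote} {\small \begin{verbatim}
c:2345:respawn:/sbin/agetty -L ttyS0 115200 vt100
\end{verbatim}}
\end{quote}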
688 \subsection{TLS Libraries}
690 Users of the XenLinux 2.6 kernel should disable Thread Local Storage
691 (TLS) (e.g.\ by doing a \path{mv /lib/tls /lib/tls.disabled}) before
692 attempting to boot a XenLinux kernel\footnote{If you boot without first
693 disabling TLS, you will get a warning message during the boot process.
694 In this case, simply perform the rename after the machine is up and
695 then run \path{/sbin/ldconfig} to make it take effect.}. You can
696 always reenable TLS by restoring the directory to its original location
697 (i.e.\ \path{mv /lib/tls.disabled /lib/tls}).
699 The reason for this is that the current TLS implementation uses
700 segmentation in a way that is not permissible under Xen. If TLS is not
701 disabled, an emulation mode is used within Xen which reduces performance
702 substantially. To ensure full performance you should install a
703 `Xen-friendly' (nosegneg) version of the library.
706 \section{Booting Xen}
708 It should now be possible to restart the system and use Xen. Reboot and
709 choose the new Xen option when the Grub screen appears.
711 What follows should look much like a conventional Linux boot. The first
712 portion of the output comes from Xen itself, supplying low level
713 information about itself and the underlying hardware. The last portion
714 of the output comes from XenLinux.
716 You may see some error messages during the XenLinux boot. These are not
717 necessarily anything to worry about---they may result from kernel
718 configuration differences between your XenLinux kernel and the one you
719 usually use.
721 When the boot completes, you should be able to log into your system as
722 usual. If you are unable to log in, you should still be able to reboot
723 with your normal Linux kernel by selecting it at the GRUB prompt.
726 % Booting Xen
727 \chapter{Booting a Xen System}
729 Booting the system into Xen will bring you up into the privileged
730 management domain, Domain0. At that point you are ready to create
731 guest domains and ``boot'' them using the \texttt{xm create} command.
733 \section{Booting Domain0}
735 After installation and configuration are complete, reboot the system
736 and choose the new Xen option when the Grub screen appears.
738 What follows should look much like a conventional Linux boot. The
739 first portion of the output comes from Xen itself, supplying low level
740 information about itself and the underlying hardware. The last
741 portion of the output comes from XenLinux.
743 %% KMSelf Wed Nov 30 18:09:37 PST 2005: We should specify what these are.
745 When the boot completes, you should be able to log into your system as
746 usual. If you are unable to log in, you should still be able to
747 reboot with your normal Linux kernel by selecting it at the GRUB prompt.
749 The first step in creating a new domain is to prepare a root
750 filesystem for it to boot. Typically, this might be stored in a normal
751 partition, an LVM or other volume manager partition, a disk file or on
752 an NFS server. A simple way to do this is to boot from your
753 standard OS install CD and install the distribution into another
754 partition on your hard drive.
756 To start the \xend\ control daemon, type
757 \begin{quote}
758 \verb!# xend start!
759 \end{quote}
761 If you wish the daemon to start automatically, see the instructions in
762 Section~\ref{s:xend}. Once the daemon is running, you can use the
763 \path{xm} tool to monitor and maintain the domains running on your
764 system. This chapter provides only a brief tutorial. We provide full
765 details of the \path{xm} tool in the next chapter.
767 % \section{From the web interface}
768 %
769 % Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv}
770 % for more details) using the command: \\
771 % \verb_# xensv start_ \\
772 % This will also start Xend (see Chapter~\ref{cha:xend} for more
773 % information).
774 %
775 % The domain management interface will then be available at {\tt
776 % http://your\_machine:8080/}. This provides a user friendly wizard
777 % for starting domains and functions for managing running domains.
778 %
779 % \section{From the command line}
780 \section{Booting Guest Domains}
782 \subsection{Creating a Domain Configuration File}
784 Before you can start an additional domain, you must create a
785 configuration file. We provide two example files which you can use as
786 a starting point:
787 \begin{itemize}
788 \item \path{/etc/xen/xmexample1} is a simple template configuration
789 file for describing a single VM\@.
790 \item \path{/etc/xen/xmexample2} is a template description that
791 is intended to be reused for multiple virtual machines. Setting the
792 value of the \path{vmid} variable on the \path{xm} command line
793 fills in parts of this template.
794 \end{itemize}
796 There are also a number of other examples which you may find useful.
797 Copy one of these files and edit it as appropriate. Typical values
798 you may wish to edit include:
800 \begin{quote}
801 \begin{description}
802 \item[kernel] Set this to the path of the kernel you compiled for use
803 with Xen (e.g.\ \path{kernel = ``/boot/vmlinuz-2.6-xenU''})
804 \item[memory] Set this to the size of the domain's memory in megabytes
805 (e.g.\ \path{memory = 64})
806 \item[disk] Set the first entry in this list to calculate the offset
807 of the domain's root partition, based on the domain ID\@. Set the
808 second to the location of \path{/usr} if you are sharing it between
809 domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
810 (base\_partition\_number + vmid),
811 'phy:your\_usr\_partition,sda6,r' ]}
812 \item[dhcp] Uncomment the dhcp variable, so that the domain will
813 receive its IP address from a DHCP server (e.g.\ \path{dhcp=``dhcp''})
814 \end{description}
815 \end{quote}
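Pulling a few of these variables together, a minimal configuration
file might look like the following sketch (all values, and the use of
a \path{name} variable, are illustrative; see \path{/etc/xen/xmexample1}
for a complete example):
\begin{quote} {\small \begin{verbatim}
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 64
name   = "ExampleDomain"
disk   = ['phy:hda7,sda1,w']
dhcp   = "dhcp"
\end{verbatim}}
\end{quote}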
817 You may also want to edit the {\bf vif} variable in order to choose
818 the MAC address of the virtual ethernet interface yourself. For
819 example:
821 \begin{quote}
822 \verb_vif = ['mac=00:16:3E:F6:BB:B3']_
823 \end{quote}
824 If you do not set this variable, \xend\ will automatically generate a
825 random MAC address from the range 00:16:3E:xx:xx:xx, assigned by IEEE to
826 XenSource as an OUI (organizationally unique identifier). XenSource
827 Inc. gives permission for anyone to use addresses randomly allocated
828 from this range for use by their Xen domains.
830 For a list of IEEE OUI assignments, see
831 \url{http://standards.ieee.org/regauth/oui/oui.txt}
834 \subsection{Booting the Guest Domain}
836 The \path{xm} tool provides a variety of commands for managing
837 domains. Use the \path{create} command to start new domains. Assuming
838 you've created a configuration file \path{myvmconf} based around
839 \path{/etc/xen/xmexample2}, to start a domain with virtual machine
840 ID~1 you should type:
842 \begin{quote}
843 \begin{verbatim}
844 # xm create -c myvmconf vmid=1
845 \end{verbatim}
846 \end{quote}
848 The \path{-c} switch causes \path{xm} to turn into the domain's
849 console after creation. The \path{vmid=1} sets the \path{vmid}
850 variable used in the \path{myvmconf} file.
852 You should see the console boot messages from the new domain appearing
853 in the terminal in which you typed the command, culminating in a login
854 prompt.
857 \section{Starting / Stopping Domains Automatically}
859 It is possible to have certain domains start automatically at boot
860 time and to have dom0 wait for all running domains to shutdown before
861 it shuts down the system.
863 To specify that a domain should start at boot-time, place its configuration
864 file (or a link to it) under \path{/etc/xen/auto/}.
866 A Sys-V style init script for Red Hat and LSB-compliant systems is
867 provided and will be automatically copied to \path{/etc/init.d/}
868 during install. You can then enable it in the appropriate way for
869 your distribution.
871 For instance, on Red Hat:
873 \begin{quote}
874 \verb_# chkconfig --add xendomains_
875 \end{quote}
877 By default, this will start the boot-time domains in runlevels 3, 4
878 and 5.
880 You can also use the \path{service} command to run this script
881 manually, e.g:
883 \begin{quote}
884 \verb_# service xendomains start_
886 Starts all the domains with config files under /etc/xen/auto/.
887 \end{quote}
889 \begin{quote}
890 \verb_# service xendomains stop_
892 Shuts down all running Xen domains.
893 \end{quote}
897 \part{Configuration and Management}
899 %% Chapter Domain Management Tools and Daemons
900 \chapter{Domain Management Tools}
902 This chapter summarizes the management software and tools available.
905 \section{\Xend\ }
906 \label{s:xend}
909 The \Xend\ node control daemon performs system management functions
910 related to virtual machines. It forms a central point of control of
911 virtualized resources, and must be running in order to start and manage
912 virtual machines. \Xend\ must be run as root because it needs access to
913 privileged system management functions.
915 An initialization script named \texttt{/etc/init.d/xend} is provided to
916 start \Xend\ at boot time. Use the appropriate tool (e.g.\ chkconfig) for
917 your Linux distribution to specify the runlevels at which this script
918 should be executed, or manually create symbolic links in the correct
919 runlevel directories.
921 \Xend\ can be started on the command line as well, and supports the
922 following set of parameters:
924 \begin{tabular}{ll}
925 \verb!# xend start! & start \xend, if not already running \\
926 \verb!# xend stop! & stop \xend\ if already running \\
927 \verb!# xend restart! & restart \xend\ if running, otherwise start it \\
928 % \verb!# xend trace_start! & start \xend, with very detailed debug logging \\
929 \verb!# xend status! & indicates \xend\ status by its return code
930 \end{tabular}
932 {\tt make install} installs this SysV init script in
933 \path{/etc/init.d}. Once \xend\ is running, administration can be done
934 using the \texttt{xm} tool.
939 \subsection{Logging}
941 As \xend\ runs, events will be logged to \path{/var/log/xend.log} and
942 (less frequently) to \path{/var/log/xend-debug.log}. These, along with
943 the standard syslog files, are useful when troubleshooting problems.
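For example, to watch \xend\ activity as it happens:
\begin{quote}
\begin{verbatim}
# tail -f /var/log/xend.log
\end{verbatim}
\end{quote}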
945 \subsection{Configuring \Xend\ }
947 \Xend\ is written in Python. At startup, it reads its configuration
948 information from the file \path{/etc/xen/xend-config.sxp}. The Xen
949 installation places an example \texttt{xend-config.sxp} file in the
950 \texttt{/etc/xen} subdirectory which should work for most installations.
952 See the example configuration file \texttt{xend-debug.sxp} and the
953 section 5 man page \texttt{xend-config.sxp} for a full list of
954 parameters and more detailed information. Some of the most important
955 parameters are discussed below.
957 An HTTP interface and a Unix domain socket API are available to
958 communicate with \Xend. This allows remote users to pass commands to the
959 daemon. By default, \Xend does not start an HTTP server. It does start a
960 Unix domain socket management server, as the low level utility
961 \texttt{xm} requires it. For support of cross-machine migration, \Xend\
962 can start a relocation server. This support is not enabled by default
963 for security reasons.
965 Note: the example \texttt{xend} configuration file modifies the defaults and
966 starts up \Xend\ as an HTTP server as well as a relocation server.
968 From the file:
970 \begin{verbatim}
971 #(xend-http-server no)
972 (xend-http-server yes)
973 #(xend-unix-server yes)
974 #(xend-relocation-server no)
975 (xend-relocation-server yes)
976 \end{verbatim}
978 Comment or uncomment lines in that file to disable or enable features
979 that you require.
981 Connections from remote hosts are disabled by default:
983 \begin{verbatim}
984 # Address xend should listen on for HTTP connections, if xend-http-server is
985 # set.
986 # Specifying 'localhost' prevents remote connections.
987 # Specifying the empty string '' (the default) allows all connections.
988 #(xend-address '')
989 (xend-address localhost)
990 \end{verbatim}
992 It is recommended that if migration support is not needed, the
993 \texttt{xend-relocation-server} parameter value be changed to
994 ``\texttt{no}'' or commented out.
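For example, in \path{/etc/xen/xend-config.sxp}:
\begin{quote}
\begin{verbatim}
(xend-relocation-server no)
\end{verbatim}
\end{quote}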
996 \section{Xm}
997 \label{s:xm}
999 The xm tool is the primary tool for managing Xen from the console. The
1000 general format of an xm command line is:
1002 \begin{verbatim}
1003 # xm command [switches] [arguments] [variables]
1004 \end{verbatim}
1006 The available \emph{switches} and \emph{arguments} are dependent on the
1007 \emph{command} chosen. The \emph{variables} may be set using
1008 declarations of the form {\tt variable=value} and command line
1009 declarations override any of the values in the configuration file being
1010 used, including the standard variables described above and any custom
1011 variables (for instance, the \path{xmdefconfig} file uses a {\tt vmid}
1012 variable).
1014 For online help for the commands available, type:
1016 \begin{quote}
1017 \begin{verbatim}
1018 # xm help
1019 \end{verbatim}
1020 \end{quote}
1022 This will list the most commonly used commands. The full list can be obtained
1023 using \verb_xm help --long_. You can also type \path{xm help $<$command$>$}
1024 for more information on a given command.
1026 \subsection{Basic Management Commands}
1028 One useful command is \verb_# xm list_ which lists all domains running in rows
1029 of the following format:
1030 \begin{center} {\tt name domid memory vcpus state cputime}
1031 \end{center}
1033 The meaning of each field is as follows:
1034 \begin{quote}
1035 \begin{description}
1036 \item[name] The descriptive name of the virtual machine.
1037 \item[domid] The domain ID this virtual machine is
1038 running in.
1039 \item[memory] Memory size in megabytes.
1040 \item[vcpus] The number of virtual CPUs this domain has.
1041 \item[state] Domain state consists of 5 fields:
1042 \begin{description}
1043 \item[r] running
1044 \item[b] blocked
1045 \item[p] paused
1046 \item[s] shutdown
1047 \item[c] crashed
1048 \end{description}
1049 \item[cputime] How much CPU time (in seconds) the domain has used so
1050 far.
1051 \end{description}
1052 \end{quote}
1054 The \path{xm list} command also supports a long output format when the
1055 \path{-l} switch is used. This outputs the full details of the
1056 running domains in \xend's SXP configuration format.
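For example (the domains shown and the exact column layout are purely
illustrative; real output varies by version):
\begin{quote} {\small \begin{verbatim}
# xm list
name       domid  memory  vcpus  state  cputime
Domain-0       0     251      1  r----     72.4
ExampleVM      1      64      1  -b---      5.1
\end{verbatim}}
\end{quote}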
1059 You can get access to the console of a particular domain using
1060 the \verb_# xm console_ command (e.g.\ \verb_# xm console myVM_).
1064 %% Chapter Domain Configuration
1065 \chapter{Domain Configuration}
1066 \label{cha:config}
1068 The following contains the syntax of the domain configuration files
1069 and description of how to further specify networking, driver domain
1070 and general scheduling behavior.
1073 \section{Configuration Files}
1074 \label{s:cfiles}
1076 Xen configuration files contain the following standard variables.
1077 Unless otherwise stated, configuration items should be enclosed in
1078 quotes: see the configuration scripts in \path{/etc/xen/}
1079 for concrete examples.
1081 \begin{description}
1082 \item[kernel] Path to the kernel image.
1083 \item[ramdisk] Path to a ramdisk image (optional).
1084 % \item[builder] The name of the domain build function (e.g.
1085 % {\tt'linux'} or {\tt'netbsd'}.
1086 \item[memory] Memory size in megabytes.
1087 \item[vcpus] The number of virtual CPUs.
1088 \item[console] Port to export the domain console on (default 9600 +
1089 domain ID).
1090 \item[vif] Network interface configuration. This may simply contain
1091 an empty string for each desired interface, or may override various
1092 settings, e.g.\
1093 \begin{verbatim}
1094 vif = [ 'mac=00:16:3E:00:00:11, bridge=xen-br0',
1095 'bridge=xen-br1' ]
1096 \end{verbatim}
1097 to assign a MAC address and bridge to the first interface and assign
1098 a different bridge to the second interface, leaving \xend\ to choose
1099 the MAC address. The settings that may be overridden in this way are
1100 type, mac, bridge, ip, script, backend, and vifname.
1101 \item[disk] List of block devices to export to the domain e.g.
1102 \verb_disk = [ 'phy:hda1,sda1,r' ]_
1103 exports physical device \path{/dev/hda1} to the domain as
1104 \path{/dev/sda1} with read-only access. Exporting a disk read-write
1105 which is currently mounted is dangerous -- if you are \emph{certain}
1106 you wish to do this, you can specify \path{w!} as the mode.
1107 \item[dhcp] Set to {\tt `dhcp'} if you want to use DHCP to configure
1108 networking.
1109 \item[netmask] Manually configured IP netmask.
1110 \item[gateway] Manually configured IP gateway.
1111 \item[hostname] Set the hostname for the virtual machine.
1112 \item[root] Specify the root device parameter on the kernel command
1113 line.
1114 \item[nfs\_server] IP address for the NFS server (if any).
1115 \item[nfs\_root] Path of the root filesystem on the NFS server (if
1116 any).
1117 \item[extra] Extra string to append to the kernel command line (if
1118 any)
1119 \end{description}
1121 Additional fields are documented in the example configuration files
1122 (e.g. to configure virtual TPM functionality).
1124 For additional flexibility, it is also possible to include Python
1125 scripting commands in configuration files. An example of this is the
1126 \path{xmexample2} file, which uses Python code to handle the
1127 \path{vmid} variable.
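As a sketch of this technique (not the actual \path{xmexample2}
contents), a configuration file might compute per-VM settings from the
\path{vmid} variable like so:
\begin{quote} {\small \begin{verbatim}
# Illustrative only: derive per-VM settings from the vmid variable
vmid = int(vmid)
name = "VM%d" % vmid
disk = ['phy:sda%d,sda1,w' % (7 + vmid)]
\end{verbatim}}
\end{quote}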
1130 %\part{Advanced Topics}
1133 \section{Network Configuration}
1135 For many users, the default installation should work ``out of the
1136 box''. More complicated network setups, for instance with multiple
1137 Ethernet interfaces and/or existing bridging setups, will require some
1138 special configuration.
1140 The purpose of this section is to describe the mechanisms provided by
1141 \xend\ to allow a flexible configuration for Xen's virtual networking.
1143 \subsection{Xen virtual network topology}
1145 Each domain network interface is connected to a virtual network
1146 interface in dom0 by a point to point link (effectively a ``virtual
1147 crossover cable''). These devices are named {\tt
1148 vif$<$domid$>$.$<$vifid$>$} (e.g.\ {\tt vif1.0} for the first
1149 interface in domain~1, {\tt vif3.1} for the second interface in
1150 domain~3).
1152 Traffic on these virtual interfaces is handled in domain~0 using
1153 standard Linux mechanisms for bridging, routing, rate limiting, etc.
1154 Xend calls on two shell scripts to perform initial configuration of
1155 the network and configuration of new virtual interfaces. By default,
1156 these scripts configure a single bridge for all the virtual
1157 interfaces. Arbitrary routing / bridging configurations can be
1158 configured by customizing the scripts, as described in the following
1159 section.
1161 \subsection{Xen networking scripts}
1163 Xen's virtual networking is configured by two shell scripts (by
1164 default \path{network-bridge} and \path{vif-bridge}). These are called
1165 automatically by \xend\ when certain events occur, with arguments to
1166 the scripts providing further contextual information. These scripts
1167 are found by default in \path{/etc/xen/scripts}. The names and
1168 locations of the scripts can be configured in
1169 \path{/etc/xen/xend-config.sxp}.
1171 \begin{description}
1172 \item[network-bridge:] This script is called whenever \xend\ is started or
1173 stopped to respectively initialize or tear down the Xen virtual
1174 network. In the default configuration initialization creates the
1175 bridge `xen-br0' and moves eth0 onto that bridge, modifying the
1176 routing accordingly. When \xend\ exits, it deletes the Xen bridge
1177 and removes eth0, restoring the normal IP and routing configuration.
1179 %% In configurations where the bridge already exists, this script
1180 %% could be replaced with a link to \path{/bin/true} (for instance).
1182 \item[vif-bridge:] This script is called for every domain virtual
1183 interface and can configure firewalling rules and add the vif to the
1184 appropriate bridge. By default, this adds and removes VIFs on the
1185 default Xen bridge.
1186 \end{description}
1188 Other example scripts are available (\path{network-route} and
1189 \path{vif-route}, \path{network-nat} and \path{vif-nat}).
1190 For more complex network setups (e.g.\ where routing is required or
1191 integration with existing bridges is needed) these scripts may be replaced with
1192 customized variants for your site's preferred configuration.
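For example, to switch from the default bridging scripts to the
routing variants, one would change the script entries in
\path{/etc/xen/xend-config.sxp} to something like the following
(assuming the parameter names used in the example configuration file):
\begin{quote} {\small \begin{verbatim}
(network-script network-route)
(vif-script    vif-route)
\end{verbatim}}
\end{quote}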
1194 %% There are two possible types of privileges: IO privileges and
1195 %% administration privileges.
1200 % Chapter Storage and FileSytem Management
1201 \chapter{Storage and File System Management}
1203 Storage can be made available to virtual machines in a number of
1204 different ways. This chapter covers some possible configurations.
1206 The most straightforward method is to export a physical block device (a
1207 hard drive or partition) from dom0 directly to the guest domain as a
1208 virtual block device (VBD).
1210 Storage may also be exported from a filesystem image or a partitioned
1211 filesystem image as a \emph{file-backed VBD}.
1213 Finally, standard network storage protocols such as NBD, iSCSI, NFS,
1214 etc., can be used to provide storage to virtual machines.
1217 \section{Exporting Physical Devices as VBDs}
1218 \label{s:exporting-physical-devices-as-vbds}
1220 One of the simplest configurations is to directly export individual
1221 partitions from domain~0 to other domains. To achieve this use the
1222 \path{phy:} specifier in your domain configuration file. For example a
1223 line like
1224 \begin{quote}
1225 \verb_disk = ['phy:hda3,sda1,w']_
1226 \end{quote}
1227 specifies that the partition \path{/dev/hda3} in domain~0 should be
1228 exported read-write to the new domain as \path{/dev/sda1}; one could
1229 equally well export it as \path{/dev/hda} or \path{/dev/sdb5} should
1230 one wish.
1232 In addition to local disks and partitions, it is possible to export
1233 any device that Linux considers to be ``a disk'' in the same manner.
1234 For example, if you have iSCSI disks or GNBD volumes imported into
1235 domain~0 you can export these to other domains using the \path{phy:}
1236 disk syntax. E.g.:
1237 \begin{quote}
1238 \verb_disk = ['phy:vg/lvm1,sda2,w']_
1239 \end{quote}
1241 \begin{center}
1242 \framebox{\bf Warning: Block device sharing}
1243 \end{center}
1244 \begin{quote}
1245 Block devices should typically only be shared between domains in a
1246 read-only fashion otherwise the Linux kernel's file systems will get
1247 very confused as the file system structure may change underneath
1248 them (having the same ext3 partition mounted \path{rw} twice is a
1249 sure fire way to cause irreparable damage)! \Xend\ will attempt to
1250 prevent you from doing this by checking that the device is not
1251 mounted read-write in domain~0, and hasn't already been exported
1252 read-write to another domain. If you want read-write sharing,
1253 export the directory to other domains via NFS from domain~0 (or use
1254 a cluster file system such as GFS or ocfs2).
1255 \end{quote}
1258 \section{Using File-backed VBDs}
1260 It is also possible to use a file in Domain~0 as the primary storage
1261 for a virtual machine. As well as being convenient, this also has the
1262 advantage that the virtual block device will be \emph{sparse} ---
1263 space will only really be allocated as parts of the file are used. So
1264 if a virtual machine uses only half of its disk space then the file
1265 really takes up half of the size allocated.
1267 For example, to create a 2GB sparse file-backed virtual block device
1268 (actually only consumes 1KB of disk):
1269 \begin{quote}
1270 \verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
1271 \end{quote}
1273 Make a file system in the disk file:
1274 \begin{quote}
1275 \verb_# mkfs -t ext3 vm1disk_
1276 \end{quote}
1278 (when the tool asks for confirmation, answer `y')
1280 Populate the file system e.g.\ by copying from the current root:
1281 \begin{quote}
1282 \begin{verbatim}
1283 # mount -o loop vm1disk /mnt
1284 # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
1285 # mkdir /mnt/{proc,sys,home,tmp}
1286 \end{verbatim}
1287 \end{quote}
1289 Tailor the file system by editing \path{/etc/fstab},
1290 \path{/etc/hostname}, etc.\ Don't forget to edit the files in the
1291 mounted file system, instead of your domain~0 filesystem, e.g.\ you
1292 would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}. For
1293 this example, set the root device to \path{/dev/sda1} in fstab.
1295 Now unmount (this is important!):
1296 \begin{quote}
1297 \verb_# umount /mnt_
1298 \end{quote}
1300 In the configuration file set:
1301 \begin{quote}
1302 \verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
1303 \end{quote}
1305 As the virtual machine writes to its `disk', the sparse file will be
1306 filled in and consume more space up to the original 2GB.
1308 {\bf Note that file-backed VBDs may not be appropriate for backing
1309 I/O-intensive domains.} File-backed VBDs are known to experience
1310 substantial slowdowns under heavy I/O workloads, due to the I/O
1311 handling by the loopback block device used to support file-backed VBDs
1312 in dom0. Better I/O performance can be achieved by using either
1313 LVM-backed VBDs (Section~\ref{s:using-lvm-backed-vbds}) or physical
1314 devices as VBDs (Section~\ref{s:exporting-physical-devices-as-vbds}).
1316 Linux supports a maximum of eight file-backed VBDs across all domains
1317 by default. This limit can be statically increased by using the
1318 \emph{max\_loop} module parameter if CONFIG\_BLK\_DEV\_LOOP is
1319 compiled as a module in the dom0 kernel, or by using the
1320 \emph{max\_loop=n} boot option if CONFIG\_BLK\_DEV\_LOOP is compiled
1321 directly into the dom0 kernel.
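For example, if the loop driver is built as a module, the limit can be
raised when loading it:
\begin{quote}
\begin{verbatim}
# modprobe loop max_loop=64
\end{verbatim}
\end{quote}
If it is built into the kernel, append \path{max_loop=64} to the dom0
module line in your GRUB configuration instead.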
1324 \section{Using LVM-backed VBDs}
1325 \label{s:using-lvm-backed-vbds}
1327 A particularly appealing solution is to use LVM volumes as backing for
1328 domain file-systems since this allows dynamic growing/shrinking of
1329 volumes as well as snapshot and other features.
1331 To initialize a partition to support LVM volumes:
1332 \begin{quote}
1333 \begin{verbatim}
1334 # pvcreate /dev/sda10
1335 \end{verbatim}
1336 \end{quote}
1338 Create a volume group named `vg' on the physical partition:
1339 \begin{quote}
1340 \begin{verbatim}
1341 # vgcreate vg /dev/sda10
1342 \end{verbatim}
1343 \end{quote}
1345 Create a logical volume of size 4GB named `myvmdisk1':
1346 \begin{quote}
1347 \begin{verbatim}
1348 # lvcreate -L4096M -n myvmdisk1 vg
1349 \end{verbatim}
1350 \end{quote}
1352 You should now see that you have a \path{/dev/vg/myvmdisk1} device. Make a
1353 filesystem, mount it and populate it, e.g.:
1354 \begin{quote}
1355 \begin{verbatim}
1356 # mkfs -t ext3 /dev/vg/myvmdisk1
1357 # mount /dev/vg/myvmdisk1 /mnt
1358 # cp -ax / /mnt
1359 # umount /mnt
1360 \end{verbatim}
1361 \end{quote}
1363 Now configure your VM with the following disk configuration:
1364 \begin{quote}
1365 \begin{verbatim}
1366 disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
1367 \end{verbatim}
1368 \end{quote}
1370 LVM enables you to grow the size of logical volumes, but you'll need
1371 to resize the corresponding file system to make use of the new space.
1372 Some file systems (e.g.\ ext3) now support online resize. See the LVM
1373 manuals for more details.
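For example, to add 1GB to the volume and then grow an ext3 file
system on it (a sketch; depending on your kernel and \path{resize2fs}
version, the file system may need to be unmounted first):
\begin{quote}
\begin{verbatim}
# lvextend -L+1024M /dev/vg/myvmdisk1
# resize2fs /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}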
1375 You can also use LVM for creating copy-on-write (CoW) clones of LVM
1376 volumes (known as writable persistent snapshots in LVM terminology).
1377 This facility is new in Linux 2.6.8, so isn't as stable as one might
1378 hope. In particular, using lots of CoW LVM disks consumes a lot of
1379 dom0 memory, and error conditions such as running out of disk space
1380 are not handled well. Hopefully this will improve in future.
1382 To create two copy-on-write clones of the above file system you would
1383 use the following commands:
1385 \begin{quote}
1386 \begin{verbatim}
1387 # lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
1388 # lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
1389 \end{verbatim}
1390 \end{quote}
1392 Each of these can grow to have 1GB of differences from the master
1393 volume. You can grow the amount of space for storing the differences
1394 using the lvextend command, e.g.:
1395 \begin{quote}
1396 \begin{verbatim}
1397 # lvextend -L+100M /dev/vg/myclonedisk1
1398 \end{verbatim}
1399 \end{quote}
1401 Don't ever let the `differences volume' fill up, or LVM will get
1402 rather confused. It may be possible to automate the growing process by
1403 using \path{dmsetup wait} to spot the volume getting full and then
1404 issuing an \path{lvextend}.
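A rough sketch of such automation, using simple polling of LVM2's
\path{lvs} snapshot-usage field rather than \path{dmsetup wait} (the
field name and output format are assumptions; check your LVM version):
\begin{quote}
\begin{verbatim}
#!/bin/sh
# Sketch only: extend a CoW snapshot volume before it fills up.
VOL=/dev/vg/myclonedisk1
while sleep 30; do
    # Integer part of the snapshot usage percentage.
    PCT=$(lvs --noheadings -o snap_percent "$VOL" \
          | tr -d ' ' | cut -d. -f1)
    # Extend by 100MB whenever usage reaches 80%.
    [ -n "$PCT" ] && [ "$PCT" -ge 80 ] && lvextend -L+100M "$VOL"
done
\end{verbatim}
\end{quote}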
1406 In principle, it is possible to continue writing to the volume that
1407 has been cloned (the changes will not be visible to the clones), but
1408 we wouldn't recommend this: have the cloned volume as a `pristine'
1409 file system install that isn't mounted directly by any of the virtual
1410 machines.
1413 \section{Using NFS Root}
1415 First, populate a root filesystem in a directory on the server
1416 machine. This can be on a distinct physical machine, or simply run
1417 within a virtual machine on the same node.
1419 Now configure the NFS server to export this filesystem over the
1420 network by adding a line to \path{/etc/exports}, for instance:
1422 \begin{quote}
1423 \begin{small}
1424 \begin{verbatim}
1425 /export/vm1root 1.2.3.0/24(rw,sync,no_root_squash)
1426 \end{verbatim}
1427 \end{small}
1428 \end{quote}
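After editing \path{/etc/exports}, ask the NFS server to re-read it,
e.g.:
\begin{quote}
\begin{verbatim}
# exportfs -ra
\end{verbatim}
\end{quote}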
1430 Finally, configure the domain to use NFS root. In addition to the
1431 normal variables, you should make sure to set the following values in
1432 the domain's configuration file:
1434 \begin{quote}
1435 \begin{small}
1436 \begin{verbatim}
1437 root = '/dev/nfs'
1438 nfs_server = '2.3.4.5' # substitute IP address of server
1439 nfs_root = '/path/to/root' # path to root FS on the server
1440 \end{verbatim}
1441 \end{small}
1442 \end{quote}
1444 The domain will need network access at boot time, so either statically
1445 configure an IP address using the config variables \path{ip},
1446 \path{netmask}, \path{gateway}, \path{hostname}; or enable DHCP
1447 (\path{dhcp='dhcp'}).
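For example, a static configuration might look like the following (all
addresses shown are purely illustrative):
\begin{quote}
\begin{small}
\begin{verbatim}
ip       = '2.3.4.10'
netmask  = '255.255.255.0'
gateway  = '2.3.4.1'
hostname = 'vm1'
\end{verbatim}
\end{small}
\end{quote}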
1449 Note that the Linux NFS root implementation is known to have stability
1450 problems under high load (this is not a Xen-specific problem), so this
1451 configuration may not be appropriate for critical servers.
1454 \chapter{CPU Management}
1456 %% KMS Something sage about CPU / processor management.
1458 Xen allows a domain's virtual CPU(s) to be associated with one or more
1459 host CPUs. This can be used to allocate real resources among one or
1460 more guests, or to make optimal use of processor resources when
1461 utilizing dual-core, hyperthreading, or other advanced CPU technologies.
1463 Xen enumerates physical CPUs in a `depth first' fashion. For a system
1464 with both hyperthreading and multiple cores, this would be all the
1465 hyperthreads on a given core, then all the cores on a given socket,
1466 and then all sockets. For example, on a two-socket, dual-core,
1467 hyperthreaded Xeon the CPU order would be:
1470 \begin{center}
1471 \begin{tabular}{l|l|l|l|l|l|l|r}
1472 \multicolumn{4}{c|}{socket0} & \multicolumn{4}{c}{socket1} \\ \hline
1473 \multicolumn{2}{c|}{core0} & \multicolumn{2}{c|}{core1} &
1474 \multicolumn{2}{c|}{core0} & \multicolumn{2}{c}{core1} \\ \hline
1475 ht0 & ht1 & ht0 & ht1 & ht0 & ht1 & ht0 & ht1 \\
1476 \#0 & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 \\
1477 \end{tabular}
1478 \end{center}
1481 Having multiple vcpus belonging to the same domain mapped to the same
1482 physical CPU is very likely to lead to poor performance. It's better to
1483 use `vcpus-set' to hot-unplug one of the vcpus and ensure the others are
1484 pinned on different CPUs.
1486 If you are running I/O-intensive tasks, it's typically better to dedicate
1487 either a hyperthread or a whole core to running domain 0, and hence pin
1488 other domains so that they can't use CPU 0. If your workload is mostly
1489 compute intensive, you may want to pin vcpus such that all physical CPU
1490 threads are available for guest domains.
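For example, assuming your version of \path{xm} provides the
\path{vcpu-pin} subcommand, domain~0 could be given CPU 0 to itself and
a guest's two VCPUs pinned to other CPUs as follows:
\begin{quote}
\begin{verbatim}
# xm vcpu-pin Domain-0 0 0
# xm vcpu-pin mydomain 0 1
# xm vcpu-pin mydomain 1 2
\end{verbatim}
\end{quote}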
1492 \chapter{Migrating Domains}
1494 \section{Domain Save and Restore}
1496 The administrator of a Xen system may suspend a virtual machine's
1497 current state into a disk file in domain~0, allowing it to be resumed at
1498 a later time.
1500 For example you can suspend a domain called ``VM1'' to disk using the
1501 command:
1502 \begin{verbatim}
1503 # xm save VM1 VM1.chk
1504 \end{verbatim}
1506 This will stop the domain named ``VM1'' and save its current state
1507 into a file called \path{VM1.chk}.
1509 To resume execution of this domain, use the \path{xm restore} command:
1510 \begin{verbatim}
1511 # xm restore VM1.chk
1512 \end{verbatim}
1514 This will restore the state of the domain and resume its execution.
1515 The domain will carry on as before and the console may be reconnected
1516 using the \path{xm console} command, as described earlier.
1518 \section{Migration and Live Migration}
1520 Migration is used to transfer a domain between physical hosts. There
1521 are two varieties: regular and live migration. The former moves a
1522 virtual machine from one host to another by pausing it, copying its
1523 memory contents, and then resuming it on the destination. The latter
1524 performs the same logical functionality but without needing to pause
1525 the domain for the duration. In general when performing live migration
1526 the domain continues its usual activities and---from the user's
1527 perspective---the migration should be imperceptible.
1529 To perform a live migration, both hosts must be running Xen / \xend\ and
1530 the destination host must have sufficient resources (e.g.\ memory
1531 capacity) to accommodate the domain after the move. Furthermore we
1532 currently require both source and destination machines to be on the same
1533 L2 subnet.
1535 Currently, there is no support for providing automatic remote access
1536 to filesystems stored on local disk when a domain is migrated.
1537 Administrators should choose an appropriate storage solution (i.e.\
1538 SAN, NAS, etc.) to ensure that domain filesystems are also available
1539 on their destination node. GNBD is a good method for exporting a
1540 volume from one machine to another. iSCSI can do a similar job, but is
1541 more complex to set up.
1543 When a domain migrates, its MAC and IP addresses move with it, so it is
1544 only possible to migrate VMs within the same layer-2 network and IP
1545 subnet. If the destination node is on a different subnet, the
1546 administrator would need to manually configure a suitable etherip or IP
1547 tunnel in the domain~0 of the remote node.
1549 A domain may be migrated using the \path{xm migrate} command. To live
1550 migrate a domain to another machine, we would use the command:
1552 \begin{verbatim}
1553 # xm migrate --live mydomain destination.ournetwork.com
1554 \end{verbatim}
1556 Without the \path{--live} flag, \xend\ simply stops the domain and
1557 copies the memory image over to the new node and restarts it. Since
1558 domains can have large allocations this can be quite time consuming,
1559 even on a Gigabit network. With the \path{--live} flag \xend\ attempts
1560 to keep the domain running while the migration is in progress, resulting
1561 in typical down times of just 60--300ms.
1563 For now it will be necessary to reconnect to the domain's console on the
1564 new machine using the \path{xm console} command. If a migrated domain
1565 has any open network connections then they will be preserved, so SSH
1566 connections do not have this limitation.
1569 %% Chapter Securing Xen
1570 \chapter{Securing Xen}
1572 This chapter describes how to secure a Xen system. It describes a number
1573 of scenarios and provides a corresponding set of best practices. It
1574 begins with a section devoted to understanding the security implications
1575 of a Xen system.
1578 \section{Xen Security Considerations}
1580 When deploying a Xen system, one must be sure to secure the management
1581 domain (Domain-0) as much as possible. If the management domain is
1582 compromised, all other domains are also vulnerable. The following are a
1583 set of best practices for Domain-0:
1585 \begin{enumerate}
1586 \item \textbf{Run the smallest number of necessary services.} The fewer
1587 things present in a management partition, the better.
1588 Remember, a service running as root in the management domain has full
1589 access to all other domains on the system.
1590 \item \textbf{Use a firewall to restrict the traffic to the management
1591 domain.} A firewall with default-reject rules will help prevent
1592 attacks on the management domain.
1593 \item \textbf{Do not allow users to access Domain-0.} The Linux kernel
1594 has been known to have local-user root exploits. If you allow normal
1595 users to access Domain-0 (even as unprivileged users) you run the risk
1596 of a kernel exploit making all of your domains vulnerable.
1597 \end{enumerate}
1599 \section{Security Scenarios}
1602 \subsection{The Isolated Management Network}
1604 In this scenario, each node in the cluster has two network cards. One
1605 network card is connected to the outside world and the other is
1606 connected to a physically isolated management network specifically for
1607 Xen instances to use.
1609 As long as all of the management partitions are trusted equally, this is
1610 the most secure scenario. No additional configuration is needed other
1611 than forcing Xend to bind to the management interface for relocation.
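For example, the relocation settings in
\path{/etc/xen/xend-config.sxp} might look like the following (option
names follow the sample configuration file shipped with \xend; the
address-binding option may not be present in all versions):
\begin{quote}
\begin{verbatim}
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '10.0.0.1')
\end{verbatim}
\end{quote}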
1614 \subsection{A Subnet Behind a Firewall}
1616 In this scenario, each node has only one network card but the entire
1617 cluster sits behind a firewall. This firewall should do at least the
1618 following:
1620 \begin{enumerate}
1621 \item Prevent IP spoofing from outside of the subnet.
1622 \item Prevent access to the relocation port of any of the nodes in the
1623 cluster except from within the cluster.
1624 \end{enumerate}
1626 The following iptables rules can be used on each node to prevent
1627 migrations to that node from outside the subnet, assuming the main
1628 firewall does not do this for you:
1630 \begin{verbatim}
1631 # this command disables all access to the Xen relocation
1632 # port:
1633 iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
1635 # this command enables Xen relocations only from the specific
1636 # subnet:
1637 iptables -I INPUT -p tcp --source 192.168.1.0/24 \
1638 --destination-port 8002 -j ACCEPT
1639 \end{verbatim}
1641 \subsection{Nodes on an Untrusted Subnet}
1643 Migration on an untrusted subnet is not safe in current versions of Xen.
1644 It may be possible to perform migrations through a secure tunnel via a
1645 VPN or SSH. The only safe option in the absence of a secure tunnel is to
1646 disable migration completely. The easiest way to do this is with
1647 iptables:
1649 \begin{verbatim}
1650 # this command disables all access to the Xen relocation port
1651 iptables -A INPUT -p tcp --destination-port 8002 -j REJECT
1652 \end{verbatim}
1654 \part{Reference}
1656 %% Chapter Build and Boot Options
1657 \chapter{Build and Boot Options}
1659 This chapter describes the build- and boot-time options which may be
1660 used to tailor your Xen system.
1662 \section{Top-level Configuration Options}
1664 Top-level configuration is achieved by editing one of two
1665 files: \path{Config.mk} and \path{Makefile}.
1667 The former allows the overall build target architecture to be
1668 specified. You will typically not need to modify this unless
1669 you are cross-compiling or if you wish to build a PAE-enabled
1670 Xen system. Additional configuration options are documented
1671 in the \path{Config.mk} file.
1673 The top-level \path{Makefile} is chiefly used to customize the set of
1674 kernels built. Look for the line:
1675 \begin{quote}
1676 \begin{verbatim}
1677 KERNELS ?= linux-2.6-xen0 linux-2.6-xenU
1678 \end{verbatim}
1679 \end{quote}
1681 Allowable options here are any kernels which have a corresponding
1682 build configuration file in the \path{buildconfigs/} directory.
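The setting can also be overridden on make's command line, for example:
\begin{quote}
\begin{verbatim}
# make KERNELS=linux-2.6-xen0 world
\end{verbatim}
\end{quote}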
1686 \section{Xen Build Options}
1688 Xen provides a number of build-time options which should be set as
1689 environment variables or passed on make's command-line.
1691 \begin{description}
1692 \item[verbose=y] Enable debugging messages when Xen detects an
1693 unexpected condition. Also enables console output from all domains.
1694 \item[debug=y] Enable debug assertions. Implies {\bf verbose=y}.
1695 (Primarily useful for tracing bugs in Xen).
1696 \item[debugger=y] Enable the in-Xen debugger. This can be used to
1697 debug Xen, guest OSes, and applications.
1698 \item[perfc=y] Enable performance counters for significant events
1699 within Xen. The counts can be reset or displayed on Xen's console
1700 via console control keys.
1701 \end{description}
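For example, a debugging build of the hypervisor alone might be
produced with:
\begin{quote}
\begin{verbatim}
# make debug=y xen
\end{verbatim}
\end{quote}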
1704 \section{Xen Boot Options}
1705 \label{s:xboot}
1707 These options are used to configure Xen's behaviour at runtime. They
1708 should be appended to Xen's command line, either manually or by
1709 editing \path{grub.conf}.
1711 \begin{description}
1712 \item [ noreboot ] Don't reboot the machine automatically on errors.
1713 This is useful to catch debug output if you aren't catching console
1714 messages via the serial line.
1715 \item [ nosmp ] Disable SMP support. This option is implied by
1716 `ignorebiostables'.
1717 \item [ watchdog ] Enable NMI watchdog which can report certain
1718 failures.
1719 \item [ noirqbalance ] Disable software IRQ balancing and affinity.
1720 This can be used on systems such as Dell 1850/2850 that have
1721 workarounds in hardware for IRQ-routing issues.
1722 \item [ badpage=$<$page number$>$,$<$page number$>$, \ldots ] Specify
1723 a list of pages not to be allocated for use because they contain bad
1724 bytes. For example, if your memory tester says that byte 0x12345678
1725 is bad, you would place `badpage=0x12345' on Xen's command line.
1726 \item [ com1=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$
1727 com2=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$ ] \mbox{}\\
1728 Xen supports up to two 16550-compatible serial ports. For example:
1729 `com1=9600, 8n1, 0x408, 5' maps COM1 to a 9600-baud port, 8 data
1730 bits, no parity, 1 stop bit, I/O port base 0x408, IRQ 5. If some
1731 configuration options are standard (e.g., I/O base and IRQ), then
1732 only a prefix of the full configuration string need be specified. If
1733 the baud rate is pre-configured (e.g., by the bootloader) then you
1734 can specify `auto' in place of a numeric baud rate.
1735 \item [ console=$<$specifier list$>$ ] Specify the destination for Xen
1736 console I/O. This is a comma-separated list of, for example:
1737 \begin{description}
1738 \item[ vga ] Use VGA console and allow keyboard input.
1739 \item[ com1 ] Use serial port com1.
1740 \item[ com2H ] Use serial port com2. Transmitted chars will have the
1741 MSB set. Received chars must have MSB set.
1742 \item[ com2L] Use serial port com2. Transmitted chars will have the
1743 MSB cleared. Received chars must have MSB cleared.
1744 \end{description}
1745 The latter two examples allow a single port to be shared by two
1746 subsystems (e.g.\ console and debugger). Sharing is controlled by
1747 MSB of each transmitted/received character. [NB. Default for this
1748 option is `com1,vga']
1749 \item [ sync\_console ] Force synchronous console output. This is
1750 useful if your system fails unexpectedly before it has sent all
1751 available output to the console. In most cases Xen will
1752 automatically enter synchronous mode when an exceptional event
1753 occurs, but this option provides a manual fallback.
1754 \item [ conswitch=$<$switch-char$><$auto-switch-char$>$ ] Specify how
1755 to switch serial-console input between Xen and DOM0. The required
1756 sequence is CTRL-$<$switch-char$>$ pressed three times. Specifying
1757 the backtick character disables switching. The
1758 $<$auto-switch-char$>$ specifies whether Xen should auto-switch
1759 input to DOM0 when it boots --- if it is `x' then auto-switching is
1760 disabled. Any other value, or omitting the character, enables
1761 auto-switching. [NB. Default switch-char is `a'.]
1762 \item [ nmi=xxx ]
1763 Specify what to do with an NMI parity or I/O error. \\
1764 `nmi=fatal': Xen prints a diagnostic and then hangs. \\
1765 `nmi=dom0': Inform DOM0 of the NMI. \\
1766 `nmi=ignore': Ignore the NMI.
1767 \item [ mem=xxx ] Set the physical RAM address limit. Any RAM
1768 appearing beyond this physical address in the memory map will be
1769 ignored. This parameter may be specified with a B, K, M or G suffix,
1770 representing bytes, kilobytes, megabytes and gigabytes respectively.
1771 The default unit, if no suffix is specified, is kilobytes.
1772 \item [ dom0\_mem=xxx ] Set the amount of memory to be allocated to
1773 domain0. In Xen 3.x the parameter may be specified with a B, K, M or
1774 G suffix, representing bytes, kilobytes, megabytes and gigabytes
1775 respectively; if no suffix is specified, the parameter defaults to
1776 kilobytes. In previous versions of Xen, suffixes were not supported
1777 and the value is always interpreted as kilobytes.
1778 \item [ tbuf\_size=xxx ] Set the size of the per-cpu trace buffers, in
1779 pages (default 1). Note that the trace buffers are only enabled in
1780 debug builds. Most users can ignore this feature completely.
1781 \item [ sched=xxx ] Select the CPU scheduler Xen should use. The
1782 current possibilities are `sedf' (default) and `bvt'.
1783 \item [ apic\_verbosity=debug,verbose ] Print more detailed
1784 information about local APIC and IOAPIC configuration.
1785 \item [ lapic ] Force use of local APIC even when left disabled by
1786 uniprocessor BIOS.
1787 \item [ nolapic ] Ignore local APIC in a uniprocessor system, even if
1788 enabled by the BIOS.
1789 \item [ apic=bigsmp,default,es7000,summit ] Specify NUMA platform.
1790 This can usually be probed automatically.
1791 \end{description}
1793 In addition, the following options may be specified on the Xen command
1794 line. Since domain 0 shares responsibility for booting the platform,
1795 Xen will automatically propagate these options to domain~0's command line.
1796 These options are taken from Linux's command-line syntax with
1797 unchanged semantics.
1799 \begin{description}
1800 \item [ acpi=off,force,strict,ht,noirq,\ldots ] Modify how Xen (and
1801 domain 0) parses the BIOS ACPI tables.
1802 \item [ acpi\_skip\_timer\_override ] Instruct Xen (and domain~0) to
1803 ignore timer-interrupt override instructions specified by the BIOS
1804 ACPI tables.
1805 \item [ noapic ] Instruct Xen (and domain~0) to ignore any IOAPICs
1806 that are present in the system, and instead continue to use the
1807 legacy PIC.
1808 \end{description}
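As an illustration, a \path{grub.conf} entry combining several of the
above options might look like the following (the kernel file names and
root device will vary between installations):
\begin{quote}
\begin{verbatim}
title Xen 3.0 / XenLinux 2.6
  kernel /boot/xen.gz dom0_mem=262144 com1=115200,8n1 sync_console
  module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro console=tty0
\end{verbatim}
\end{quote}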
1811 \section{XenLinux Boot Options}
1813 In addition to the standard Linux kernel boot options, we support:
1814 \begin{description}
1815 \item[ xencons=xxx ] Specify the device node to which the Xen virtual
1816 console driver is attached. The following options are supported:
1817 \begin{center}
1818 \begin{tabular}{l}
1819 `xencons=off': disable virtual console \\
1820 `xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
1821 `xencons=ttyS': attach console to /dev/ttyS0
1822 \end{tabular}
1823 \end{center}
1824 The default is ttyS for dom0 and tty for all other domains.
1825 \end{description}
1828 %% Chapter Further Support
1829 \chapter{Further Support}
1831 If you have questions that are not answered by this manual, the
1832 sources of information listed below may be of interest to you. Note
1833 that bug reports, suggestions and contributions related to the
1834 software (or the documentation) should be sent to the Xen developers'
1835 mailing list (address below).
1838 \section{Other Documentation}
1840 For developers interested in porting operating systems to Xen, the
1841 \emph{Xen Interface Manual} is distributed in the \path{docs/}
1842 directory of the Xen source distribution.
1845 \section{Online References}
1847 The official Xen web site can be found at:
1848 \begin{quote} {\tt http://www.xensource.com}
1849 \end{quote}
1851 This contains links to the latest versions of all online
1852 documentation, including the latest version of the FAQ.
1854 Information regarding Xen is also available at the Xen Wiki at
1855 \begin{quote} {\tt http://wiki.xensource.com/xenwiki/}\end{quote}
1856 The Xen project uses Bugzilla as its bug tracking system. You'll find
1857 the Xen Bugzilla at {\tt http://bugzilla.xensource.com/bugzilla/}.
1860 \section{Mailing Lists}
1862 There are several mailing lists that are used to discuss Xen related
1863 topics. The most widely relevant are listed below. An official page of
1864 mailing lists and subscription information can be found at \begin{quote}
1865 {\tt http://lists.xensource.com/} \end{quote}
1867 \begin{description}
1868 \item[xen-devel@lists.xensource.com] Used for development
1869 discussions and bug reports. Subscribe at: \\
1870 {\small {\tt http://lists.xensource.com/xen-devel}}
1871 \item[xen-users@lists.xensource.com] Used for installation and usage
1872 discussions and requests for help. Subscribe at: \\
1873 {\small {\tt http://lists.xensource.com/xen-users}}
1874 \item[xen-announce@lists.xensource.com] Used for announcements only.
1875 Subscribe at: \\
1876 {\small {\tt http://lists.xensource.com/xen-announce}}
1877 \item[xen-changelog@lists.xensource.com] Changelog feed
1878 from the unstable and 2.0 trees - developer oriented. Subscribe at: \\
1879 {\small {\tt http://lists.xensource.com/xen-changelog}}
1880 \end{description}
1884 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1886 \appendix
1888 \chapter{Unmodified (VMX) guest domains in Xen with Intel\textregistered Virtualization Technology (VT)}
1890 Xen supports guest domains running unmodified guest operating systems using the Virtualization Technology (VT) available on recent Intel processors. More information about the Intel Virtualization Technology implementing Virtual Machine Extensions (VMX) in the processor is available on the Intel website at \\
1891 {\small {\tt http://www.intel.com/technology/computing/vptech}}
1893 \section{Building Xen with VT support}
1895 The following packages need to be installed in order to build Xen with VT support. Some Linux distributions do not provide these packages by default.
1897 \begin{tabular}{lp{11.0cm}}
1898 {\bfseries Package} & {\bfseries Description} \\
1900 dev86 & The dev86 package provides an assembler and linker for real mode 80x86 instructions. You need to have this package installed in order to build the BIOS code which runs in (virtual) real mode.
1902 If the dev86 package is not available on the x86\_64 distribution, you can install the i386 version of it. The dev86 rpm package for various distributions can be found at {\scriptsize {\tt http://www.rpmfind.net/linux/rpm2html/search.php?query=dev86\&submit=Search}} \\
1904 LibVNCServer & The unmodified guest's VGA display, keyboard, and mouse are virtualized using the vncserver library provided by this package. You can get the sources of libvncserver from {\small {\tt http://sourceforge.net/projects/libvncserver}}. Build and install the sources on the build system to get the libvncserver library. The 0.8pre version of libvncserver is currently working well with Xen.\\
1906 SDL-devel, SDL & Simple DirectMedia Layer (SDL) is another way of virtualizing the unmodified guest console. It provides an X window for the guest console.
1908 If the SDL and SDL-devel packages are not installed by default on the build system, they can be obtained from {\scriptsize {\tt http://www.rpmfind.net/linux/rpm2html/search.php?query=SDL\&submit=Search}}
1909 , {\scriptsize {\tt http://www.rpmfind.net/linux/rpm2html/search.php?query=SDL-devel\&submit=Search}} \\
1911 \end{tabular}
1913 \section{Configuration file for unmodified VMX guests}
1915 The Xen installation includes a sample configuration file, {\small {\tt /etc/xen/xmexample.vmx}}. There are comments describing all the options. In addition to the common options that are the same as those for paravirtualized guest configurations, VMX guest configurations have the following settings:
1917 \begin{tabular}{lp{11.0cm}}
1919 {\bfseries Parameter} & {\bfseries Description} \\
1921 kernel & The VMX firmware loader, {\small {\tt /usr/lib/xen/boot/vmxloader}}\\
1923 builder & The domain build function. The VMX domain uses the vmx builder.\\
1925 acpi & Enable VMX guest ACPI, default=0 (disabled)\\
1927 apic & Enable VMX guest APIC, default=0 (disabled)\\
1929 vif & Optionally defines MAC address and/or bridge for the network interfaces. Random MACs are assigned if not given. {\small {\tt type=ioemu}} means ioemu is used to virtualize the VMX NIC. If no type is specified, vbd is used, as with paravirtualized guests.\\
1931 disk & Defines the disk devices you want the domain to have access to, and what you want them accessible as. If using a physical device as the VMX guest's disk, each disk entry is of the form
1933 {\small {\tt phy:UNAME,ioemu:DEV,MODE}}
1935 where UNAME is the device, DEV is the device name the domain will see, and MODE is r for read-only, w for read-write. ioemu means the disk will use ioemu to virtualize the VMX disk. If ioemu is omitted, the disk is a vbd, as with paravirtualized guests.
1937 If using a disk image file, the entry takes the form
1939 {\small {\tt file:FILEPATH,ioemu:DEV,MODE}}
1941 If using more than one disk, there should be a comma between each disk entry. For example:
1943 {\scriptsize {\tt disk = ['file:/var/images/image1.img,ioemu:hda,w', 'file:/var/images/image2.img,ioemu:hdb,w']}}\\
1945 cdrom & Disk image for CD-ROM. The default is {\small {\tt /dev/cdrom}} for Domain0. Inside the VMX domain, the CD-ROM will be available as device {\small {\tt /dev/hdc}}. The entry can also point to an ISO file.\\
1947 boot & Boot from floppy (a), hard disk (c) or CD-ROM (d). For example, to boot from CD-ROM, the entry should be:
1949 boot='d'\\
1951 device\_model & The device emulation tool for VMX guests. This parameter should not be changed.\\
1953 sdl & Enable SDL library for graphics, default = 0 (disabled)\\
1955 vnc & Enable VNC library for graphics, default = 1 (enabled)\\
1957 vncviewer & Enable spawning of the vncviewer (only valid when vnc=1), default = 1 (enabled)
1959 If vnc=1 and vncviewer=0, the user can run vncviewer manually to connect to the VMX guest from a remote machine. For example:
1961 {\small {\tt vncviewer domain0\_IP\_address:VMX\_domain\_id}} \\
1963 ne2000 & Enable ne2000, default = 0 (disabled; use pcnet)\\
1965 serial & Enable redirection of VMX serial output to pty device\\
1967 localtime & Set the real time clock to local time [default=0, that is, set to UTC].\\
1969 enable-audio & Enable audio support. This is under development.\\
1971 full-screen & Start in full screen. This is under development.\\
1973 nographic & Another way to redirect serial output. If enabled, neither 'sdl' nor 'vnc' will work. Not recommended.\\
1975 \end{tabular}
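Putting several of these parameters together, a minimal illustrative
VMX configuration might look like the following (all paths, sizes and
names are examples only; start from the shipped sample file, and leave
{\small {\tt device\_model}} as set there):
\begin{quote}
\begin{verbatim}
kernel  = '/usr/lib/xen/boot/vmxloader'
builder = 'vmx'
memory  = 256
name    = 'ExampleVMXDomain'
vif     = [ 'type=ioemu, bridge=xenbr0' ]
disk    = [ 'file:/var/images/guest.img,ioemu:hda,w' ]
cdrom   = '/dev/cdrom'
boot    = 'c'
sdl     = 0
vnc     = 1
serial  = 'pty'
\end{verbatim}
\end{quote}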
1978 \section{Creating virtual disks from scratch}
1979 \subsection{Using physical disks}
1980 If you are using a physical disk or physical disk partition, you need to install a Linux OS on the disk first. Then the boot loader should be installed in the correct place: for example, {\small {\tt /dev/sda}} for booting from the whole disk, or {\small {\tt /dev/sda1}} for booting from partition 1.
1982 \subsection{Using disk image files}
1983 You need to create a large empty disk image file first; then, you need to install a Linux OS onto it. There are two methods you can choose. One is directly installing it using a VMX guest while booting from the OS installation CD-ROM. The other is copying an installed OS into it. The boot loader will also need to be installed.
1985 \subsubsection*{To create the image file:}
1986 The image size should be big enough to accommodate the entire OS. This example assumes the size is 1G (which is probably too small for most OSes).
1988 {\small {\tt \# dd if=/dev/zero of=hd.img bs=1M count=1 seek=1023}}
1990 \subsubsection*{To directly install Linux OS into an image file using a VMX guest:}
1992 Install Xen and create a VMX guest that uses the empty image file and boots from the CD-ROM. Installation then proceeds just like a normal Linux OS installation. The VMX configuration file should contain these two entries before the guest is created:
1994 {\small {\tt cdrom='/dev/cdrom'
1995 boot='d'}}
1997 If this method does not succeed, you can choose the following method of copying an installed Linux OS into an image file.
1999 \subsubsection*{To copy an installed OS into an image file:}
2000 Directly installing is the easier way to make partitions and install an OS in a disk image file, but if you want to place a specific, already-installed OS in your disk image, you will most likely want to use this method.
2002 \begin{enumerate}
2003 \item {\bfseries Install a normal Linux OS on the host machine}\\
2004 You can choose any way to install Linux, such as using yum to install Red Hat Linux or YAST to install Novell SuSE Linux. The rest of this example assumes the Linux OS is installed in {\small {\tt /var/guestos/}}.
2006 \item {\bfseries Make the partition table}\\
2007 The image file will be treated as a hard disk, so you should make the partition table in the image file. For example:
2009 {\scriptsize {\tt \# losetup /dev/loop0 hd.img\\
2010 \# fdisk -b 512 -C 4096 -H 16 -S 32 /dev/loop0\\
2011 press 'n' to add new partition\\
2012 press 'p' to choose primary partition\\
2013 press '1' to set partition number\\
2014 press "Enter" keys to choose default value of "First Cylinder" parameter.\\
2015 press "Enter" keys to choose default value of "Last Cylinder" parameter.\\
2016 press 'w' to write partition table and exit\\
2017 \# losetup -d /dev/loop0}}
2019 \item {\bfseries Make the file system and install grub}\\
2020 {\scriptsize {\tt \# ln -s /dev/loop0 /dev/loop\\
2021 \# losetup /dev/loop0 hd.img\\
2022 \# losetup -o 16384 /dev/loop1 hd.img\\
2023 \# mkfs.ext3 /dev/loop1\\
2024 \# mount /dev/loop1 /mnt\\
2025 \# mkdir -p /mnt/boot/grub\\
2026 \# cp /boot/grub/stage* /boot/grub/e2fs\_stage1\_5 /mnt/boot/grub\\
2027 \# umount /mnt\\
2028 \# grub\\
2029 grub> device (hd0) /dev/loop\\
2030 grub> root (hd0,0)\\
2031 grub> setup (hd0)\\
2032 grub> quit\\
2033 \# rm /dev/loop\\
2034 \# losetup -d /dev/loop0\\
2035 \# losetup -d /dev/loop1}}
2037 The {\small {\tt losetup}} option {\small {\tt -o 16384}} skips the partition table in the image file: the offset is the first partition's starting sector (32, with the geometry used above) multiplied by 512 bytes per sector. We need {\small {\tt /dev/loop}} because grub is expecting a disk device \emph{name}, where \emph{name} represents the entire disk and \emph{name1} represents the first partition.
2039 \item {\bfseries Copy the OS files to the image}\\
2040 If you have Xen installed, you can easily use {\small {\tt lomount}} instead of {\small {\tt losetup}} and {\small {\tt mount}} when copying files to some partitions. {\small {\tt lomount}} just needs the partition information.
2042 {\scriptsize {\tt \# lomount -t ext3 -diskimage hd.img -partition 1 /mnt/guest\\
2043 \# cp -ax /var/guestos/\{root,dev,var,etc,usr,bin,sbin,lib\} /mnt/guest\\
2044 \# mkdir /mnt/guest/\{proc,sys,home,tmp\}}}
2046 \item {\bfseries Edit the {\small {\tt /etc/fstab}} of the guest image}\\
2047 The fstab should look like this:
2049 {\scriptsize {\tt \# vim /mnt/guest/etc/fstab\\
2050 /dev/hda1 / ext3 defaults 1 1\\
2051 none /dev/pts devpts gid=5,mode=620 0 0\\
2052 none /dev/shm tmpfs defaults 0 0\\
2053 none /proc proc defaults 0 0\\
2054 none /sys sysfs defaults 0 0}}
2056 \item {\bfseries umount the image file}\\
2057 {\small {\tt \# umount /mnt/guest}}
2058 \end{enumerate}
2060 Now the guest OS image {\small {\tt hd.img}} is ready. You can also find quickstart images at {\small {\tt http://free.oszoo.org}}, but make sure to install the boot loader.
2062 \subsection{Install Windows into an Image File using a VMX guest}
2063 In order to install a Windows OS, you should keep {\small {\tt acpi=0}} in your VMX configuration file.
2065 \section{VMX Guests}
2066 \subsection{Editing the Xen VMX config file}
2067 Make a copy of the example VMX configuration file {\small {\tt /etc/xen/xmexample.vmx}} and edit the line that reads
2069 {\small {\tt disk = [ 'file:/var/images/\emph{guest.img},ioemu:hda,w' ]}}
2071 replacing \emph{guest.img} with the name of the guest OS image file you just made.
2073 \subsection{Creating VMX guests}
2074 Simply follow the usual method of creating the guest, providing the filename of your VMX configuration file:\\
2076 {\small {\tt \# xend start\\
2077 \# xm create /etc/xen/vmxguest.vmx}}
2079 In the default configuration, VNC is on and SDL is off. Therefore VNC windows will open when VMX guests are created. If you want to use SDL to create VMX guests, set {\small {\tt sdl=1}} in your VMX configuration file. You can also turn off VNC by setting {\small {\tt vnc=0}}.
2081 \subsection{Destroy VMX guests}
2082 VMX guests can be destroyed in the same way as paravirtualized guests. We recommend that you type the command
2084 {\small {\tt poweroff}}
2086 in the VMX guest's console first to prevent data loss. Then execute the command
2088 {\small {\tt xm destroy \emph{vmx\_guest\_id} }}
2090 at the Domain0 console.
2092 \subsection{VMX window (X or VNC) Hot Key}
2093 If you are running in the X environment after creating a VMX guest, an X window is created. There are several hot keys for control of the VMX guest that can be used in the window.
2095 {\bfseries Ctrl+Alt+2} switches from guest VGA window to the control window. Typing {\small {\tt help }} shows the control commands help. For example, 'q' is the command to destroy the VMX guest.\\
2096 {\bfseries Ctrl+Alt+1} switches back to VMX guest's VGA.\\
2097 {\bfseries Ctrl+Alt+3} switches to serial port output. It captures serial output from the VMX guest. It works only if the VMX guest was configured to use the serial port. \\
2099 \subsection{Save/Restore and Migration}
2100 VMX guests currently cannot be saved, restored, or migrated. These features are under active development.
2102 %% Chapter Glossary of Terms moved to glossary.tex
2103 \chapter{Glossary of Terms}
2105 \begin{description}
2107 \item[BVT] The BVT scheduler is used to give proportional fair shares
2108 of the CPU to domains.
2110 \item[Domain] A domain is the execution context that contains a
2111 running {\bf virtual machine}. The relationship between virtual
2112 machines and domains on Xen is similar to that between programs and
2113 processes in an operating system: a virtual machine is a persistent
2114 entity that resides on disk (somewhat like a program). When it is
2115 loaded for execution, it runs in a domain. Each domain has a {\bf
2116 domain ID}.
2118 \item[Domain 0] The first domain to be started on a Xen machine.
2119 Domain 0 is responsible for managing the system.
2121 \item[Domain ID] A unique identifier for a {\bf domain}, analogous to
2122 a process ID in an operating system.
2124 \item[Full virtualization] An approach to virtualization which
2125 requires no modifications to the hosted operating system, providing
2126 the illusion of a complete system of real hardware devices.
2128 \item[Hypervisor] An alternative term for {\bf VMM}, used because it
2129 means `beyond supervisor', since it is responsible for managing
2130 multiple `supervisor' kernels.
2132 \item[Live migration] A technique for moving a running virtual machine
2133 to another physical host, without stopping it or the services
2134 running on it.
2136 \item[Paravirtualization] An approach to virtualization which requires
2137 modifications to the operating system in order to run in a virtual
2138 machine. Xen uses paravirtualization but preserves binary
2139 compatibility for user space applications.
2141 \item[Shadow pagetables] A technique for hiding the layout of machine
2142 memory from a virtual machine's operating system. Used in some {\bf
2143 VMMs} to provide the illusion of contiguous physical memory, in
2144 Xen this is used during {\bf live migration}.
2146 \item[Virtual Block Device] Persistent storage available to a virtual
2147 machine, providing the abstraction of an actual block storage device.
2148 {\bf VBD}s may be actual block devices, filesystem images, or
2149 remote/network storage.
2151 \item[Virtual Machine] The environment in which a hosted operating
2152 system runs, providing the abstraction of a dedicated machine. A
2153 virtual machine may be identical to the underlying hardware (as in
2154 {\bf full virtualization}), or it may differ (as in
2155 {\bf paravirtualization}).
2157 \item[VMM] Virtual Machine Monitor - the software that allows multiple
2158 virtual machines to be multiplexed on a single physical machine.
2160 \item[Xen] Xen is a paravirtualizing virtual machine monitor,
2161 developed primarily by the Systems Research Group at the University
2162 of Cambridge Computer Laboratory.
2164 \item[XenLinux] A name for the port of the Linux kernel that
2165 runs on Xen.
2167 \end{description}
2170 \end{document}
2173 %% Other stuff without a home
2175 %% Instructions Re Python API
2177 %% Other Control Tasks using Python
2178 %% ================================
2180 %% A Python module 'Xc' is installed as part of the tools-install
2181 %% process. This can be imported, and an 'xc object' instantiated, to
2182 %% provide access to privileged command operations:
2184 %% # import Xc
2185 %% # xc = Xc.new()
2186 %% # dir(xc)
2187 %% # help(xc.domain_create)
2189 %% In this way you can see that the class 'xc' contains useful
2190 %% documentation for you to consult.
2192 %% A further package of useful routines (xenctl) is also installed:
2194 %% # import xenctl.utils
2195 %% # help(xenctl.utils)
2197 %% You can use these modules to write your own custom scripts or you
2198 %% can customise the scripts supplied in the Xen distribution.
2202 % Explain about AGP GART
2205 %% If you're not intending to configure the new domain with an IP
2206 %% address on your LAN, then you'll probably want to use NAT. The
2207 %% 'xen_nat_enable' installs a few useful iptables rules into domain0
2208 %% to enable NAT. [NB: We plan to support RSIP in future]
2212 %% Installing the file systems from the CD
2213 %% =======================================
2215 %% If you haven't got an existing Linux installation onto which you
2216 %% can just drop down the Xen and Xenlinux images, then the file
2217 %% systems on the CD provide a quick way of doing an install. However,
2218 %% you would be better off in the long run doing a proper install of
2219 %% your preferred distro and installing Xen onto that, rather than
2220 %% just doing the hack described below:
2222 %% Choose one or two partitions, depending on whether you want a
2223 %% separate /usr or not. Make file systems on it/them e.g.:
2224 %% mkfs -t ext3 /dev/hda3
2225 %% [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
2226 %% version of mkfs]
2228 %% Next, mount the file system(s) e.g.:
2229 %% mkdir /mnt/root && mount /dev/hda3 /mnt/root
2230 %% [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
2232 %% To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
2233 %% cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
2235 %% You'll need to edit /mnt/root/etc/fstab to reflect your file system
2236 %% configuration. Changing the password file (etc/shadow) is probably a
2237 %% good idea too.
2239 %% To install the usr file system, copy the file system from CD on
2240 %% /usr, though leaving out the "XenDemoCD" and "boot" directories:
2241 %% cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos
2242 %% local sbin tmp doc include lib man share /mnt/usr
2244 %% If you intend to boot off these file systems (i.e. use them for
2245 %% domain 0), then you probably want to copy the /usr/boot
2246 %% directory on the cd over the top of the current symlink to /boot
2247 %% on your root filesystem (after deleting the current symlink)
2248 %% i.e.:
2249 %% cd /mnt/root ; rm boot ; cp -a /usr/boot .