ia64/xen-unstable

changeset 2901:2f1e5bdb3088

bitkeeper revision 1.1159.1.382 (418ae2ebkptcd8gQwqwKwb3Kka2vyQ)

user manual additions
author iap10@labyrinth.cl.cam.ac.uk
date Fri Nov 05 02:18:19 2004 +0000 (2004-11-05)
parents 796eb5765fcc
children 7b0ac219fe22
files docs/src/user.tex
line diff
     1.1 --- a/docs/src/user.tex	Thu Nov 04 23:34:02 2004 +0000
     1.2 +++ b/docs/src/user.tex	Fri Nov 05 02:18:19 2004 +0000
     1.3 @@ -89,9 +89,11 @@ applications and libraries {\em do not} 
     1.4  
     1.5  Xen support is available for increasingly many operating systems:
     1.6  right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
     1.7 -We expect that Xen support will ultimately be integrated into the
     1.8 -releases of Linux, NetBSD and FreeBSD.  Other OS ports,
     1.9 -including Plan 9, are in progress.
    1.10 +A FreeBSD port is undergoing testing and will be incorporated into the
    1.11 +release soon. Other OS ports, including Plan 9, are in progress.  We
     1.12 +hope that the arch-xen patches will be incorporated into the
    1.13 +mainstream releases of these operating systems in due course (as has
    1.14 +already happened for NetBSD).
    1.15  
    1.16  Possible usage scenarios for Xen include:
    1.17  \begin{description}
    1.18 @@ -136,19 +138,20 @@ interface, either from a command-line to
    1.19  
    1.20  \section{Hardware Support}
    1.21  
    1.22 -Xen currently runs only on the x86 architecture,
    1.23 -requiring a `P6' or newer processor (e.g. Pentium Pro, Celeron,
    1.24 -Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron).
    1.25 -Multiprocessor machines are supported, and we also have basic support
    1.26 -for HyperThreading (SMT), although this remains a topic for ongoing
    1.27 -research. A port specifically for x86/64 is in
    1.28 -progress, although Xen already runs on such systems in 32-bit legacy
    1.29 -mode. In addition a port to the IA64 architecture is approaching 
    1.30 -completion. 
    1.31 +Xen currently runs only on the x86 architecture, requiring a `P6' or
    1.32 +newer processor (e.g. Pentium Pro, Celeron, Pentium II, Pentium III,
    1.33 +Pentium IV, Xeon, AMD Athlon, AMD Duron).  Multiprocessor machines are
    1.34 +supported, and we also have basic support for HyperThreading (SMT),
    1.35 +although this remains a topic for ongoing research. A port
    1.36 +specifically for x86/64 is in progress, although Xen already runs on
     1.37 +such systems in 32-bit legacy mode. In addition, a port to the IA64
    1.38 +architecture is approaching completion. We hope to add other
    1.39 +architectures such as PPC and ARM in due course.
    1.40 +
    1.41  
    1.42  Xen can currently use up to 4GB of memory.  It is possible for x86
    1.43  machines to address up to 64GB of physical memory but there are no
    1.44 -current plans to support these systems.  The x86/64 port is the
     1.45 +current plans to support these systems: the x86/64 port is the
    1.46  planned route to supporting larger memory sizes.
    1.47  
    1.48  Xen offloads most of the hardware support issues to the guest OS
    1.49 @@ -187,7 +190,7 @@ 2003\footnote{\tt
    1.50  http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
    1.51  public release (1.0) was made that October.  Since then, Xen has
    1.52  significantly matured and is now used in production scenarios on
    1.53 -multiple sites.
    1.54 +many sites.
    1.55  
    1.56  Xen 2.0 features greatly enhanced hardware support, configuration
    1.57  flexibility, usability and a larger complement of supported operating
    1.58 @@ -206,18 +209,13 @@ system distribution.
    1.59  \section{Prerequisites}
    1.60  \label{sec:prerequisites}
    1.61  
    1.62 -The following is a full list of prerequisites. Items marked `$*$' are
    1.63 -only required if you wish to build from source; items marked `$\dag$'
    1.64 -are only required if you wish to run more than one virtual machine.
    1.65 -
    1.66 +The following is a full list of prerequisites.  Items marked `$\dag$'
    1.67 +are required by the \xend control tools, and hence required if you
    1.68 +want to run more than one virtual machine; items marked `$*$' are only
    1.69 +required if you wish to build from source.
    1.70  \begin{itemize}
    1.71  \item A working Linux distribution using the GRUB bootloader and
    1.72  running on a P6-class (or newer) CPU.
    1.73 -\item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
    1.74 -\item [$*$] Development installation of libcurl (e.g., libcurl-devel) 
    1.75 -\item [$*$] Development installation of zlib (e.g., zlib-dev).
    1.76 -\item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
    1.77 -\item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
    1.78  \item [$\dag$] The \path{iproute2} package. 
    1.79  \item [$\dag$] The Linux bridge-utils\footnote{Available from 
    1.80  {\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl})
    1.81 @@ -227,6 +225,11 @@ http://www.twistedmatrix.com}}. There ma
    1.82  available for your distribution; alternatively it can be installed by
    1.83  running `{\sl make install-twisted}' in the root of the Xen source
    1.84  tree.
    1.85 +\item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
    1.86 +\item [$*$] Development installation of libcurl (e.g., libcurl-devel) 
    1.87 +\item [$*$] Development installation of zlib (e.g., zlib-dev).
    1.88 +\item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
    1.89 +\item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
    1.90  \end{itemize}
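A quick way to sanity-check the main prerequisites (a sketch only; package names and
versions vary between distributions) is:

\begin{quote}
\begin{verbatim}
# python -V       # should report version 2.2 or later
# brctl show      # confirms bridge-utils is installed
# gcc --version   # v3.2.x or v3.3.x if building from source
\end{verbatim}
\end{quote}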
    1.91  
    1.92  Once you have satisfied the relevant prerequisites, you can 
    1.93 @@ -362,9 +365,10 @@ KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.
    1.94  \end{verbatim} 
    1.95  \end{quote} 
    1.96  
    1.97 -You can edit this line to include any set of operating system 
    1.98 -kernels which have configurations in the top-level 
    1.99 -\path{buildconfigs/} directory. 
   1.100 +You can edit this line to include any set of operating system kernels
   1.101 +which have configurations in the top-level \path{buildconfigs/}
   1.102 +directory, for example {\tt mk.linux-2.4-xenU} to build a Linux 2.4
   1.103 +kernel containing only virtual device drivers.
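For example (an illustrative line only; use whichever configurations are actually
present in \path{buildconfigs/} in your tree), building both 2.6 kernels plus the
2.4 unprivileged kernel would look like:

\begin{quote}
\begin{verbatim}
KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU mk.linux-2.4-xenU
\end{verbatim}
\end{quote}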
   1.104  
   1.105  %% Inspect the Makefile if you want to see what goes on during a build.
   1.106  %% Building Xen and the tools is straightforward, but XenLinux is more
   1.107 @@ -405,6 +409,8 @@ architecture being built for is \path{xe
   1.108  \begin{verbatim} 
   1.109  # cd linux-2.6.9-xen0 
   1.110  # make ARCH=xen xconfig 
   1.111 +# cd ..
   1.112 +# make
   1.113  \end{verbatim} 
   1.114  \end{quote} 
   1.115  
   1.116 @@ -412,7 +418,7 @@ You can also copy an existing Linux conf
   1.117  into \path{linux-2.6.9-xen0} and execute:  
   1.118  \begin{quote}
   1.119  \begin{verbatim} 
   1.120 -# make oldconfig 
   1.121 +# make ARCH=xen oldconfig 
   1.122  \end{verbatim} 
   1.123  \end{quote} 
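For instance, a common starting point (a sketch assuming the running kernel's
configuration is available in \path{/boot}, as on most distributions) is:

\begin{quote}
\begin{verbatim}
# cp /boot/config-`uname -r` linux-2.6.9-xen0/.config
# cd linux-2.6.9-xen0
# make ARCH=xen oldconfig
\end{verbatim}
\end{quote}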
   1.124  
   1.125 @@ -564,10 +570,11 @@ by restoring the directory to its origin
   1.126  The reason for this is that the current TLS implementation uses
   1.127  segmentation in a way that is not permissible under Xen.  If TLS is
   1.128  not disabled, an emulation mode is used within Xen which reduces
   1.129 -performance substantially and is not guaranteed to work perfectly.
   1.130 -
   1.131 -We hope that this issue can be resolved by working 
   1.132 -with Linux distribution vendors. 
   1.133 +performance substantially.
   1.134 +
   1.135 +We hope that this issue can be resolved by working with Linux
   1.136 +distribution vendors to implement a minor backward-compatible change
   1.137 +to the TLS library.
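For reference (the exact path may differ between distributions), TLS is usually
disabled by moving the library directory aside, and re-enabled by moving it back:

\begin{quote}
\begin{verbatim}
# mv /lib/tls /lib/tls.disabled      # disable TLS before booting on Xen
# mv /lib/tls.disabled /lib/tls      # restore it for a native kernel
\end{verbatim}
\end{quote}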
   1.138  
   1.139  \section{Booting Xen} 
   1.140  
   1.141 @@ -677,7 +684,7 @@ machine ID~1 you should type:
   1.142  
   1.143  \begin{quote}
   1.144  \begin{verbatim}
   1.145 -# xm create -c -f myvmconfig vmid=1
   1.146 +# xm create -c myvmconfig vmid=1
   1.147  \end{verbatim}
   1.148  \end{quote}
   1.149  
   1.150 @@ -708,7 +715,6 @@ section of the project's SourceForge sit
   1.151  kernel = "/boot/vmlinuz-2.6.9-xenU"
   1.152  memory = 64
   1.153  name = "ttylinux"
   1.154 -cpu = -1 # leave to Xen to pick
   1.155  nics = 1
   1.156  ip = "1.2.3.4"
   1.157  disk = ['file:/path/to/ttylinux/rootfs,sda1,w']
   1.158 @@ -716,7 +722,7 @@ root = "/dev/sda1 ro"
   1.159  \end{verbatim}
   1.160  \item Now start the domain and connect to its console:
   1.161  \begin{verbatim}
   1.162 -xm create -f configfile -c
   1.163 +xm create configfile -c
   1.164  \end{verbatim}
   1.165  \item Login as root, password root.
   1.166  \end{enumerate}
   1.167 @@ -842,6 +848,10 @@ or:
   1.168  \begin{verbatim}
   1.169  # xm console 5
   1.170  \end{verbatim}
   1.171 +or:
   1.172 +\begin{verbatim}
   1.173 +# xencons localhost 9605
   1.174 +\end{verbatim}
   1.175  
   1.176  \section{Domain Save and Restore}
   1.177  
   1.178 @@ -879,10 +889,12 @@ capacity) to accommodate the domain afte
   1.179  currently require both source and destination machines to be on the 
   1.180  same L2 subnet. 
   1.181  
   1.182 -Currently, there is no support for providing access to disk
   1.183 -filesystems when a domain is migrated.  Administrators should choose
   1.184 -an appropriate storage solution (i.e. SAN, NAS, etc.) to ensure that
   1.185 -domain filesystems are also available on their destination node.
   1.186 +Currently, there is no support for providing automatic remote access
   1.187 +to filesystems stored on local disk when a domain is migrated.
   1.188 +Administrators should choose an appropriate storage solution
    1.189 +(e.g. SAN, NAS, etc.) to ensure that domain filesystems are also
   1.190 +available on their destination node. GNBD is a good method for
   1.191 +exporting a volume from one machine to another, as is iSCSI.
   1.192  
   1.193  A domain may be migrated using the \path{xm migrate} command.  To
   1.194  live migrate a domain to another machine, we would use
   1.195 @@ -892,14 +904,12 @@ the command:
   1.196  # xm migrate --live mydomain destination.ournetwork.com
   1.197  \end{verbatim}
   1.198  
   1.199 -There will be a delay whilst the domain is moved to the destination
   1.200 -machine.  During this time, the Xen migration daemon copies as much
   1.201 -information as possible about the domain (configuration, memory
   1.202 -contents, etc.) to the destination host.  The domain is
   1.203 -then stopped for a fraction of a second in order to update the state
   1.204 -on the destination machine with any changes in memory contents, etc.
   1.205 -The domain will then continue on the new machine having been halted
   1.206 -for a fraction of a second (usually between about 60 -- 300ms).
    1.207 +Without the {\tt --live} flag, \xend simply stops the domain,
    1.208 +copies its memory image over to the new node, and restarts it there. Since
    1.209 +domains can have large memory allocations, this can be quite time consuming,
    1.210 +even on a Gigabit network. With the {\tt --live} flag, \xend attempts
   1.211 +to keep the domain running while the migration is in progress,
   1.212 +resulting in typical 'downtimes' of just 60 -- 300ms.
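The non-live variant simply omits the flag:

\begin{quote}
\begin{verbatim}
# xm migrate mydomain destination.ournetwork.com
\end{verbatim}
\end{quote}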
   1.213  
   1.214  For now it will be necessary to reconnect to the domain's console on
   1.215  the new machine using the \path{xm console} command.  If a migrated
   1.216 @@ -974,27 +984,38 @@ configuration file. For example a line l
   1.217  \verb_disk = ['phy:hda3,sda1,w']_
   1.218  \end{quote}
   1.219  specifies that the partition \path{/dev/hda3} in domain 0 
   1.220 -should be exported to the new domain as \path{/dev/sda1}; 
   1.221 -one could equally well export it as \path{/dev/hda3} or 
   1.222 +should be exported read-write to the new domain as \path{/dev/sda1}; 
   1.223 +one could equally well export it as \path{/dev/hda} or 
   1.224  \path{/dev/sdb5} should one wish. 
   1.225  
   1.226  In addition to local disks and partitions, it is possible to export
   1.227  any device that Linux considers to be ``a disk'' in the same manner.
   1.228  For example, if you have iSCSI disks or GNBD volumes imported into
   1.229  domain 0 you can export these to other domains using the \path{phy:}
   1.230 -disk syntax.
   1.231 +disk syntax. E.g.:
   1.232 +\begin{quote}
   1.233 +\verb_disk = ['phy:vg/lvm1,sda2,w']_
   1.234 +\end{quote}
   1.235 +
   1.236  
   1.237  
   1.238  \begin{center}
   1.239  \framebox{\bf Warning: Block device sharing}
   1.240  \end{center}
   1.241  \begin{quote}
   1.242 -Block devices should only be shared between domains in a read-only
   1.243 -fashion otherwise the Linux kernels will obviously get very confused
   1.244 -as the file system structure may change underneath them (having the
   1.245 -same partition mounted rw twice is a sure fire way to cause
   1.246 -irreparable damage)!  If you want read-write sharing, export the
   1.247 -directory to other domains via NFS from domain0. 
   1.248 +Block devices should typically only be shared between domains in a
    1.249 +read-only fashion; otherwise the Linux kernel's file systems will get
    1.250 +very confused as the file system structure may change underneath them
    1.251 +(having the same ext3 partition mounted rw twice is a sure-fire way to
   1.252 +cause irreparable damage)!  \xend will attempt to prevent you from
   1.253 +doing this by checking that the device is not mounted read-write in
   1.254 +domain 0, and hasn't already been exported read-write to another
   1.255 +domain.
   1.256 +
   1.257 +If you want read-write sharing, export the directory to other domains
   1.258 +via NFS from domain0 (or use a cluster file system such as GFS or
   1.259 +ocfs2).
   1.260 +
   1.261  \end{quote}
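For illustration (a hypothetical configuration line; the device names are examples
and the final field selects the access mode), a partition intended to be shared by
several domains can be exported read-only by using `r' in place of `w':
\begin{quote}
\verb_disk = ['phy:hda4,sdb1,r']_
\end{quote}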
   1.262  
   1.263  
   1.264 @@ -1132,11 +1153,11 @@ rather confused. It may be possible to a
   1.265  process by using \path{dmsetup wait} to spot the volume getting full
   1.266  and then issue an \path{lvextend}.
   1.267  
   1.268 -%% In principle, it is possible to continue writing to the volume
   1.269 -%% that has been cloned (the changes will not be visible to the
   1.270 -%% clones), but we wouldn't recommend this: have the cloned volume
   1.271 -%% as a 'pristine' file system install that isn't mounted directly
   1.272 -%% by any of the virtual machines.
   1.273 +In principle, it is possible to continue writing to the volume
   1.274 +that has been cloned (the changes will not be visible to the
   1.275 +clones), but we wouldn't recommend this: have the cloned volume
   1.276 +as a 'pristine' file system install that isn't mounted directly
   1.277 +by any of the virtual machines.
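As a rough illustration (the volume group and volume names here are invented), a
copy-on-write snapshot volume that is approaching capacity could be grown with:

\begin{quote}
\begin{verbatim}
# lvextend -L +256M /dev/vg/vm1-snapshot
\end{verbatim}
\end{quote}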
   1.278  
   1.279  
   1.280  \section{Using NFS Root}
   1.281 @@ -1150,7 +1171,7 @@ network by adding a line to \path{/etc/e
   1.282  
   1.283  \begin{quote}
   1.284  \begin{verbatim}
   1.285 -/export/vm1root      w.x.y.z/m (rw,sync,no_root_squash)
    1.286 +/export/vm1root      1.2.3.4/24(rw,sync,no_root_squash)
   1.287  \end{verbatim}
   1.288  \end{quote}
   1.289  
   1.290 @@ -1162,7 +1183,7 @@ the domain's configuration file:
   1.291  \begin{small}
   1.292  \begin{verbatim}
   1.293  root       = '/dev/nfs'
   1.294 -nfs_server = 'a.b.c.d'       # substitute IP address of server 
   1.295 +nfs_server = '2.3.4.5'       # substitute IP address of server 
   1.296  nfs_root   = '/path/to/root' # path to root FS on the server
   1.297  \end{verbatim}
   1.298  \end{small}
   1.299 @@ -1215,6 +1236,10 @@ Once \xend is running, more sophisticate
   1.300  using the xm tool (see Section~\ref{s:xm}) and the experimental
   1.301  Xensv web interface (see Section~\ref{s:xensv}).
   1.302  
   1.303 +As \xend runs, events will be logged to {\tt /var/log/xend.log} and
   1.304 +{\tt /var/log/xfrd.log}, and these may be useful for troubleshooting
   1.305 +problems.
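For example, to watch for errors while reproducing a problem:

\begin{quote}
\begin{verbatim}
# tail -f /var/log/xend.log
\end{verbatim}
\end{quote}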
   1.306 +
   1.307  \section{Xm (command line interface)}
   1.308  \label{s:xm}
   1.309  
   1.310 @@ -1765,7 +1790,7 @@ directory of the Xen source distribution
   1.311  
   1.312  The official Xen web site is found at:
   1.313  \begin{quote}
   1.314 -{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/}
   1.315 +{\tt http://www.cl.cam.ac.uk/netos/xen/}
   1.316  \end{quote}
   1.317  
   1.318  This contains links to the latest versions of all on-line