ia64/xen-unstable

changeset 8230:bc7567741a4c

Incorporating Alan Oehler's changes, some of mine.
author kmself@ix.netcom.com
date Fri Dec 02 19:59:53 2005 -0700 (2005-12-02)
parents 1deae55b1f5c
children 8098cc1daac4
files docs/src/user/booting_xen.tex docs/src/user/cpu_management.tex docs/src/user/domain_filesystem.tex docs/src/user/glossary.tex docs/src/user/installation.tex docs/src/user/introduction.tex docs/src/user/memory_management.tex
line diff
     1.1 --- a/docs/src/user/booting_xen.tex	Fri Dec 02 17:13:55 2005 -0700
     1.2 +++ b/docs/src/user/booting_xen.tex	Fri Dec 02 19:59:53 2005 -0700
     1.3 @@ -1,3 +1,170 @@
     1.4  \chapter{Booting Xen}
     1.5  
     1.6 -Placeholder.
     1.7 +Once Xen is installed and configured as described in the preceding chapter, it
     1.8 +should be possible to restart the system and use Xen.
     1.9 +
     1.10 +Booting the system into Xen will bring you up into the privileged
     1.11 +management domain, Domain0. At that point you are ready to create guest
     1.12 +domains and ``boot'' them using the \path{xm create} command.
    1.13 +
    1.14 +\section{Booting Domain0}
    1.15 +
     1.16 +After installation and configuration are complete, reboot the system and
     1.17 +choose the new Xen option when the GRUB screen appears.
    1.18 +
    1.19 +What follows should look much like a conventional Linux boot.  The
    1.20 +first portion of the output comes from Xen itself, supplying low level
    1.21 +information about itself and the underlying hardware.  The last
    1.22 +portion of the output comes from XenLinux.
    1.23 +
    1.24 +You may see some errors during the XenLinux boot.  These are not
    1.25 +necessarily anything to worry about --- they may result from kernel
    1.26 +configuration differences between your XenLinux kernel and the one you
    1.27 +usually use.
    1.28 +
    1.29 +%% KMSelf Wed Nov 30 18:09:37 PST 2005:  We should specify what these are.
    1.30 +
    1.31 +When the boot completes, you should be able to log into your system as
    1.32 +usual.  If you are unable to log in, you should still be able to
    1.33 +reboot with your normal Linux kernel by selecting it at the GRUB prompt.
    1.34 +
    1.35 +The first step in creating a new domain is to prepare a root
    1.36 +filesystem for it to boot.  Typically, this might be stored in a normal
    1.37 +partition, an LVM or other volume manager partition, a disk file or on
     1.38 +an NFS server.  A simple way to do this is to boot from your
    1.39 +standard OS install CD and install the distribution into another
    1.40 +partition on your hard drive.
    1.41 +
    1.42 +To start the \xend\ control daemon, type
    1.43 +\begin{quote}
    1.44 +  \verb!# xend start!
    1.45 +\end{quote}
    1.46 +
    1.47 +If you wish the daemon to start automatically, see the instructions in
    1.48 +Section~\ref{s:xend}. Once the daemon is running, you can use the
    1.49 +\path{xm} tool to monitor and maintain the domains running on your
    1.50 +system. This chapter provides only a brief tutorial. We provide full
    1.51 +details of the \path{xm} tool in the next chapter.
    1.52 +
    1.53 +% \section{From the web interface}
    1.54 +%
    1.55 +% Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv}
    1.56 +% for more details) using the command: \\
    1.57 +% \verb_# xensv start_ \\
    1.58 +% This will also start Xend (see Chapter~\ref{cha:xend} for more
    1.59 +% information).
    1.60 +%
    1.61 +% The domain management interface will then be available at {\tt
    1.62 +%   http://your\_machine:8080/}.  This provides a user friendly wizard
    1.63 +% for starting domains and functions for managing running domains.
    1.64 +%
    1.65 +% \section{From the command line}
    1.66 +\section{Booting Guest Domains}
    1.67 +
    1.68 +\subsection{Creating a Domain Configuration File}
    1.69 +
    1.70 +Before you can start an additional domain, you must create a
    1.71 +configuration file. We provide two example files which you can use as
    1.72 +a starting point:
    1.73 +\begin{itemize}
    1.74 +\item \path{/etc/xen/xmexample1} is a simple template configuration
    1.75 +  file for describing a single VM\@.
    1.76 +\item \path{/etc/xen/xmexample2} file is a template description that
    1.77 +  is intended to be reused for multiple virtual machines.  Setting the
    1.78 +  value of the \path{vmid} variable on the \path{xm} command line
    1.79 +  fills in parts of this template.
    1.80 +\end{itemize}
    1.81 +
    1.82 +Copy one of these files and edit it as appropriate.  Typical values
    1.83 +you may wish to edit include:
    1.84 +
    1.85 +\begin{quote}
    1.86 +\begin{description}
    1.87 +\item[kernel] Set this to the path of the kernel you compiled for use
    1.88 +  with Xen (e.g.\ \path{kernel = ``/boot/vmlinuz-2.6-xenU''})
    1.89 +\item[memory] Set this to the size of the domain's memory in megabytes
    1.90 +  (e.g.\ \path{memory = 64})
    1.91 +\item[disk] Set the first entry in this list to calculate the offset
    1.92 +  of the domain's root partition, based on the domain ID\@.  Set the
    1.93 +  second to the location of \path{/usr} if you are sharing it between
    1.94 +  domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
    1.95 +    (base\_partition\_number + vmid),
    1.96 +    'phy:your\_usr\_partition,sda6,r' ]}
    1.97 +\item[dhcp] Uncomment the dhcp variable, so that the domain will
    1.98 +  receive its IP address from a DHCP server (e.g.\ \path{dhcp=``dhcp''})
    1.99 +\end{description}
   1.100 +\end{quote}
   1.101 +
   1.102 +You may also want to edit the {\bf vif} variable in order to choose
   1.103 +the MAC address of the virtual ethernet interface yourself.  For
   1.104 +example:
   1.105 +
   1.106 +\begin{quote}
   1.107 +\verb_vif = ['mac=00:16:3E:F6:BB:B3']_
   1.108 +\end{quote}
   1.109 +If you do not set this variable, \xend\ will automatically generate a
   1.110 +random MAC address from the range 00:16:3E:xx:xx:xx, assigned by IEEE to
   1.111 +XenSource as an OUI (organizationally unique identifier).  XenSource
     1.112 +Inc.\ permits anyone to use addresses randomly allocated
     1.113 +from this range for their Xen domains.
   1.114 +
   1.115 +For a list of IEEE OUI assignments, see \newline
   1.116 +{\tt http://standards.ieee.org/regauth/oui/oui.txt}.
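If you prefer to choose your own address from this range, a random suffix can be generated with standard tools. A minimal sketch, assuming a POSIX shell with \path{od} and \path{/dev/urandom} available:

```shell
# Generate a random MAC address in the XenSource OUI range (00:16:3E:xx:xx:xx).
# Illustrative only; any standard shell with od(1) and /dev/urandom will do.
printf '00:16:3E:%02X:%02X:%02X\n' $(od -An -N3 -tu1 /dev/urandom)
```

The resulting address can then be pasted into the \path{vif} line shown above.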
   1.117 +
   1.118 +
   1.119 +\subsection{Booting the Guest Domain}
   1.120 +
   1.121 +The \path{xm} tool provides a variety of commands for managing
   1.122 +domains.  Use the \path{create} command to start new domains. Assuming
   1.123 +you've created a configuration file \path{myvmconf} based around
   1.124 +\path{/etc/xen/xmexample2}, to start a domain with virtual machine
   1.125 +ID~1 you should type:
   1.126 +
   1.127 +\begin{quote}
   1.128 +\begin{verbatim}
   1.129 +# xm create -c myvmconf vmid=1
   1.130 +\end{verbatim}
   1.131 +\end{quote}
   1.132 +
   1.133 +The \path{-c} switch causes \path{xm} to turn into the domain's
   1.134 +console after creation.  The \path{vmid=1} sets the \path{vmid}
   1.135 +variable used in the \path{myvmconf} file.
   1.136 +
   1.137 +You should see the console boot messages from the new domain appearing
   1.138 +in the terminal in which you typed the command, culminating in a login
   1.139 +prompt.
   1.140 +
   1.141 +\subsection{Example: ttylinux}
   1.142 +
   1.143 +Ttylinux is a very small Linux distribution, designed to require very
   1.144 +few resources.  We will use it as a concrete example of how to start a
   1.145 +Xen domain.  Most users will probably want to install a full-featured
     1.146 +distribution once they have mastered the basics\footnote{The ttylinux
     1.147 +  home page is {\tt
     1.148 +    http://www.minimalinux.org/ttylinux/}}.
   1.149 +
   1.150 +\begin{enumerate}
   1.151 +\item Download and extract the ttylinux disk image from the Files
   1.152 +  section of the project's SourceForge site (see
   1.153 +  \path{http://sf.net/projects/xen/}).
   1.154 +\item Create a configuration file like the following:
   1.155 +  \begin{quote}
   1.156 +\begin{verbatim}
   1.157 +kernel = "/boot/vmlinuz-2.6-xenU"
   1.158 +memory = 64
   1.159 +name = "ttylinux"
   1.160 +nics = 1
   1.161 +ip = "1.2.3.4"
   1.162 +disk = ['file:/path/to/ttylinux/rootfs,sda1,w']
   1.163 +root = "/dev/sda1 ro"
   1.164 +\end{verbatim}    
   1.165 +  \end{quote}
   1.166 +\item Now start the domain and connect to its console:
   1.167 +  \begin{quote}
   1.168 +\begin{verbatim}
   1.169 +xm create configfile -c
   1.170 +\end{verbatim}
   1.171 +  \end{quote}
   1.172 +\item Login as root, password root.
   1.173 +\end{enumerate}
   1.174 +
     2.1 --- a/docs/src/user/cpu_management.tex	Fri Dec 02 17:13:55 2005 -0700
     2.2 +++ b/docs/src/user/cpu_management.tex	Fri Dec 02 19:59:53 2005 -0700
     2.3 @@ -1,3 +1,44 @@
     2.4  \chapter{CPU Management}
     2.5  
     2.6  Placeholder.
     2.7 +%% KMS Something sage about CPU / processor management.
     2.8 +
     2.9 +Xen allows a domain's virtual CPU(s) to be associated with one or more
    2.10 +host CPUs.  This can be used to allocate real resources among one or
    2.11 +more guests, or to make optimal use of processor resources when
    2.12 +utilizing dual-core, hyperthreading, or other advanced CPU technologies.
    2.13 +
    2.14 +Xen enumerates physical CPUs in a `depth first' fashion.  For a system
    2.15 +with both hyperthreading and multiple cores, this would be all the
    2.16 +hyperthreads on a given core, then all the cores on a given socket,
     2.17 +and then all sockets.  For example, on a two-socket, dual-core,
     2.18 +hyperthreaded Xeon the CPU order would be:
    2.19 +
    2.20 +
    2.21 +\begin{center}
     2.22 +\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
     2.23 +\multicolumn{4}{|c|}{socket0}     &  \multicolumn{4}{c|}{socket1} \\ \hline
     2.24 +\multicolumn{2}{|c|}{core0}  &  \multicolumn{2}{c|}{core1}  &
     2.25 +\multicolumn{2}{c|}{core0}  &  \multicolumn{2}{c|}{core1} \\ \hline
     2.26 +ht0 & ht1 & ht0 & ht1 & ht0 & ht1 & ht0 & ht1 \\
     2.27 +\#0 & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7 \\ \hline
     2.28 +\end{tabular}
    2.29 +\end{center}
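The depth-first numbering shown above can be expressed as a simple formula. A sketch for this hypothetical machine (the factors 4 and 2 are the threads-per-socket and threads-per-core counts for this particular Xeon, and would differ on other hardware):

```shell
# Depth-first CPU numbering:
#   cpu = socket*threads_per_socket + core*threads_per_core + ht
# Illustrative values for the 2-socket, dual-core, hyperthreaded example.
socket=1; core=0; ht=1
threads_per_core=2
threads_per_socket=4   # cores per socket (2) * threads per core (2)
echo $(( socket * threads_per_socket + core * threads_per_core + ht ))
# prints 5, matching CPU #5 in the table above
```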
    2.30 +
    2.31 +
    2.32 +Having multiple vcpus belonging to the same domain mapped to the same
    2.33 +physical CPU is very likely to lead to poor performance. It's better to
     2.34 +use \path{xm vcpu-set} to hot-unplug one of the vcpus and ensure the others are
    2.35 +pinned on different CPUs.
    2.36 +
     2.37 +If you are running I/O-intensive tasks, it's typically better to dedicate
    2.38 +either a hyperthread or whole core to running domain 0, and hence pin
    2.39 +other domains so that they can't use CPU 0. If your workload is mostly
    2.40 +compute intensive, you may want to pin vcpus such that all physical CPU
    2.41 +threads are available for guest domains.
    2.42 +
    2.43 +
    2.44 +\section{Setting CPU Pinning}
    2.45 +
    2.46 +FIXME:  To specify a domain's CPU pinning use the XXX command/syntax in
    2.47 +XXX.
     3.1 --- a/docs/src/user/domain_filesystem.tex	Fri Dec 02 17:13:55 2005 -0700
     3.2 +++ b/docs/src/user/domain_filesystem.tex	Fri Dec 02 19:59:53 2005 -0700
     3.3 @@ -1,9 +1,17 @@
     3.4  \chapter{Storage and File System Management}
     3.5  
     3.6 -It is possible to directly export any Linux block device in dom0 to
     3.7 -another domain, or to export filesystems / devices to virtual machines
     3.8 -using standard network protocols (e.g.\ NBD, iSCSI, NFS, etc.).  This
     3.9 -chapter covers some of the possibilities.
    3.10 +Storage can be made available to virtual machines in a number of
    3.11 +different ways.  This chapter covers some possible configurations.
    3.12 +
    3.13 +The most straightforward method is to export a physical block device (a
    3.14 +hard drive or partition) from dom0 directly to the guest domain as a
    3.15 +virtual block device (VBD).
    3.16 +
    3.17 +Storage may also be exported from a filesystem image or a partitioned
    3.18 +filesystem image as a \emph{file-backed VBD}.
    3.19 +
    3.20 +Finally, standard network storage protocols such as NBD, iSCSI, NFS,
    3.21 +etc., can be used to provide storage to virtual machines.
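As a preview of the file-backed case, an empty image file can be created with standard tools. A sketch (the path and size are arbitrary examples):

```shell
# Create a sparse 16 MB image file that could later back a VBD.
# The path and size here are illustrative only.
dd if=/dev/zero of=/tmp/vbd-demo.img bs=1M seek=16 count=0 2>/dev/null
ls -l /tmp/vbd-demo.img
```

Such an image would then be formatted and exported to a guest via a \path{file:} disk directive, as covered later in this chapter.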
    3.22  
    3.23  
    3.24  \section{Exporting Physical Devices as VBDs}
     4.1 --- a/docs/src/user/glossary.tex	Fri Dec 02 17:13:55 2005 -0700
     4.2 +++ b/docs/src/user/glossary.tex	Fri Dec 02 19:59:53 2005 -0700
     4.3 @@ -57,14 +57,19 @@
     4.4  
     4.5  \item[Shadow pagetables] A technique for hiding the layout of machine
     4.6    memory from a virtual machine's operating system.  Used in some {\bf
     4.7 -    VMMs} to provide the illusion of contiguous physical memory, in
     4.8 +  VMMs} to provide the illusion of contiguous physical memory, in
     4.9    Xen this is used during {\bf live migration}.
    4.10  
     4.11 +\item[Virtual Block Device] Persistent storage available to a virtual
    4.12 +  machine, providing the abstraction of an actual block storage device.
    4.13 +  {\bf VBD}s may be actual block devices, filesystem images, or
    4.14 +  remote/network storage.
    4.15 +
    4.16  \item[Virtual Machine] The environment in which a hosted operating
    4.17    system runs, providing the abstraction of a dedicated machine.  A
    4.18    virtual machine may be identical to the underlying hardware (as in
    4.19    {\bf full virtualisation}, or it may differ, as in {\bf
    4.20 -    paravirtualisation}).
    4.21 +  paravirtualisation}).
    4.22  
    4.23  \item[VMM] Virtual Machine Monitor - the software that allows multiple
    4.24    virtual machines to be multiplexed on a single physical machine.
     5.1 --- a/docs/src/user/installation.tex	Fri Dec 02 17:13:55 2005 -0700
     5.2 +++ b/docs/src/user/installation.tex	Fri Dec 02 19:59:53 2005 -0700
     5.3 @@ -35,10 +35,9 @@ if you wish to build from source.
     5.4  Once you have satisfied these prerequisites, you can now install either
     5.5  a binary or source distribution of Xen.
     5.6  
     5.7 -
     5.8  \section{Installing from Binary Tarball}
     5.9  
    5.10 -Pre-built tarballs are available for download from the Xen download
    5.11 +Pre-built tarballs are available for download from the XenSource downloads
    5.12  page:
    5.13  \begin{quote} {\tt http://www.xensource.com/downloads/}
    5.14  \end{quote}
    5.15 @@ -53,7 +52,22 @@ Once you've downloaded the tarball, simp
    5.16  Once you've installed the binaries you need to configure your system as
    5.17  described in Section~\ref{s:configure}.
    5.18  
    5.19 +\section{Installing from RPMs}
    5.20 +Pre-built RPMs are available for download from the XenSource downloads
    5.21 +page:
    5.22 +\begin{quote} {\tt http://www.xensource.com/downloads/}
    5.23 +\end{quote}
    5.24  
     5.25 +Once you've downloaded the RPMs, you typically install them with the \path{rpm} command:
     5.26 +\begin{verbatim}
     5.27 +# rpm -ivh rpmname
    5.28 +\end{verbatim}
    5.29 +
    5.30 +See the instructions and the Release Notes for each RPM set referenced at:
    5.31 +  \begin{quote}
    5.32 +    {\tt http://www.xensource.com/downloads/}.
    5.33 +  \end{quote}
    5.34 + 
    5.35  \section{Installing from Source}
    5.36  
    5.37  This section describes how to obtain, build and install Xen from source.
    5.38 @@ -88,9 +102,9 @@ or as a clone of our master Mercurial re
    5.39  % \item[\path{tools/}] Xen node controller daemon (Xend), command line
    5.40  %   tools, control libraries
    5.41  % \item[\path{xen/}] The Xen VMM.
    5.42 +% \item[\path{buildconfigs/}] Build configuration files
    5.43  % \item[\path{linux-*-xen-sparse/}] Xen support for Linux.
    5.44 -% \item[\path{linux-*-patches/}] Experimental patches for Linux.
    5.45 -% \item[\path{netbsd-*-xen-sparse/}] Xen support for NetBSD.
    5.46 +% \item[\path{patches/}] Experimental patches for Linux.
    5.47  % \item[\path{docs/}] Various documentation files for users and
    5.48  %   developers.
    5.49  % \item[\path{extras/}] Bonus extras.
    5.50 @@ -221,7 +235,7 @@ destinations.
    5.51  
    5.52  %% Files in \path{install/boot/} include:
    5.53  %% \begin{itemize}
    5.54 -%% \item \path{install/boot/xen-2.0.gz} Link to the Xen 'kernel'
    5.55 +%% \item \path{install/boot/xen-3.0.gz} Link to the Xen 'kernel'
    5.56  %% \item \path{install/boot/vmlinuz-2.6-xen0} Link to domain 0
    5.57  %%   XenLinux kernel
    5.58  %% \item \path{install/boot/vmlinuz-2.6-xenU} Link to unprivileged
    5.59 @@ -249,10 +263,12 @@ An entry should be added to \path{grub.c
    5.60  This file is sometimes called \path{menu.lst}, depending on your
    5.61  distribution. The entry should look something like the following:
    5.62  
    5.63 +%% KMSelf Thu Dec  1 19:06:13 PST 2005 262144 is useful for RHEL/RH and
    5.64 +%% related Dom0s.
    5.65  {\small
    5.66  \begin{verbatim}
    5.67  title Xen 3.0 / XenLinux 2.6
    5.68 -  kernel /boot/xen-3.0.gz dom0_mem=131072
    5.69 +  kernel /boot/xen-3.0.gz dom0_mem=262144
    5.70    module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro console=tty0
    5.71  \end{verbatim}
    5.72  }
    5.73 @@ -265,7 +281,7 @@ Section~\ref{s:xboot}.
    5.74  
    5.75  The module line of the configuration describes the location of the
    5.76  XenLinux kernel that Xen should start and the parameters that should be
    5.77 -passed to it. Tthese are standard Linux parameters, identifying the root
    5.78 +passed to it. These are standard Linux parameters, identifying the root
    5.79  device and specifying it be initially mounted read only and instructing
    5.80  that console output be sent to the screen. Some distributions such as
    5.81  SuSE do not require the \path{ro} parameter.
    5.82 @@ -276,19 +292,82 @@ SuSE do not require the \path{ro} parame
    5.83  %%     kernel command line, since the partition won't be remounted rw
    5.84  %%     during boot.  }}
    5.85  
    5.86 -If you want to use an initrd, just add another \path{module} line to the
    5.87 -configuration, like: {\small
    5.88 +To use an initrd, add another \path{module} line to the configuration,
    5.89 +like: {\small
    5.90  \begin{verbatim}
    5.91    module /boot/my_initrd.gz
    5.92  \end{verbatim}
    5.93  }
    5.94  
    5.95 +%% KMSelf Thu Dec  1 19:05:30 PST 2005 Other configs as an appendix?
    5.96 +
    5.97  When installing a new kernel, it is recommended that you do not delete
    5.98  existing menu options from \path{menu.lst}, as you may wish to boot your
    5.99  old Linux kernel in future, particularly if you have problems.
   5.100  
   5.101  \subsection{Serial Console (optional)}
   5.102  
   5.103 +Serial console access allows you to manage, monitor, and interact with
   5.104 +your system over a serial console.  This can allow access from another
    5.105 +nearby system via a null-modem (``LapLink'') cable, remotely via a serial
   5.106 +concentrator, or for debugging an emulator such as Qemu.
   5.107 +
    5.108 +Your system's BIOS, bootloader (GRUB), Xen, Linux, and login access must
   5.109 +each be individually configured for serial console access.  It is
   5.110 +\emph{not} strictly necessary to have each component fully functional,
   5.111 +but it can be quite useful.
   5.112 +
   5.113 +For general information on serial console configuration under Linux,
   5.114 +refer to the ``Remote Serial Console HOWTO'' at The Linux Documentation
   5.115 +Project:  {\tt http://www.tldp.org}.
   5.116 +
   5.117 +\subsubsection{Serial Console BIOS configuration}
   5.118 +
   5.119 +Enabling system serial console output neither enables nor disables
   5.120 +serial capabilities in GRUB, Xen, or Linux, but may make remote
   5.121 +management of your system more convenient by displaying POST and other
   5.122 +boot messages over serial port and allowing remote BIOS configuration.
   5.123 +
   5.124 +Refer to your hardware vendor's documentation for capabilities and
   5.125 +procedures to enable BIOS serial redirection.
   5.126 +
   5.127 +
   5.128 +\subsubsection{Serial Console GRUB configuration}
   5.129 +
   5.132 +Enabling GRUB serial console output neither enables nor disables Xen or
    5.133 +Linux serial capabilities, but may make remote management of your system
   5.134 +more convenient by displaying GRUB prompts, menus, and actions over
   5.135 +serial port and allowing remote GRUB management.
   5.136 +
   5.137 +Adding the following two lines to your GRUB configuration file,
   5.138 +typically \path{/boot/grub/menu.lst} or \path{/boot/grub/grub.conf}
    5.139 +depending on your distribution, will enable GRUB serial output.
   5.140 +
   5.141 +\begin{quote} {\small \begin{verbatim}
   5.142 +  serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
   5.143 +  terminal --timeout=10 serial console
   5.144 +\end{verbatim}}
   5.145 +\end{quote}
   5.146 +
   5.147 +Note that when both the serial port and the local monitor and keyboard
    5.148 +are enabled, the text ``Press any key to continue.'' will appear on both.
   5.149 +Pressing a key on one device will cause GRUB to display to that device.
   5.150 +The other device will see no output.  If no key is pressed before the
   5.151 +timeout period expires, the system will boot to the default GRUB boot
   5.152 +entry.
   5.153 +
   5.154 +Please refer to the GRUB info documentation for further information.
   5.155 +
   5.156 +
   5.157 +\subsubsection{Serial Console Xen configuration}
   5.158 +
   5.159 +Enabling Xen serial console output neither enables nor disables Linux
   5.160 +kernel output or logging in to Linux over serial port.  It does however
   5.161 +allow you to monitor and log the Xen boot process via serial console and
   5.162 +can be very useful in debugging.
   5.163 +
   5.164  %% kernel /boot/xen-2.0.gz dom0_mem=131072 com1=115200,8n1
   5.165  %% module /boot/vmlinuz-2.6-xen0 root=/dev/sda4 ro
   5.166  
   5.167 @@ -306,14 +385,44 @@ stop bit and no parity. Modify these par
   5.168  One can also configure XenLinux to share the serial console; to achieve
   5.169  this append ``\path{console=ttyS0}'' to your module line.
   5.170  
   5.171 -If you wish to be able to log in over the XenLinux serial console it is
   5.172 -necessary to add a line into \path{/etc/inittab}. Add the line:
   5.173 -\begin{quote} {\small {\tt c:2345:respawn:/sbin/mingetty ttyS0}}
   5.174 +
   5.175 +\subsubsection{Serial Console Linux configuration}
   5.176 +
   5.177 +Enabling Linux serial console output at boot neither enables nor
   5.178 +disables logging in to Linux over serial port.  It does however allow
   5.179 +you to monitor and log the Linux boot process via serial console and can be
   5.180 +very useful in debugging.
   5.181 +
   5.182 +To enable Linux output at boot time, add the parameter
   5.183 +\path{console=ttyS0} (or ttyS1, ttyS2, etc.) to your kernel GRUB line.
   5.184 +Under Xen, this might be:
   5.185 +\begin{quote} {\small \begin{verbatim}
    5.186 +  module /vmlinuz-2.6-xen0 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200
   5.187 +\end{verbatim}}
   5.188  \end{quote}
   5.189 +to enable output over ttyS0 at 115200 baud.
   5.190  
   5.191 -and you should be able to log in. To successfully log in as root over
   5.192 -the serial line will require adding \path{ttyS0} to
   5.193 -\path{/etc/securetty} if it is not already there.
   5.194 +
   5.195 +
   5.196 +\subsubsection{Serial Console Login configuration}
   5.197 +
    5.198 +Logging in to Linux via serial console, under Xen or otherwise, requires
    5.199 +that a login prompt be started on the serial port.  To permit root
   5.200 +logins over serial console, the serial port must be added to
   5.201 +\path{/etc/securetty}.
   5.202 +
    5.203 +To automatically start a login prompt over the serial port, add the line:
    5.204 +\begin{quote} {\small {\tt c:2345:respawn:/sbin/mingetty
    5.205 +ttyS0}} \end{quote} to \path{/etc/inittab}.  Run \path{init q} to force
    5.206 +a reload of your inittab and start the getty.
   5.207 +
   5.208 +To enable root logins, add \path{ttyS0} to \path{/etc/securetty} if not
   5.209 +already present.
   5.210 +
    5.211 +Your distribution may use an alternate getty; options include getty,
   5.212 +mgetty, agetty, and others.  Consult your distribution's documentation
   5.213 +for further information.
   5.214 +
   5.215  
   5.216  \subsection{TLS Libraries}
   5.217  
   5.218 @@ -353,4 +462,4 @@ usually use.
   5.219  
   5.220  When the boot completes, you should be able to log into your system as
   5.221  usual. If you are unable to log in, you should still be able to reboot
   5.222 -with your normal Linux kernel.
   5.223 +with your normal Linux kernel by selecting it at the GRUB prompt.
     6.1 --- a/docs/src/user/introduction.tex	Fri Dec 02 17:13:55 2005 -0700
     6.2 +++ b/docs/src/user/introduction.tex	Fri Dec 02 19:59:53 2005 -0700
     6.3 @@ -8,7 +8,7 @@ close-to-native performance. The virtual
     6.4  enterprise-grade functionality, including:
     6.5  
     6.6  \begin{itemize}
     6.7 -\item Virtual machines with performance close to native hardware.
     6.8 +\item Virtual machines with performance typically 94-98\% of native hardware.
     6.9  \item Live migration of running virtual machines between physical hosts.
    6.10  \item Excellent hardware support. Supports most Linux device drivers.
    6.11  \item Sand-boxed, re-startable device drivers.
    6.12 @@ -26,13 +26,26 @@ underlying native hardware. Even though 
    6.13  explicitly support Xen, a key feature is that user space applications
    6.14  and libraries \emph{do not} require modification.
    6.15  
    6.16 -Xen support is available for increasingly many operating systems: right
    6.17 -now, Linux and NetBSD are available for Xen 3.0. A FreeBSD port is
     6.18 +With hardware CPU virtualization as provided by Intel VT and AMD
     6.19 +Pacifica technology, it is possible to run an unmodified guest OS
     6.20 +kernel.  No porting of the OS is required, although some additional
     6.21 +driver support is necessary within Xen itself.  Unlike traditional
     6.22 +full virtualization hypervisors, which suffer a tremendous performance
     6.23 +overhead, Xen combined with VT or Pacifica offers superb performance
     6.24 +for para-virtualized guest operating systems alongside full support
     6.25 +for unmodified guests, which run natively on the processor without
     6.26 +need for emulation.
     6.27 +Full support for VT and Pacifica chipsets will appear in early 2006.
    6.28 +
    6.29 +Xen support is available for increasingly many operating systems:
    6.30 +currently, Linux and NetBSD are available for Xen 3.0. A FreeBSD port is
    6.31  undergoing testing and will be incorporated into the release soon. Other
OS ports, including Plan 9, are in progress. We hope that the arch-xen
    6.33  patches will be incorporated into the mainstream releases of these
    6.34  operating systems in due course (as has already happened for NetBSD).
    6.35  
    6.36 +%% KMSelf Thu Dec  1 14:59:02 PST 2005 PPC port status?
    6.37 +
    6.38  Possible usage scenarios for Xen include:
    6.39  
    6.40  \begin{description}
     7.1 --- a/docs/src/user/memory_management.tex	Fri Dec 02 17:13:55 2005 -0700
     7.2 +++ b/docs/src/user/memory_management.tex	Fri Dec 02 19:59:53 2005 -0700
     7.3 @@ -2,7 +2,7 @@
     7.4  
     7.5  \section{Managing Domain Memory}
     7.6  
     7.7 -XenLinux domains have the ability to relinquish / reclaim machine
     7.8 +XenLinux domains have the ability to relinquish/reclaim machine
     7.9  memory at the request of the administrator or the user of the domain.
    7.10  
    7.11  \subsection{Setting memory footprints from dom0}