ia64/xen-unstable

changeset 2729:62cd5cf03a25

bitkeeper revision 1.1159.1.271 (417cf56eQ1nMU54kmKi7G1wblYMknQ)

Update docs a bit and remove howtos that were outdated or mostly
covered by the main manual.
author mwilli2@equilibrium.research
date Mon Oct 25 12:45:34 2004 +0000 (2004-10-25)
parents 855925dd3bae
children 145b7783c604
files .rootkeys docs/HOWTOs/Console-HOWTO docs/HOWTOs/Sched-HOWTO docs/HOWTOs/VBD-HOWTO docs/HOWTOs/Xen-HOWTO docs/user.tex
line diff
     1.1 --- a/.rootkeys	Mon Oct 25 10:31:11 2004 +0000
     1.2 +++ b/.rootkeys	Mon Oct 25 12:45:34 2004 +0000
     1.3 @@ -7,10 +7,6 @@ 3eb788d6Kleck_Cut0ouGneviGzliQ Makefile
     1.4  3f5ef5a24IaQasQE2tyMxrfxskMmvw README
     1.5  3f5ef5a2l4kfBYSQTUaOyyD76WROZQ README.CD
     1.6  3f69d8abYB1vMyD_QVDvzxy5Zscf1A TODO
     1.7 -405ef604hIZH5pGi2uwlrlSvUMrutw docs/HOWTOs/Console-HOWTO
     1.8 -4083e798FbE1MIsQaIYvjnx1uvFhBg docs/HOWTOs/Sched-HOWTO
     1.9 -40083bb4LVQzRqA3ABz0__pPhGNwtA docs/HOWTOs/VBD-HOWTO
    1.10 -4021053fmeFrEyPHcT8JFiDpLNgtHQ docs/HOWTOs/Xen-HOWTO
    1.11  4022a73cgxX1ryj1HgS-IwwB6NUi2A docs/HOWTOs/XenDebugger-HOWTO
    1.12  3f9e7d53iC47UnlfORp9iC1vai6kWw docs/Makefile
    1.13  412f4bd9sm5mCQ8BkrgKcAKZGadq7Q docs/blkif-drivers-explained.txt
     2.1 --- a/docs/HOWTOs/Console-HOWTO	Mon Oct 25 10:31:11 2004 +0000
     2.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     2.3 @@ -1,85 +0,0 @@
     2.4 -    New console I/O infrastructure in Xen 2.0
     2.5 -    =========================================
     2.6 -
     2.7 -    Keir Fraser, University of Cambridge, 3rd June 2004
     2.8 -
     2.9 - I thought I'd write a quick note about using the new console I/O
    2.10 - infrastructure in Xen 2.0. Significant new features compared with 1.2,
    2.11 - and with older revisions of 1.3, include:
    2.12 -  - bi-directional console access
    2.13 -  - log in to a Xenolinux guest OS via its virtual console
    2.14 -  - a new terminal client (replaces the use of telnet in character mode)
    2.15 -  - proper handling of terminal emulation
    2.16 -
    2.17 -Accessing the virtual console from within the guest OS
    2.18 -------------------------------------------------------
    2.19 - Every Xenolinux instance owns a bidirectional 'virtual console'.
    2.20 - The device node to which this console is attached can be configured
    2.21 - by specifying 'xencons=' on the OS command line:
    2.22 -  'xencons=off'  --> disable virtual console
    2.23 -  'xencons=tty'  --> attach console to /dev/tty1 (tty0 at boot-time)
    2.24 -  'xencons=ttyS' --> attach console to /dev/ttyS0
    2.25 - The default is to attach to /dev/tty1, and also to create dummy
    2.26 - devices for /dev/tty2-63 to avoid warnings from many standard distro
    2.27 - startup scripts. The exception is domain 0, which by default attaches
    2.28 - to /dev/ttyS0.
    2.29 -
    2.30 -Domain 0 virtual console
    2.31 -------------------------
    2.32 - The virtual console for domain 0 is shared with Xen's console. For
    2.33 - example, if you specify 'console=com1' as a boot parameter to Xen,
    2.34 - then domain 0 will have bi-directional access to the primary serial
    2.35 - line. Boot-time messages can be directed to the virtual console by
    2.36 - specifying 'console=ttyS0' as a boot parameter to Xenolinux.
    2.37 -
    2.38 -Connecting to the virtual console
    2.39 ----------------------------------
    2.40 - Domain 0 console may be accessed using the supplied 'miniterm' program
    2.41 - if raw serial access is desired. If the Xen machine is connected to a
    2.42 - serial-port server, then the supplied 'xencons' program may be used to
    2.43 - connect to the appropriate TCP port on the server:
    2.44 -  # xencons <server host> <server port>
    2.45 -
    2.46 -Logging in via virtual console
    2.47 -------------------------------
    2.48 - It is possible to log in to a guest OS via its virtual console if a
    2.49 - 'getty' is running. In most domains the virtual console is named tty1
    2.50 - so standard startup scripts and /etc/inittab should work
    2.51 - fine. Furthermore, tty2-63 are created as dummy console devices to
    2.52 - suppress warnings from standard startup scripts. If the OS has
    2.53 - attached the virtual console to /dev/ttyS0 then you will need to
    2.54 - start a 'mingetty' on that device node.
    2.55 -
    2.56 -Virtual console for other domains
    2.57 ----------------------------------
    2.58 - Every guest OS has a virtual console that is accessible via
    2.59 - 'console=tty0' at boot time (or 'console=xencons0' for domain 0), and
    2.60 - mingetty running on /dev/tty1 (or /dev/xen/cons for domain 0).
    2.61 - However, domains other than domain 0 do not have access to the
    2.62 - physical serial line. Instead, their console data is sent to and from
    2.63 - a control daemon running in domain 0. When properly installed, this
    2.64 - daemon can be started from the init scripts (e.g., rc.local):
    2.65 -  # /usr/sbin/xend start
    2.66 -
    2.67 - Alternatively, Redhat- and LSB-compatible Linux installations can use
    2.68 - the provided init.d script. To integrate startup and shutdown of xend
    2.69 - in such a system, you will need to run a few configuration commands:
    2.70 -  # chkconfig --add xend
    2.71 -  # chkconfig --level 35 xend on
    2.72 -  # chkconfig --level 01246 xend off
    2.73 - This will avoid the need to run xend manually from rc.local, for example.
    2.74 -
    2.75 - Note that, when a domain is created using xc_dom_create.py, xend MUST
    2.76 - be running. If everything is set up correctly then xc_dom_create will
    2.77 - print the local TCP port to which you should connect to perform
    2.78 - console I/O. A suitable console client is provided by the Python
    2.79 - module xenctl.console_client: running this module from the command
    2.80 - line with <host> and <port> parameters will start a terminal
    2.81 - session. This module is also installed as /usr/bin/xencons, from a
    2.82 - copy in tools/misc/xencons. For example:
    2.83 -  # xencons localhost 9600
    2.84 -
    2.85 - An alternative to manually running a terminal client is to specify
    2.86 - '-c' to xm create, or add 'auto_console=True' to the defaults
    2.87 - file. This will cause xm create to automatically become the
    2.88 - console terminal after starting the domain.
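[ The console client removed above is, at heart, a TCP relay between the
local terminal and the domain's console port.  A minimal sketch of that idea
in Python - the helper names are invented for illustration and this is not
the real xenctl.console_client module: ]

```python
import socket

def open_console(host, port):
    """Connect to a domain's console port, as the xencons client does.
    A real terminal client would then relay bytes between this socket
    and the local tty, handling terminal emulation as it goes."""
    return socket.create_connection((host, port))

def relay(sock, keystrokes):
    """Send input to the console and return whatever the domain wrote
    back.  (The real client relays continuously with select/poll rather
    than one request at a time.)"""
    sock.sendall(keystrokes)
    return sock.recv(4096)
```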
     3.1 --- a/docs/HOWTOs/Sched-HOWTO	Mon Oct 25 10:31:11 2004 +0000
     3.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     3.3 @@ -1,135 +0,0 @@
     3.4 -Xen Scheduler HOWTO
     3.5 -===================
     3.6 -
     3.7 -by Mark Williamson
     3.8 -(c) 2004 Intel Research Cambridge
     3.9 -
    3.10 -
    3.11 -Introduction
    3.12 -------------
    3.13 -
    3.14 -Xen offers a choice of CPU schedulers.  All available schedulers are
    3.15 -included in Xen at compile time and the administrator may select a
    3.16 -particular scheduler using a boot-time parameter to Xen.  It is
    3.17 -expected that administrators will choose the scheduler most
    3.18 -appropriate to their application and configure the machine to boot
    3.19 -with that scheduler.
    3.20 -
    3.21 -Note: the default scheduler is the Borrowed Virtual Time (BVT)
    3.22 -scheduler which was also used in previous releases of Xen.  No
    3.23 -configuration changes are required to keep using this scheduler.
    3.24 -
    3.25 -This file provides a brief description of the CPU schedulers available
    3.26 -in Xen, what they are useful for and the parameters that are used to
    3.27 -configure them.  This information is necessarily fairly technical at
    3.28 -the moment.  The recommended way to fully understand the scheduling
    3.29 -algorithms is to read the relevant research papers.
    3.30 -
    3.31 -The interface to the schedulers is basically "raw" at the moment,
    3.32 -without sanity checking - administrators should be careful when
    3.33 -setting the parameters since it is possible for a mistake to hang
    3.34 -domains, or the entire system (in particular, double check parameters
    3.35 -for sanity and make sure that DOM0 will get enough CPU time to remain
    3.36 -usable).  Note that xc_dom_control.py takes time values in
    3.37 -nanoseconds.
    3.38 -
    3.39 -Future tools will implement friendlier control interfaces.
    3.40 -
    3.41 -
    3.42 -Borrowed Virtual Time (BVT)
    3.43 ----------------------------
    3.44 -
    3.45 -All releases of Xen have featured the BVT scheduler, which is used to
    3.46 -provide proportional fair shares of the CPU based on weights assigned
    3.47 -to domains.  BVT is "work conserving" - the CPU will never be left
    3.48 -idle if there are runnable tasks.
    3.49 -
    3.50 -BVT uses "virtual time" to make decisions on which domain should be
    3.51 -scheduled on the processor.  Each time a scheduling decision is
    3.52 -required, BVT evaluates the "Effective Virtual Time" of all domains
    3.53 -and then schedules the domain with the least EVT.  Domains are allowed
    3.54 -to "borrow" virtual time by "time warping", which reduces their EVT by
    3.55 -a certain amount, so that they may be scheduled sooner.  In order to
    3.56 -maintain long term fairness, there are limits on when a domain can
    3.57 -time warp and for how long.  [ For more details read the SOSP'99 paper
    3.58 -by Duda and Cheriton ]
    3.59 -
    3.60 -In the Xen implementation, domains time warp when they unblock, so
    3.61 -that domain wakeup latencies are reduced.
    3.62 -
    3.63 -The BVT algorithm uses the following per-domain parameters (set using
    3.64 -xc_dom_control.py cpu_bvtset):
    3.65 -
    3.66 -* mcuadv - the MCU (Minimum Charging Unit) advance determines the
    3.67 -           proportional share of the CPU that a domain receives.  It
    3.68 -           is set inversely proportionally to a domain's sharing weight.
    3.69 -* warp   - the amount of "virtual time" the domain is allowed to warp
    3.70 -           backwards
    3.71 -* warpl  - the warp limit is the maximum time a domain can run warped for
    3.72 -* warpu  - the unwarp requirement is the minimum time a domain must
    3.73 -           run unwarped for before it can warp again
    3.74 -
    3.75 -BVT also has the following global parameter (set using
    3.76 -xc_dom_control.py cpu_bvtslice):
    3.77 -
    3.78 -* ctx_allow - the context switch allowance is similar to the "quantum"
    3.79 -              in traditional schedulers.  It is the minimum time that
     3.80 -              a scheduled domain will be allowed to run before being
    3.81 -              pre-empted.  This prevents thrashing of the CPU.
    3.82 -
    3.83 -BVT can now be selected by passing the 'sched=bvt' argument to Xen at
    3.84 -boot-time and is the default scheduler if no 'sched' argument is
    3.85 -supplied.
    3.86 -
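[ The BVT decision rule described above can be sketched as follows.  This is
an illustrative model with invented field names, not Xen's implementation: ]

```python
def effective_virtual_time(dom):
    """EVT is a domain's actual virtual time, reduced by its warp
    amount while the domain is warping (i.e. borrowing virtual time
    after a wakeup)."""
    return dom["avt"] - (dom["warp"] if dom["warping"] else 0)

def pick_next(runnable):
    """BVT schedules the runnable domain with the least EVT."""
    return min(runnable, key=effective_virtual_time)

def charge(dom, mcus_used):
    """After running, a domain's virtual time advances by mcuadv per
    MCU consumed, so a smaller mcuadv yields a larger CPU share."""
    dom["avt"] += dom["mcuadv"] * mcus_used
```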
    3.87 -Atropos
    3.88 --------
    3.89 -
    3.90 -Atropos is a scheduler originally developed for the Nemesis multimedia
    3.91 -operating system.  Atropos can be used to reserve absolute shares of
    3.92 -the CPU.  It also includes some features to improve the efficiency of
    3.93 -domains that block for I/O and to allow spare CPU time to be shared
    3.94 -out.
    3.95 -
    3.96 -The Atropos algorithm has the following parameters for each domain
    3.97 -(set using xc_dom_control.py cpu_atropos_set):
    3.98 -
    3.99 - * slice    - The length of time per period that a domain is guaranteed.
   3.100 - * period   - The period over which a domain is guaranteed to receive
   3.101 -              its slice of CPU time.
   3.102 - * latency  - The latency hint is used to control how soon after
   3.103 -              waking up a domain should be scheduled.
   3.104 - * xtratime - This is a true (1) / false (0) flag that specifies whether
    3.105 -              a domain should be allowed a share of the system slack time.
   3.106 -
   3.107 -Every domain has an associated period and slice.  The domain should
   3.108 -receive 'slice' nanoseconds every 'period' nanoseconds.  This allows
   3.109 -the administrator to configure both the absolute share of the CPU a
   3.110 -domain receives and the frequency with which it is scheduled.  When
   3.111 -domains unblock, their period is reduced to the value of the latency
   3.112 -hint (the slice is scaled accordingly so that they still get the same
   3.113 -proportion of the CPU).  For each subsequent period, the slice and
   3.114 -period times are doubled until they reach their original values.
   3.115 -
   3.116 -Atropos is selected by adding 'sched=atropos' to Xen's boot-time
   3.117 -arguments.
   3.118 -
   3.119 -Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
   3.120 -more CPU than is available - the utilisation should be kept to
   3.121 -slightly less than 100% in order to ensure predictable behaviour).
   3.122 -
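[ The latency-hint behaviour can be modelled numerically.  A sketch of the
description above, with times in nanoseconds and invented function names: ]

```python
def on_unblock(slice_ns, period_ns, latency_ns):
    """On wakeup the period is reduced to the latency hint, and the
    slice is scaled in proportion so the CPU share is unchanged."""
    return slice_ns * latency_ns // period_ns, latency_ns

def relax(cur_slice, cur_period, orig_slice, orig_period):
    """Each subsequent period, slice and period double until the
    original reservation is restored."""
    return (min(cur_slice * 2, orig_slice),
            min(cur_period * 2, orig_period))
```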
   3.123 -Round-Robin
   3.124 ------------
   3.125 -
   3.126 -The Round-Robin scheduler is provided as a simple example of Xen's
   3.127 -internal scheduler API.  For production systems, one of the other
   3.128 -schedulers should be used, since they are more flexible and more
   3.129 -efficient.
   3.130 -
   3.131 -The Round-robin scheduler has one global parameter (set using
   3.132 -xc_dom_control.py cpu_rrobin_slice):
   3.133 -
   3.134 - * rr_slice - The time for which each domain runs before the next
   3.135 -              scheduling decision is made.
   3.136 -
   3.137 -The Round-Robin scheduler can be selected by adding 'sched=rrobin' to
   3.138 -Xen's boot-time arguments.
     4.1 --- a/docs/HOWTOs/VBD-HOWTO	Mon Oct 25 10:31:11 2004 +0000
     4.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.3 @@ -1,437 +0,0 @@
     4.4 -Virtual Block Devices / Virtual Disks in Xen - HOWTO
     4.5 -====================================================
     4.6 -
     4.7 -HOWTO for Xen 1.2
     4.8 -
     4.9 -Mark A. Williamson (mark.a.williamson@intel.com)
    4.10 -(C) Intel Research Cambridge 2004
    4.11 -
    4.12 -Introduction
    4.13 -------------
    4.14 -
    4.15 -This document describes the new Virtual Block Device (VBD) and Virtual Disk
    4.16 -features available in Xen release 1.2.  First, a brief introduction to some
    4.17 -basic disk concepts on a Xen system:
    4.18 -
    4.19 -Virtual Block Devices (VBDs):
    4.20 -	VBDs are the disk abstraction provided by Xen.  All XenoLinux disk accesses
    4.21 -	go through the VBD driver.  Using the VBD functionality, it is possible
    4.22 -	to selectively grant domains access to portions of the physical disks
    4.23 -	in the system.
    4.24 -
    4.25 -	A virtual block device can also consist of multiple extents from the
    4.26 -	physical disks in the system, allowing them to be accessed as a single
    4.27 -	uniform device from the domain with access to that VBD.  The
    4.28 -	functionality is somewhat similar to that underpinning LVM, since
    4.29 -	you can combine multiple regions from physical devices into a single
    4.30 -	logical device, from the point of view of a guest virtual machine.
    4.31 -
    4.32 -	Everyone who boots Xen / XenoLinux from a hard drive uses VBDs
    4.33 -	but for some uses they can almost be ignored.
    4.34 -
    4.35 -Virtual Disks (VDs):
    4.36 -	VDs are an abstraction built on top of the functionality provided by
    4.37 -	VBDs.  The VD management code maintains a "free pool" of disk space on
    4.38 -	the system that has been reserved for use with VDs.  The tools can
    4.39 -	automatically allocate collections of extents from this free pool to
    4.40 -	create "virtual disks" on demand.
    4.41 -
    4.42 -	VDs can then be used just like normal disks by domains.  VDs appear
    4.43 -	just like any other disk to guest domains, since they use the same VBD
    4.44 -	abstraction, as provided by Xen.
    4.45 -
    4.46 -	Using VDs is optional, since it's always possible to dedicate
    4.47 -	partitions, or entire disks to your virtual machines.  VDs are handy
    4.48 -	when you have a dynamically changing set of virtual machines and you
    4.49 -	don't want to have to keep repartitioning in order to provide them with
    4.50 -	disk space.
    4.51 -
    4.52 -	Virtual Disks are rather like "logical volumes" in LVM.
    4.53 -
    4.54 -If that didn't all make sense, it doesn't matter too much ;-)  Using the
    4.55 -functionality is fairly straightforward and some examples will clarify things.
    4.56 -The text below expands a bit on the concepts involved, finishing up with a
    4.57 -walk-through of some simple virtual disk management tasks.
    4.58 -
    4.59 -
    4.60 -Virtual Block Devices
    4.61 ----------------------
    4.62 -
    4.63 -Before covering VD management, it's worth discussing some aspects of the VBD
    4.64 -functionality that will be useful to know.
    4.65 -
    4.66 -A VBD is made up of a number of extents from physical disk devices.  The
    4.67 -extents for a VBD don't have to be contiguous, or even on the same device.  Xen
    4.68 -performs address translation so that they appear as a single contiguous
    4.69 -device to a domain.
    4.70 -
    4.71 -When the VBD layer is used to give access to entire drives or entire
    4.72 -partitions, the VBDs simply consist of a single extent that corresponds to the
    4.73 -drive or partition used.  Lists of extents are usually only used when virtual
    4.74 -disks (VDs) are being used.
    4.75 -
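[ The address translation described above amounts to a lookup over the
extent list.  A sketch of the idea, with invented field names rather than
Xen's actual data structures: ]

```python
def vbd_translate(extents, sector):
    """Map a sector offset on the guest's flat virtual device onto a
    (physical_device, physical_sector) pair.  `extents` is an ordered
    list of (device, start, length) tuples; extents need not be
    contiguous or on the same device."""
    base = 0
    for device, start, length in extents:
        if sector < base + length:
            return device, start + (sector - base)
        base += length
    raise ValueError("sector is beyond the end of the VBD")
```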
    4.76 -Xen 1.2 and its associated XenoLinux release support automatic registration /
    4.77 -removal of VBDs.  It has always been possible to add a VBD to a running
    4.78 -XenoLinux domain but it was then necessary to run the "xen_vbd_refresh" tool in
    4.79 -order for the new device to be detected.  Nowadays, when a VBD is added, the
    4.80 -domain it's added to automatically registers the disk, with no special action
    4.81 -by the user being required.
    4.82 -
    4.83 -Note that it is possible to use the VBD functionality to allow multiple domains
    4.84 -write access to the same areas of disk.  This is almost always a bad thing!
    4.85 -The provided example scripts for creating domains do their best to check that
    4.86 -disk areas are not shared unsafely and will catch many cases of this.  Setting
    4.87 -the vbd_expert variable in config files for xc_dom_create.py controls how
    4.88 -unsafe it allows VBD mappings to be - 0 (read only sharing allowed) should be
    4.89 -right for most people ;-).  Level 1 attempts to allow at most one writer to any
    4.90 -area of disk.  Level 2 allows multiple writers (i.e. anything!).
    4.91 -
    4.92 -
    4.93 -Virtual Disk Management
    4.94 ------------------------
    4.95 -
    4.96 -The VD management code runs entirely in user space.  The code is written in
    4.97 -Python and can therefore be accessed from custom scripts, as well as from the
    4.98 -convenience scripts provided.  The underlying VD database is a SQLite database
    4.99 -in /var/db/xen_vdisks.sqlite.
   4.100 -
   4.101 -Most virtual disk management can be performed using the xc_vd_tool.py script
   4.102 -provided in the tools/examples/ directory of the source tree.  It supports the
   4.103 -following operations:
   4.104 -
   4.105 -initialise -	     "Formats" a partition or disk device for use storing
   4.106 -		     virtual disks.  This does not actually write data to the
   4.107 -		     specified device.  Rather, it adds the device to the VD
   4.108 -		     free-space pool, for later allocation.
   4.109 -
   4.110 -		     You should only add devices that correspond directly to
   4.111 -		     physical disks / partitions - trying to use a VBD that you
   4.112 -		     have created yourself as part of the free space pool has
   4.113 -		     undefined (possibly nasty) results.
   4.114 -
   4.115 -create -	     Creates a virtual disk of specified size by allocating space
   4.116 -		     from the free space pool.  The virtual disk is identified
   4.117 -		     in future by the unique ID returned by this script.
   4.118 -
   4.119 -		     The disk can be given an expiry time, if desired.  For
   4.120 -		     most users, the best idea is to specify a time of 0 (which
   4.121 -		     has the special meaning "never expire") and then
   4.122 -		     explicitly delete the VD when finished with it -
   4.123 -		     otherwise, VDs will disappear if allowed to expire.
   4.124 -
   4.125 -delete -	     Explicitly delete a VD.  Makes it disappear immediately!
   4.126 -
   4.127 -setexpiry -	     Allows the expiry time of a (not yet expired) virtual disk
   4.128 -		     to be modified.  Be aware the VD will disappear when the
   4.129 -		     time has expired.
   4.130 -
   4.131 -enlarge -            Increase the allocation of space to a virtual disk.
   4.132 -		     Currently this will not be immediately visible to running
   4.133 -		     domain(s) using it.  You can make it visible by destroying
   4.134 -		     the corresponding VBDs and then using xc_dom_control.py to
   4.135 -		     add them to the domain again.  Note: doing this to
   4.136 -		     filesystems that are in use may well cause errors in the
   4.137 -		     guest Linux, or even a crash although it will probably be
   4.138 -		     OK if you stop the domain before updating the VBD and
   4.139 -		     restart afterwards.
   4.140 -
   4.141 -import -	     Allocate a virtual disk and populate it with the contents of
   4.142 -		     some disk file.  This can be used to import root file system
   4.143 -		     images or to restore backups of virtual disks, for instance.
   4.144 -
   4.145 -export -	     Write the contents of a virtual disk out to a disk file.
   4.146 -		     Useful for creating disk images for use elsewhere, such as
   4.147 -		     standard root file systems and backups.
   4.148 -
   4.149 -list -		     List the non-expired virtual disks currently available in the
   4.150 -		     system.
   4.151 -
   4.152 -undelete -	     Attempts to recover an expired (or deleted) virtual disk.
   4.153 -
   4.154 -freespace -	     Get the free space (in megabytes) available for allocating
   4.155 -		     new virtual disk extents.
   4.156 -
   4.157 -The functionality provided by these scripts is also available directly from
   4.158 -Python functions in the xenctl.utils module - you can use this functionality in
   4.159 -your own scripts.
   4.160 -
   4.161 -Populating VDs:
   4.162 -
   4.163 -Once you've created a VD, you might want to populate it from DOM0 (for
   4.164 -instance, to put a root file system onto it for a guest domain).  This can be
   4.165 -done by creating a VBD for dom0 to access the VD through - this is discussed
   4.166 -below.
   4.167 -
   4.168 -More detail on how virtual disks work:
   4.169 -
   4.170 -When you "format" a device for virtual disks, the device is logically split up
   4.171 -into extents.  These extents are recorded in the Virtual Disk Management
   4.172 -database in /var/db/xen_vdisks.sqlite.
   4.173 -
    4.174 -When you use xc_vd_tool.py to create a virtual disk, some of the extents in
   4.175 -the free space pool are reallocated for that virtual disk and a record for that
   4.176 -VD is added to the database.  When VDs are mapped into domains as VBDs, the
   4.177 -system looks up the allocated extents for the virtual disk in order to set up
   4.178 -the underlying VBD.
   4.179 -
   4.180 -Free space is identified by the fact that it belongs to an "expired" disk.
   4.181 -When "initialising" with xc_vd_tool.py adds a real device to the free pool, it
   4.182 -actually divides the device into extents and adds them to an already-expired
   4.183 -virtual disk.  The allocated device is not written to during this operation -
   4.184 -its availability is simply recorded into the virtual disks database.
   4.185 -
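[ The allocation that "create" performs is thus pure bookkeeping over the
extent pool - no data is written to the device.  A sketch with invented
names, not the xenctl implementation: ]

```python
def create_vd(free_pool, n_extents, vd_id):
    """Move extents from the free pool (extents belonging to expired
    disks) onto a new virtual-disk record.  Only the database
    bookkeeping changes; the underlying device is untouched."""
    if len(free_pool) < n_extents:
        raise ValueError("insufficient free space")
    allocated = [free_pool.pop(0) for _ in range(n_extents)]
    # An expiry of 0 has the special meaning "never expire".
    return {"id": vd_id, "extents": allocated, "expiry": 0}
```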
   4.186 -If you set an expiry time on a VD, its extents will be liable to be reallocated
   4.187 -to new VDs as soon as that expiry time runs out.  Therefore, be careful when
   4.188 -setting expiry times!  Many users will find it simplest to set all VDs to not
   4.189 -expire automatically, then explicitly delete them later on.
   4.190 -
   4.191 -Deleted / expired virtual disks may sometimes be undeleted - currently this
   4.192 -only works when none of the virtual disk's extents have been reallocated to
   4.193 -other virtual disks, since that's the only situation where the disk is likely
   4.194 -to be fully intact.  You should try undeletion as soon as you realise you've
   4.195 -mistakenly deleted (or allowed to expire) a virtual disk.  At some point in the
   4.196 -future, an "unsafe" undelete which can recover what remains of partially
   4.197 -reallocated virtual disks may also be implemented.
   4.198 -
   4.199 -Security note:
   4.200 -
   4.201 -The disk space for VDs is not zeroed when it is initially added to the free
   4.202 -space pool OR when a VD expires OR when a VD is created.  Therefore, if this is
   4.203 -not done manually it is possible for a domain to read a VD to determine what
   4.204 -was written by previous owners of its constituent extents.  If this is a
   4.205 -problem, users should manually clean VDs in some way either on allocation, or
   4.206 -just before deallocation (automated support for this may be added at a later
   4.207 -date).
   4.208 -
   4.209 -
   4.210 -Side note: The xvd* devices
   4.211 ----------------------------
   4.212 -
   4.213 -The examples in this document make frequent use of the xvd* device nodes for
   4.214 -representing virtual block devices.  It is not a requirement to use these with
   4.215 -Xen, since VBDs can be mapped to any IDE or SCSI device node in the system.
    4.216 -Changing the references to xvd* nodes in the examples below to refer to
   4.217 -some unused hd* or sd* node would also be valid.
   4.218 -
   4.219 -They can be useful when accessing VBDs from dom0, since binding VBDs to xvd*
    4.220 -devices will avoid clashes with real IDE or SCSI drives.
   4.221 -
   4.222 -There is a shell script provided in tools/misc/xen-mkdevnodes to create these
   4.223 -nodes.  Specify on the command line the directory that the nodes should be
   4.224 -placed under (e.g. /dev):
   4.225 -
   4.226 -> cd {root of Xen source tree}/tools/misc/
   4.227 -> ./xen-mkdevnodes /dev
   4.228 -
   4.229 -
   4.230 -Dynamically Registering VBDs
   4.231 -----------------------------
   4.232 -
   4.233 -The domain control tool (xc_dom_control.py) includes the ability to add and
   4.234 -remove VBDs to / from running domains.  As usual, the command format is:
   4.235 -
   4.236 -xc_dom_control.py [operation] [arguments]
   4.237 -
   4.238 -The operations (and their arguments) are as follows:
   4.239 -
   4.240 -vbd_add dom uname dev mode - Creates a VBD corresponding to either a physical
   4.241 -		             device or a virtual disk and adds it as a
   4.242 -		             specified device under the target domain, with
   4.243 -		             either read or write access.
   4.244 -
   4.245 -vbd_remove dom dev	   - Removes the VBD associated with a specified device
   4.246 -			     node from the target domain.
   4.247 -
   4.248 -These scripts are most useful when populating VDs.  VDs can't be populated
   4.249 -directly, since they don't correspond to real devices.  Using:
   4.250 -
   4.251 -  xc_dom_control.py vbd_add 0 vd:your_vd_id /dev/whatever w
   4.252 -
   4.253 -You can make a virtual disk available to DOM0.  Sensible devices to map VDs to
   4.254 -in DOM0 are the /dev/xvd* nodes, since that makes it obvious that they are Xen
   4.255 -virtual devices that don't correspond to real physical devices.
   4.256 -
   4.257 -You can then format, mount and populate the VD through the nominated device
   4.258 -node.  When you've finished, use:
   4.259 -
   4.260 -  xc_dom_control.py vbd_remove 0 /dev/whatever
   4.261 -
   4.262 -To revoke DOM0's access to it.  It's then ready for use in a guest domain.
   4.263 -
   4.264 -
   4.265 -
   4.266 -You can also use this functionality to grant access to a physical device to a
   4.267 -guest domain - you might use this to temporarily share a partition, or to add
   4.268 -access to a partition that wasn't granted at boot time.
   4.269 -
   4.270 -When playing with VBDs, remember that in general, it is only safe for two
   4.271 -domains to have access to a file system if they both have read-only access.  You
   4.272 -shouldn't be trying to share anything which is writable, even if only by one
   4.273 -domain, unless you're really sure you know what you're doing!
   4.274 -
   4.275 -
   4.276 -Granting access to real disks and partitions
   4.277 ---------------------------------------------
   4.278 -
   4.279 -During the boot process, Xen automatically creates a VBD for each physical disk
   4.280 -and gives Dom0 read / write access to it.  This makes it look like Dom0 has
   4.281 -normal access to the disks, just as if Xen wasn't being used - in reality, even
   4.282 -Dom0 talks to disks through Xen VBDs.
   4.283 -
   4.284 -To give another domain access to a partition or whole disk then you need to
   4.285 -create a corresponding VBD for that partition, for use by that domain.  As for
   4.286 -virtual disks, you can grant access to a running domain, or specify that the
   4.287 -domain should have access when it is first booted.
   4.288 -
   4.289 -To grant access to a physical partition or disk whilst a domain is running, use
    4.290 -the xc_dom_control.py script - the usage is very similar to the case of adding
    4.291 -virtual disks to a running domain (described above).  Specify the device
   4.292 -as "phy:device", where device is the name of the device as seen from domain 0,
   4.293 -or from normal Linux without Xen.  For instance:
   4.294 -
   4.295 -> xc_dom_control.py vbd_add 2 phy:hdc /dev/whatever r
   4.296 -
   4.297 -Will grant domain 2 read-only access to the device /dev/hdc (as seen from Dom0
   4.298 -/ normal Linux running on the same machine - i.e. the master drive on the
   4.299 -secondary IDE chain), as /dev/whatever in the target domain.
   4.300 -
   4.301 -Note that you can use this within domain 0 to map disks / partitions to other
   4.302 -device nodes within domain 0.  For instance, you could map /dev/hda to also be
   4.303 -accessible through /dev/xvda.  This is not generally recommended, since if you
   4.304 -(for instance) mount both device nodes read / write you could cause corruption
   4.305 -to the underlying filesystem.  It's also quite confusing ;-)
   4.306 -
   4.307 -To grant a domain access to a partition or disk when it boots, the appropriate
   4.308 -VBD needs to be created before the domain is started.  This can be done very
   4.309 -easily using the tools provided.  To specify this to the xc_dom_create.py tool
   4.310 -(either in a startup script or on the command line) use triples of the format:
   4.311 -
   4.312 -  phy:dev,target_dev,perms
   4.313 -
   4.314 -Where dev is the device name as seen from Dom0, target_dev is the device you
   4.315 -want it to appear as in the target domain and perms is 'w' if you want to give
   4.316 -write privileges, or 'r' otherwise.
   4.317 -
   4.318 -These may either be specified on the command line or in an initialisation
   4.319 -script.  For instance, to grant the same access rights as described by the
   4.320 -command example above, you would use the triple:
   4.321 -
   4.322 -  phy:hdc,/dev/whatever,r
   4.323 -
   4.324 -If you are using a config file, then you should add this triple into the
   4.325 -vbd_list variable, for instance using the line:
   4.326 -
    4.327 -  vbd_list = [ ('phy:hdc', '/dev/whatever', 'r') ]
   4.328 -
   4.329 -(Note that you need to use quotes here, since config files are really small
   4.330 -Python scripts.)
   4.331 -
   4.332 -To specify the mapping on the command line, you'd use the -d switch and supply
   4.333 -the triple as the argument, e.g.:
   4.334 -
   4.335 -> xc_dom_create.py [other arguments] -d phy:hdc,/dev/whatever,r
   4.336 -
   4.337 -(You don't need to explicitly quote things in this case.)
   4.338 -
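The dev,target_dev,perms triple format described above can be sketched with a
small Python helper.  Note this is a hypothetical illustration, not part of the
Xen tools - it just mirrors the parsing that the -d switch implies:

```python
def parse_vbd(triple):
    # Split a 'phy:hdc,/dev/whatever,r' style triple into the device as
    # seen from Dom0, the target device node, and the permission flag.
    dev, target, perms = triple.split(',')
    if perms not in ('r', 'w'):
        raise ValueError("perms must be 'r' (read-only) or 'w' (read/write)")
    return dev, target, perms

print(parse_vbd('phy:hdc,/dev/whatever,r'))
```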
   4.339 -
   4.340 -Walk-through: Booting a domain from a VD
   4.341 -----------------------------------------
   4.342 -
   4.343 -As an example, here is a sequence of commands you might use to create a virtual
   4.344 -disk, populate it with a root file system and boot a domain from it.  These
   4.345 -steps assume that you've installed the example scripts somewhere on your PATH -
   4.346 -if you haven't done that, you'll need to specify a fully qualified pathname in
    4.347 -the examples below.  It is also assumed that you know how to use the
    4.348 -xc_dom_create.py tool (apart from configuring virtual disks!).
   4.349 -
   4.350 -[ This example is intended only for users of virtual disks (VDs).  You don't
   4.351 -need to follow this example if you'll be booting a domain from a dedicated
   4.352 -partition, since you can create that partition and populate it, directly from
   4.353 -Dom0, as normal. ]
   4.354 -
    4.355 -First, if you haven't done so already, you'll need to initialise the free
    4.356 -space pool by adding a real partition to it.  The details are stored in the
    4.357 -database, so you'll only need to do this once.  You can also use this command
    4.358 -to add further partitions to the existing free space pool.
   4.359 -
   4.360 -> xc_vd_tool.py format /dev/<real partition>
   4.361 -
   4.362 -Now you'll want to allocate the space for your virtual disk.  Do so using the
   4.363 -following, specifying the size in megabytes.
   4.364 -
   4.365 -> xc_vd_tool.py create <size in megabytes>
   4.366 -
   4.367 -At this point, the program will tell you the virtual disk ID.  Note it down, as
   4.368 -it is how you will identify the virtual device in future.
   4.369 -
   4.370 -If you don't want the VD to be bootable (i.e. you're booting a domain from some
   4.371 -other medium and just want it to be able to access this VD), you can simply add
   4.372 -it to the vbd_list used by xc_dom_create.py, either by putting it in a config
    4.373 -file or by specifying it on the command line.  Formatting / populating of the
    4.374 -VD could then be done from that domain once it's started.
   4.375 -
   4.376 -If you want to boot off your new VD as well then you need to populate it with a
   4.377 -standard Linux root filesystem.  You'll need to temporarily add the VD to DOM0
   4.378 -in order to do this.  To give DOM0 r/w access to the VD, use the following
   4.379 -command line, substituting the ID you got earlier.
   4.380 -
   4.381 -> xc_dom_control.py vbd_add 0 vd:<id> /dev/xvda w
   4.382 -
   4.383 -This attaches the VD to the device /dev/xvda in domain zero, with read / write
    4.384 -privileges - you can use other device nodes if you choose to.
   4.385 -
   4.386 -Now make a filesystem on this device, mount it and populate it with a root
   4.387 -filesystem.  These steps are exactly the same as under normal Linux.  When
   4.388 -you've finished, unmount the filesystem again.
   4.389 -
    4.390 -You should now remove the VD from DOM0.  This will prevent you from
    4.391 -accidentally changing it in DOM0 whilst the guest domain is using it (which
    4.392 -could cause filesystem corruption, and confuse Linux).
   4.393 -
   4.394 -> xc_dom_control.py vbd_remove 0 /dev/xvda
   4.395 -
   4.396 -It should now be possible to boot a guest domain from the VD.  To do this, you
    4.397 -should specify the VD's details in some way so that xc_dom_create.py will
   4.398 -be able to set up the corresponding VBD for the domain to access.  If you're
   4.399 -using a config file, you should include:
   4.400 -
   4.401 -  ('vd:<id>', '/dev/whatever', 'w')
   4.402 -
    4.403 -in the vbd_list, substituting the appropriate virtual disk ID, device node and
   4.404 -read / write setting.
   4.405 -
   4.406 -To specify access on the command line, as you start the domain, you would use
   4.407 -the -d switch (note that you don't need to use quote marks here):
   4.408 -
   4.409 -> xc_dom_create.py [other arguments] -d vd:<id>,/dev/whatever,w
   4.410 -
   4.411 -To tell Linux which device to boot from, you should either include:
   4.412 -
   4.413 -  root=/dev/whatever
   4.414 -
   4.415 -in your cmdline_root in the config file, or specify it on the command line,
   4.416 -using the -R option:
   4.417 -
   4.418 -> xc_dom_create.py [other arguments] -R root=/dev/whatever
   4.419 -
    4.420 -That should be it: sit back and watch your domain boot off its virtual disk!
   4.421 -
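The three per-domain pieces from the walk-through above (the vbd_list entry,
the equivalent -d triple and the kernel root= argument) always move together.
A minimal sketch, using a hypothetical helper name, of generating all three
consistently from a virtual disk ID:

```python
def vd_boot_config(vd_id, target='/dev/xvda'):
    # Build the config-file vbd_list entry, the equivalent -d command-line
    # triple, and the kernel root= argument for booting from a virtual disk.
    return {
        'vbd_list': [('vd:%s' % vd_id, target, 'w')],
        'd_switch': 'vd:%s,%s,w' % (vd_id, target),
        'cmdline_root': 'root=%s' % target,
    }

print(vd_boot_config('1234'))
```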
   4.422 -
   4.423 -Getting help
   4.424 -------------
   4.425 -
    4.426 -The main source of help with Xen is the developers' e-mail list:
    4.427 -<xen-devel@lists.sourceforge.net>.  The developers will help with problems,
    4.428 -listen to feature requests and fix bugs.  It is, however, helpful if you
   4.429 -can look through the mailing list archives and HOWTOs provided to make sure
   4.430 -your question is not answered there.  If you post to the list, please provide
   4.431 -as much information as possible about your setup and your problem.
   4.432 -
   4.433 -There is also a general Xen FAQ, kindly started by Jan van Rensburg, which (at
   4.434 -time of writing) is located at: <http://xen.epiuse.com/xen-faq.txt>.
   4.435 -
   4.436 -Contributing
   4.437 -------------
   4.438 -
    4.439 -Patches and extra documentation are also welcome ;-) and should be posted
    4.440 -to the xen-devel e-mail list.
     5.1 --- a/docs/HOWTOs/Xen-HOWTO	Mon Oct 25 10:31:11 2004 +0000
     5.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     5.3 @@ -1,416 +0,0 @@
     5.4 -###########################################
     5.5 -Xen HOWTO
     5.6 -
     5.7 -University of Cambridge Computer Laboratory
     5.8 -
     5.9 -http://www.cl.cam.ac.uk/netos/xen
    5.10 -#############################
    5.11 -
    5.12 -
    5.13 -Get Xen Source Code
    5.14 -=============================
    5.15 -
    5.16 -The public master BK repository for the 1.2 release lives at:
    5.17 -'bk://xen.bkbits.net/xeno-1.2.bk'
    5.18 -The current unstable release (1.3) is available at:
    5.19 -'bk://xen.bkbits.net/xeno-unstable.bk'
    5.20 -
    5.21 -To fetch a local copy, first download the BitKeeper tools at:
    5.22 -http://www.bitmover.com/download with username 'bitkeeper' and
    5.23 -password 'get bitkeeper'.
    5.24 -
    5.25 -Then install the tools and then run:
    5.26 -# bk clone bk://xen.bkbits.net/xeno-1.2.bk
    5.27 -
    5.28 -Under your current directory, a new directory named 'xeno-1.2.bk' has
     5.29 -been created, which contains all the necessary source code for the
    5.30 -Xen hypervisor and Linux guest OSes.
    5.31 -
     5.32 -To get the newest changes to the repository, run:
    5.33 -# cd xeno-1.2.bk
    5.34 -# bk pull
    5.35 -
    5.36 -
    5.37 -Configuring Xen
    5.38 -=============================
    5.39 -
    5.40 -Xen's build configuration is managed via a set of environment
    5.41 -variables. These should be set before invoking make 
    5.42 -(e.g., 'export debug=y; make', 'debug=y make').
    5.43 -
    5.44 -The options that can be configured are as follows (all options default
    5.45 -to 'n' or off):
    5.46 -
    5.47 - debug=y    -- Enable debug assertions and console output.
    5.48 -               (Primarily useful for tracing bugs in Xen).
    5.49 -
    5.50 - debugger=y -- Enable the in-Xen pervasive debugger (PDB).
    5.51 -               This can be used to debug Xen, guest OSes, and
    5.52 -               applications. For more information see the 
    5.53 -               XenDebugger-HOWTO.
    5.54 -
    5.55 - perfc=y    -- Enable performance-counters for significant events
    5.56 -               within Xen. The counts can be reset or displayed
    5.57 -               on Xen's console via console control keys.
    5.58 -
    5.59 - trace=y    -- Enable per-cpu trace buffers which log a range of
    5.60 -               events within Xen for collection by control
    5.61 -               software.
    5.62 -
    5.63 -
    5.64 -Build Xen
    5.65 -=============================
    5.66 -
    5.67 -Hint: To see how to build Xen and all the control tools, inspect the
    5.68 -tools/misc/xen-clone script in the BK repository. This script can be
    5.69 -used to clone the repository and perform a full build.
    5.70 -
    5.71 -To build Xen manually:
    5.72 -
    5.73 -# cd xeno-1.2.bk/xen
    5.74 -# make clean
    5.75 -# make
    5.76 -
    5.77 -This will (should) produce a file called 'xen' in the current
     5.78 -directory.  This is the ELF 32-bit LSB executable file of Xen.  You
     5.79 -can also find a gzipped version, named 'xen.gz'.
    5.80 -
    5.81 -To install the built files on your server under /usr, type 'make
    5.82 -install' at the root of the BK repository. You will need to be root to
    5.83 -do this!
    5.84 -
    5.85 -Hint: There is also a 'make dist' rule which copies built files to an
    5.86 -install directory just outside the BK repo; if this suits your setup,
    5.87 -go for it.
    5.88 -
    5.89 -
    5.90 -Build Linux as a Xen guest OS
    5.91 -==============================
    5.92 -
    5.93 -This is a little more involved since the repository only contains a
    5.94 -"sparse" tree -- this is essentially an 'overlay' on a standard linux
    5.95 -kernel source tree. It contains only those files currently 'in play'
    5.96 -which are either modified versions of files in the vanilla linux tree,
    5.97 -or brand new files specific to the Xen port.
    5.98 -
    5.99 -So, first you need a vanilla linux-2.4.26 tree, which is located at:
   5.100 -http://www.kernel.org/pub/linux/kernel/v2.4
   5.101 -
   5.102 -Then:
   5.103 -  # mv linux-2.4.26.tar.gz /xeno-1.2.bk
   5.104 -  # cd /xeno-1.2.bk
   5.105 -  # tar -zxvf linux-2.4.26.tar.gz
   5.106 -
   5.107 -You'll find a new directory 'linux-2.4.26' which contains all
    5.108 -the vanilla Linux 2.4.26 kernel source code.
   5.109 -
   5.110 -Hint: You should choose the vanilla linux kernel tree that has the
   5.111 -same version as the "sparse" tree.
   5.112 -
   5.113 -Next, you need to 'overlay' this sparse tree on the full vanilla Linux
   5.114 -kernel tree:
   5.115 -
   5.116 -  # cd /xeno-1.2.bk/xenolinux-2.4.26-sparse
   5.117 -  # ./mkbuildtree ../linux-2.4.26
   5.118 -
   5.119 -Finally, rename the buildtree since it is now a 'xenolinux' buildtree. 
   5.120 -
   5.121 -  # cd /xeno-1.2.bk
   5.122 -  # mv linux-2.4.26 xenolinux-2.4.26
   5.123 -
   5.124 -Now that the buildtree is there, you can build the xenolinux kernel.
   5.125 -The default configuration should work fine for most people (use 'make
   5.126 -oldconfig') but you can customise using one of the other config tools
   5.127 -if you want.
   5.128 -
   5.129 -  # cd /xeno-1.2.bk/xenolinux-2.4.26
   5.130 -  # ARCH=xen make oldconfig   { or menuconfig, or xconfig, or config }  
   5.131 -  # ARCH=xen make dep
   5.132 -  # ARCH=xen make bzImage
   5.133 -
   5.134 -Assuming the build works, you'll end up with
    5.135 -/xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz. This is the
    5.136 -gzipped XenoLinux kernel image.
   5.137 -
   5.138 -
   5.139 -Build the Domain Control Tools
   5.140 -==============================
   5.141 -
   5.142 -Under '/xeno-1.2.bk/tools', there are three sub-directories:
   5.143 -'balloon', 'xc' and 'misc', each containing
    5.144 -a group of tools. You can enter any of the three sub-directories
   5.145 -and type 'make' to compile the corresponding group of tools.
   5.146 -Or you can type 'make' under '/xeno-1.2.bk/tools' to compile
   5.147 -all the tools.
   5.148 -
    5.149 -In order to compile the control-interface library in 'xc' you must
    5.150 -have zlib and its development headers installed. You will also need at
   5.151 -least Python v2.2. 
   5.152 -
   5.153 -'make install' in the tools directory will place executables and
   5.154 -libraries in /usr/bin and /usr/lib. You will need to be root to do this!
   5.155 -
   5.156 -As noted earlier, 'make dist' installs files to a local 'install'
   5.157 -directory just outside the BK repository. These files will then need
   5.158 -to be installed manually onto the server.
   5.159 -
   5.160 -The Example Scripts
   5.161 -===================
   5.162 -
   5.163 -The scripts in tools/examples/ are generally useful for
   5.164 -administering a Xen-based system.  You can install them by running
   5.165 -'make install' in that directory.
   5.166 -
   5.167 -The python scripts (*.py) are the main tools for controlling
   5.168 -Xen domains.
   5.169 -
   5.170 -'defaults' and 'democd' are example configuration files for starting
   5.171 -new domains.
   5.172 -
   5.173 -'xendomains' is a Sys-V style init script for starting and stopping
   5.174 -Xen domains when the system boots / shuts down.
   5.175 -
   5.176 -These will be discussed below in more detail.
   5.177 -
   5.178 -
   5.179 -Installation
   5.180 -==============================
   5.181 -
   5.182 -First:
    5.183 -# cp /xeno-1.2.bk/xen/xen.gz /boot/xen.gz
    5.184 -# cp /xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz /boot/xenolinux.gz
   5.185 -
   5.186 -Second, you must have 'GNU Grub' installed. Then you need to edit
   5.187 -the Grub configuration file '/boot/grub/menu.lst'.
   5.188 -
   5.189 -A typical Grub menu option might look like:
   5.190 -
   5.191 -title Xen 1.2 / XenoLinux 2.4.26
   5.192 -        kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 noht
   5.193 -        module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
   5.194 -
   5.195 -The first line specifies which Xen image to use, and what command line
   5.196 -arguments to pass to Xen. In this case we set the maximum amount of
   5.197 -memory to allocate to domain0, and enable serial I/O at 115200 baud.
   5.198 -We could also disable smp support (nosmp) or disable hyper-threading
    5.199 -support (noht). If you have multiple network interfaces you can use
    5.200 -ifname=ethXX to select which one to use. If your network card is
    5.201 -unsupported, use ifname=dummy.
   5.202 -
   5.203 -The second line specifies which XenoLinux image to use, and the
   5.204 -standard linux command line arguments to pass to the kernel. In this
   5.205 -case, we're configuring the root partition and stating that it should
   5.206 -(initially) be mounted read-only (normal practice). 
   5.207 -
   5.208 -The following is a list of command line arguments to pass to Xen:
   5.209 -
   5.210 - ignorebiostables Disable parsing of BIOS-supplied tables. This may
   5.211 -                  help with some chipsets that aren't fully supported
   5.212 -                  by Xen. If you specify this option then ACPI tables are
   5.213 -                  also ignored, and SMP support is disabled.
   5.214 -
   5.215 - noreboot         Don't reboot the machine automatically on errors.
   5.216 -                  This is useful to catch debug output if you aren't
   5.217 -                  catching console messages via the serial line.
   5.218 -
   5.219 - nosmp            Disable SMP support.
   5.220 -                  This option is implied by 'ignorebiostables'.
   5.221 -
   5.222 - noacpi           Disable ACPI tables, which confuse Xen on some chipsets.
   5.223 -                  This option is implied by 'ignorebiostables'.
   5.224 -
   5.225 - watchdog         Enable NMI watchdog which can report certain failures.
   5.226 -
   5.227 - noht             Disable Hyperthreading.
   5.228 -
   5.229 - badpage=<page number>[,<page number>]*
   5.230 -                  Specify a list of pages not to be allocated for use 
   5.231 -                  because they contain bad bytes. For example, if your
   5.232 -                  memory tester says that byte 0x12345678 is bad, you would
   5.233 -                  place 'badpage=0x12345' on Xen's command line (i.e., the
   5.234 -                  last three digits of the byte address are not included!).
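The conversion from a bad byte address to a page number above is simply a
matter of discarding the low 12 bits (4 kB pages), i.e. the last three hex
digits.  A small illustrative sketch (the helper name is hypothetical):

```python
PAGE_SHIFT = 12  # 4 kB pages: 2**12 bytes per page

def badpage_arg(bad_byte_addrs):
    # Convert faulty byte addresses into the page numbers Xen expects:
    # shifting out the low 12 bits (three hex digits) yields the page.
    pages = sorted({addr >> PAGE_SHIFT for addr in bad_byte_addrs})
    return 'badpage=' + ','.join(hex(p) for p in pages)

print(badpage_arg([0x12345678]))  # badpage=0x12345
```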
   5.235 -
   5.236 - com1=<baud>,DPS[,<io_base>,<irq>]
   5.237 - com2=<baud>,DPS[,<io_base>,<irq>]
   5.238 -                  Xen supports up to two 16550-compatible serial ports.
   5.239 -                  For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
   5.240 -                  9600-baud port, 8 data bits, no parity, 1 stop bit,
   5.241 -                  I/O port base 0x408, IRQ 5.
   5.242 -                  If the I/O base and IRQ are standard (com1:0x3f8,4;
   5.243 -                  com2:0x2f8,3) then they need not be specified.
   5.244 -
   5.245 - console=<specifier list>
   5.246 -                  Specify the destination for Xen console I/O.
   5.247 -                  This is a comma-separated list of, for example:
   5.248 -                   vga:  use VGA console and allow keyboard input
   5.249 -                   com1: use serial port com1
   5.250 -                   com2H: use serial port com2. Transmitted chars will
   5.251 -                          have the MSB set. Received chars must have
   5.252 -                          MSB set.
   5.253 -                   com2L: use serial port com2. Transmitted chars will
   5.254 -                          have the MSB cleared. Received chars must
   5.255 -                          have MSB cleared.
   5.256 -                  The latter two examples allow a single port to be
   5.257 -                  shared by two subsystems (eg. console and
   5.258 -                  debugger). Sharing is controlled by MSB of each
   5.259 -                  transmitted/received character.
   5.260 - [NB. Default for this option is 'com1,vga']
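The MSB-based sharing of a serial port described above can be sketched as a
simple multiplexing scheme.  This is an illustrative model only (the function
names are hypothetical), not Xen's actual implementation:

```python
def encode_com2H(data: bytes) -> bytes:
    # Characters for the com2H subsystem travel with the MSB set.
    return bytes(b | 0x80 for b in data)

def route(received: bytes):
    # Demultiplex a shared port: MSB set -> com2H stream (MSB stripped),
    # MSB clear -> com2L stream.
    high = bytes(b & 0x7F for b in received if b & 0x80)
    low = bytes(b for b in received if not b & 0x80)
    return high, low
```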
   5.261 -
   5.262 - conswitch=<switch-char><auto-switch-char>
   5.263 -                  Specify how to switch serial-console input between
   5.264 -                  Xen and DOM0. The required sequence is CTRL-<switch_char>
   5.265 -                  pressed three times. Specifying '`' disables switching.
   5.266 -                  The <auto-switch-char> specifies whether Xen should
   5.267 -                  auto-switch input to DOM0 when it boots -- if it is 'x'
   5.268 -                  then auto-switching is disabled. Any other value, or
   5.269 -                  omitting the character, enables auto-switching.
   5.270 - [NB. Default for this option is 'a']
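The triple CTRL-<switch_char> sequence above behaves like a small state
machine: three consecutive presses toggle the console focus, and any other
key resets the count.  A hypothetical sketch of that logic:

```python
def make_switcher(switch_char='a'):
    # Track consecutive CTRL-<switch_char> presses; after three in a row,
    # console input focus toggles between Xen and DOM0.
    state = {'count': 0, 'focus': 'Xen'}
    ctrl = chr(ord(switch_char) & 0x1F)  # CTRL-a is byte 0x01, etc.

    def feed(ch):
        if ch == ctrl:
            state['count'] += 1
            if state['count'] == 3:
                state['count'] = 0
                state['focus'] = 'DOM0' if state['focus'] == 'Xen' else 'Xen'
        else:
            state['count'] = 0  # any other key breaks the sequence
        return state['focus']

    return feed
```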
   5.271 -
   5.272 - nmi=<nmi-error-behaviour>
   5.273 -                  Specify what to do with an NMI parity or I/O error.
   5.274 -                  'nmi=fatal':  Xen prints a diagnostic and then hangs.
   5.275 -                  'nmi=dom0':   Inform DOM0 of the NMI.
   5.276 -                  'nmi=ignore': Ignore the NMI.
   5.277 - [NB. Default is 'dom0' ('fatal' for debug builds).]
   5.278 -
   5.279 - dom0_mem=xxx     Set the maximum amount of memory for domain0.
   5.280 -
   5.281 - tbuf_size=xxx    Set the size of the per-cpu trace buffers, in pages
   5.282 -                  (default 1).  Note that the trace buffers are only
   5.283 -                  enabled in debug builds.  Most users can ignore
   5.284 -                  this feature completely.
   5.285 -
   5.286 - sched=xxx        Select the CPU scheduler Xen should use.  The current
   5.287 -                  possibilities are 'bvt', 'atropos' and 'rrobin'.  The
   5.288 -                  default is 'bvt'.  For more information see
   5.289 -                  Sched-HOWTO.txt.
   5.290 -
   5.291 -Boot into Domain 0
   5.292 -==============================
   5.293 -
    5.294 -Reboot your computer; after selecting the kernel to boot, stand back
    5.295 -and watch Xen boot, closely followed by "domain 0" running the
    5.296 -XenoLinux kernel.  Depending on which root partition you have assigned
    5.297 -to the XenoLinux kernel in the Grub configuration file, you can use the
    5.298 -corresponding username / password to log in.
   5.299 -
    5.300 -Once logged in, it should look just like any regular linux box. All
    5.301 -the usual tools and commands should work as normal.
   5.302 -
   5.303 -
   5.304 -Start New Domains
   5.305 -==============================
   5.306 -
   5.307 -You must be 'root' to start new domains.
   5.308 -
   5.309 -Make sure you have successfully configured at least one
   5.310 -physical network interface. Then:
   5.311 -
   5.312 -# xen_nat_enable
   5.313 -
   5.314 -The xc_dom_create.py program is useful for starting Xen domains.
   5.315 -You can specify configuration files using the -f switch on the command
   5.316 -line.  The default configuration is in /etc/xc/defaults.  You can
   5.317 -create custom versions of this to suit your local configuration.
   5.318 -
   5.319 -You can override the settings in a configuration file using command
   5.320 -line arguments to xc_dom_create.py.  However, you may find it simplest
   5.321 -to create a separate configuration file for each domain you start.
   5.322 -
   5.323 -xc_dom_create.py will print the local TCP port to which you should
   5.324 -connect to perform console I/O. A suitable console client is provided
   5.325 -by the Python module xenctl.console_client: running this module from
   5.326 -the command line with <host> and <port> parameters will start a
   5.327 -terminal session. This module is also installed as /usr/bin/xencons,
   5.328 -from a copy in tools/misc/xencons.  An alternative to manually running
   5.329 -a terminal client is to specify '-c' to xc_dom_create.py, or add
   5.330 -'auto_console=True' to the defaults file. This will cause
   5.331 -xc_dom_create.py to automatically become the console terminal after
   5.332 -starting the domain.
   5.333 -
   5.334 -Boot-time output will be directed to this console by default, because
   5.335 -the console name is tty0. It is also possible to log in via the
   5.336 -virtual console --- once again, your normal startup scripts will work
   5.337 -as normal (e.g., by running mingetty on tty1-7).  The device node to
   5.338 -which the virtual console is attached can be configured by specifying
   5.339 -'xencons=' on the OS command line: 
   5.340 - 'xencons=off' --> disable virtual console
   5.341 - 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
   5.342 - 'xencons=ttyS' --> attach console to /dev/ttyS0
   5.343 -
   5.344 -
   5.345 -Manage Running Domains
   5.346 -==============================
   5.347 -
   5.348 -You can see a list of existing domains with:
   5.349 -# xc_dom_control.py list
   5.350 -
   5.351 -In order to stop a domain, you use:
   5.352 -# xc_dom_control.py stop <domain_id>
   5.353 -
   5.354 -To shutdown a domain cleanly use:
   5.355 -# xc_dom_control.py shutdown <domain_id>
   5.356 -
   5.357 -To destroy a domain immediately:
   5.358 -# xc_dom_control.py destroy <domain_id>
   5.359 -
   5.360 -There are other more advanced options, including pinning domains to
   5.361 -specific CPUs and saving / resuming domains to / from disk files.  To
   5.362 -get more information, run the tool without any arguments:
   5.363 -# xc_dom_control.py
   5.364 -
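The lifecycle commands above all share the same shape: a verb, then (except
for 'list') a domain ID.  A minimal sketch of building those command lines
(the wrapper itself is hypothetical and doesn't execute anything):

```python
def dom_control_cmd(op, domain_id=None):
    # Build the xc_dom_control.py argv for a domain lifecycle operation.
    ops = ('list', 'stop', 'shutdown', 'destroy')
    if op not in ops:
        raise ValueError('unknown operation: %s' % op)
    cmd = ['xc_dom_control.py', op]
    if op != 'list':
        cmd.append(str(domain_id))  # all but 'list' take a domain ID
    return cmd

print(dom_control_cmd('shutdown', 3))
```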
   5.365 -There is more information available in the Xen README files, the
   5.366 -VBD-HOWTO and the contributed FAQ / HOWTO documents on the web.
   5.367 -
   5.368 -
   5.369 -Other Control Tasks using Python
   5.370 -================================
   5.371 -
   5.372 -A Python module 'Xc' is installed as part of the tools-install
   5.373 -process. This can be imported, and an 'xc object' instantiated, to
   5.374 -provide access to privileged command operations:
   5.375 -
   5.376 -# import Xc
   5.377 -# xc = Xc.new()
   5.378 -# dir(xc)
   5.379 -# help(xc.domain_create)
   5.380 -
    5.381 -In this way you can see that the 'xc' object carries useful
    5.382 -documentation for you to consult.
   5.383 -
   5.384 -A further package of useful routines (xenctl) is also installed:
   5.385 -
   5.386 -# import xenctl.utils
   5.387 -# help(xenctl.utils)
   5.388 -
   5.389 -You can use these modules to write your own custom scripts or you can
   5.390 -customise the scripts supplied in the Xen distribution.
   5.391 -
   5.392 -
   5.393 -Automatically start / stop domains at boot / shutdown
   5.394 -=====================================================
   5.395 -
   5.396 -A Sys-V style init script for RedHat systems is provided in
   5.397 -tools/examples/xendomains.  When you run 'make install' in that
   5.398 -directory, it should be automatically copied to /etc/init.d/.  You can
   5.399 -then enable it using the chkconfig command, e.g.:
   5.400 -
   5.401 -# chkconfig --add xendomains
   5.402 -
   5.403 -By default, this will start the boot-time domains in runlevels 3, 4
    5.404 -and 5.  To specify that a domain should start at boot-time, place its
    5.405 -configuration file (or a link to it) under /etc/xc/auto/.
   5.406 -
   5.407 -The script will also stop ALL domains when the system is shut down,
   5.408 -even domains that it did not start originally.
   5.409 -
    5.410 -You can also use the "service" command (part of the RedHat standard
    5.411 -distribution) to run this script manually, e.g.:
   5.412 -
   5.413 -# service xendomains start
   5.414 -
   5.415 -Starts all the domains with config files under /etc/xc/auto/.
   5.416 -
   5.417 -# service xendomains stop
   5.418 -
   5.419 -Shuts down ALL running Xen domains.
     6.1 --- a/docs/user.tex	Mon Oct 25 10:31:11 2004 +0000
     6.2 +++ b/docs/user.tex	Mon Oct 25 12:45:34 2004 +0000
     6.3 @@ -1059,8 +1059,15 @@ substituting the appropriate scheduler n
     6.4  and their parameters are included below; future versions of the tools
     6.5  will provide a higher-level interface to these tools.
     6.6  
     6.7 +It is expected that system administrators configure their system to
     6.8 +use the scheduler most appropriate to their needs.  Currently, the BVT
     6.9 +scheduler is the recommended choice, since the Atropos scheduler is
    6.10 +not finished.
    6.11 +
    6.12  \section{Borrowed Virtual Time}
    6.13  
    6.14 +{\tt sched=bvt } (the default) \\ 
    6.15 +
    6.16  BVT provides proportional fair shares of the CPU time.  It has been
    6.17  observed to penalise domains that block frequently (e.g. IO intensive
    6.18  domains), but this can be compensated by using warping. 
    6.19 @@ -1092,26 +1099,28 @@ domains), but this can be compensated by
    6.20    run unwarped for before it can warp again
    6.21  \end{description}
    6.22  
    6.23 -\section{Fair Borrowed Virtual Time}
    6.24 -
    6.25 -This is a derivative for BVT that aims to provide better fairness for
    6.26 -IO intensive domains as well as for CPU intensive domains.
    6.27 -
    6.28 -\subsection{Global Parameters}
    6.29 +\section{Atropos}
    6.30  
    6.31 -Same as for BVT.
    6.32 -
    6.33 -\subsection{Per-domain parameters}
    6.34 -
    6.35 -Same as for BVT.
    6.36 -
    6.37 -\section{Atropos}
    6.38 +{\tt sched=atropos } \\
    6.39  
    6.40  Atropos is a Soft Real Time scheduler.  It provides guarantees about
    6.41  absolute shares of the CPU (with a method for optionally sharing out
    6.42  slack CPU time on a best-effort basis) and can provide timeliness
    6.43  guarantees for latency-sensitive domains.
    6.44  
    6.45 +Every domain has an associated period and slice.  The domain should
    6.46 +receive 'slice' nanoseconds every 'period' nanoseconds.  This allows
    6.47 +the administrator to configure both the absolute share of the CPU a
    6.48 +domain receives and the frequency with which it is scheduled.  When
    6.49 +domains unblock, their period is reduced to the value of the latency
    6.50 +hint (the slice is scaled accordingly so that they still get the same
    6.51 +proportion of the CPU).  For each subsequent period, the slice and
    6.52 +period times are doubled until they reach their original values.
    6.53 +
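A worked numerical example of the scaling above (illustrative numbers, not
defaults): a domain with period 100ms, slice 10ms and latency hint 25ms keeps
its 10\% share throughout the recovery:

```latex
% On unblock, period drops to the latency hint and the slice is scaled
% in proportion; both then double each period until restored.
\[
  (\mathit{period}, \mathit{slice}) :\;
  (100\,\mathrm{ms}, 10\,\mathrm{ms})
  \longrightarrow (25, 2.5)
  \longrightarrow (50, 5)
  \longrightarrow (100, 10)
\]
```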
    6.54 +Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
    6.55 +more CPU than is available - the utilisation should be kept to
    6.56 +slightly less than 100\% in order to ensure predictable behaviour).
    6.57 +
    6.58  \subsection{Per-domain parameters}
    6.59  
    6.60  \begin{description}
    6.61 @@ -1130,6 +1139,8 @@ guarantees for latency-sensitive domains
    6.62  
    6.63  \section{Round Robin}
    6.64  
    6.65 +{\tt sched=rrobin } \\
    6.66 +
    6.67  The Round Robin scheduler is included as a simple demonstration of
    6.68  Xen's internal scheduler API.  It is not intended for production use
    6.69  --- the other schedulers included are all more general and should give
    6.70 @@ -1562,3 +1573,31 @@ simply copying the image file.  Once thi
    6.71  image-specific settings (hostname, network settings, etc).
    6.72  
    6.73  \end{document}
    6.74 +
    6.75 +
    6.76 +%% Other stuff without a home
    6.77 +
    6.78 +%% Instructions Re Python API
    6.79 +
    6.80 +%% Other Control Tasks using Python
    6.81 +%% ================================
    6.82 +
    6.83 +%% A Python module 'Xc' is installed as part of the tools-install
    6.84 +%% process. This can be imported, and an 'xc object' instantiated, to
    6.85 +%% provide access to privileged command operations:
    6.86 +
    6.87 +%% # import Xc
    6.88 +%% # xc = Xc.new()
    6.89 +%% # dir(xc)
    6.90 +%% # help(xc.domain_create)
    6.91 +
    6.92 +%% In this way you can see that the class 'xc' contains useful
    6.93 +%% documentation for you to consult.
    6.94 +
    6.95 +%% A further package of useful routines (xenctl) is also installed:
    6.96 +
    6.97 +%% # import xenctl.utils
    6.98 +%% # help(xenctl.utils)
    6.99 +
   6.100 +%% You can use these modules to write your own custom scripts or you can
   6.101 +%% customise the scripts supplied in the Xen distribution.