ia64/xen-unstable

changeset 1445:1d1e0a1795b8

bitkeeper revision 1.945 (40c72fd2I_g1_WTlzwSBghpabzfPew)

Documentation updates.
author kaf24@scramble.cl.cam.ac.uk
date Wed Jun 09 15:42:10 2004 +0000 (2004-06-09)
parents e66d4ccb01af
children f3123052268f
files .rootkeys docs/Console-HOWTO.txt docs/HOWTOs/Console-HOWTO docs/HOWTOs/Sched-HOWTO docs/HOWTOs/VBD-HOWTO docs/HOWTOs/Xen-HOWTO docs/HOWTOs/XenDebugger-HOWTO docs/Sched-HOWTO.txt docs/VBD-HOWTO.txt docs/Xen-HOWTO.txt docs/pdb.txt
line diff
     1.1 --- a/.rootkeys	Wed Jun 09 12:41:57 2004 +0000
     1.2 +++ b/.rootkeys	Wed Jun 09 15:42:10 2004 +0000
     1.3 @@ -6,15 +6,15 @@ 3eb788d6Kleck_Cut0ouGneviGzliQ Makefile
     1.4  3f5ef5a24IaQasQE2tyMxrfxskMmvw README
     1.5  3f5ef5a2l4kfBYSQTUaOyyD76WROZQ README.CD
     1.6  3f69d8abYB1vMyD_QVDvzxy5Zscf1A TODO
     1.7 -405ef604hIZH5pGi2uwlrlSvUMrutw docs/Console-HOWTO.txt
     1.8 +405ef604hIZH5pGi2uwlrlSvUMrutw docs/HOWTOs/Console-HOWTO
     1.9 +4083e798FbE1MIsQaIYvjnx1uvFhBg docs/HOWTOs/Sched-HOWTO
    1.10 +40083bb4LVQzRqA3ABz0__pPhGNwtA docs/HOWTOs/VBD-HOWTO
    1.11 +4021053fmeFrEyPHcT8JFiDpLNgtHQ docs/HOWTOs/Xen-HOWTO
    1.12 +4022a73cgxX1ryj1HgS-IwwB6NUi2A docs/HOWTOs/XenDebugger-HOWTO
    1.13  3f9e7d53iC47UnlfORp9iC1vai6kWw docs/Makefile
    1.14 -4083e798FbE1MIsQaIYvjnx1uvFhBg docs/Sched-HOWTO.txt
    1.15 -40083bb4LVQzRqA3ABz0__pPhGNwtA docs/VBD-HOWTO.txt
    1.16 -4021053fmeFrEyPHcT8JFiDpLNgtHQ docs/Xen-HOWTO.txt
    1.17  3f9e7d60PWZJeVh5xdnk0nLUdxlqEA docs/eps/xenlogo.eps
    1.18  3f9e7d63lTwQbp2fnx7yY93epWS-eQ docs/figs/dummy
    1.19  3f9e7d564bWFB-Czjv1qdmE6o0GqNg docs/interface.tex
    1.20 -4022a73cgxX1ryj1HgS-IwwB6NUi2A docs/pdb.txt
    1.21  3f9e7d58t7N6hjjBMxSn-NMxBphchA docs/style.tex
    1.22  3f9e7d5bz8BwYkNuwyiPVu7JJG441A docs/xenstyle.cls
    1.23  3f815144d1vI2777JI-dO4wk49Iw7g extras/mini-os/Makefile
     2.1 --- a/docs/Console-HOWTO.txt	Wed Jun 09 12:41:57 2004 +0000
     2.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     2.3 @@ -1,85 +0,0 @@
     2.4 -    New console I/O infrastructure in Xen 1.3
     2.5 -    =========================================
     2.6 -
     2.7 -    Keir Fraser, University of Cambridge, 3rd June 2004
     2.8 -
     2.9 - I thought I'd write a quick note about using the new console I/O
    2.10 - infrastructure in Xen 1.3. Significant new features compared with 1.2,
    2.11 - and with older revisions of 1.3, include:
    2.12 -  - bi-directional console access
    2.13 -  - log in to a Xenolinux guest OS via its virtual console
    2.14 -  - a new terminal client (replaces the use of telnet in character mode)
    2.15 -  - proper handling of terminal emulation
    2.16 -
    2.17 -Accessing the virtual console from within the guest OS
    2.18 -------------------------------------------------------
    2.19 - Every Xenolinux instance owns a bidirectional 'virtual console'.
    2.20 - The device node to which this console is attached can be configured
    2.21 - by specifying 'xencons=' on the OS command line:
    2.22 -  'xencons=off'  --> disable virtual console
    2.23 -  'xencons=tty'  --> attach console to /dev/tty1 (tty0 at boot-time)
    2.24 -  'xencons=ttyS' --> attach console to /dev/ttyS0
    2.25 - The default is to attach to /dev/tty1, and also to create dummy
    2.26 - devices for /dev/tty2-63 to avoid warnings from many standard distro
    2.27 - startup scripts. The exception is domain 0, which by default attaches
    2.28 - to /dev/ttyS0.
    2.29 -
    2.30 -Domain 0 virtual console
    2.31 -------------------------
    2.32 - The virtual console for domain 0 is shared with Xen's console. For
    2.33 - example, if you specify 'console=com1' as a boot parameter to Xen,
    2.34 - then domain 0 will have bi-directional access to the primary serial
    2.35 - line. Boot-time messages can be directed to the virtual console by
    2.36 - specifying 'console=ttyS0' as a boot parameter to Xenolinux.
    2.37 -
    2.38 -Connecting to the virtual console
    2.39 ----------------------------------
    2.40 - Domain 0 console may be accessed using the supplied 'miniterm' program
    2.41 - if raw serial access is desired. If the Xen machine is connected to a
    2.42 - serial-port server, then the supplied 'xencons' program may be used to
    2.43 - connect to the appropriate TCP port on the server:
    2.44 -  # xencons <server host> <server port>
    2.45 -
    2.46 -Logging in via virtual console
    2.47 -------------------------------
    2.48 - It is possible to log in to a guest OS via its virtual console if a
    2.49 - 'getty' is running. In most domains the virtual console is named tty1
    2.50 - so standard startup scripts and /etc/inittab should work
    2.51 - fine. Furthermore, tty2-63 are created as dummy console devices to
    2.52 - suppress warnings from standard startup scripts. If the OS has
    2.53 - attached the virtual console to /dev/ttyS0 then you will need to
    2.54 - start a 'mingetty' on that device node.
    2.55 -
    2.56 -Virtual console for other domains
    2.57 ----------------------------------
    2.58 - Every guest OS has a virtual console that is accessible via
    2.59 - 'console=tty0' at boot time (or 'console=xencons0' for domain 0), and
    2.60 - mingetty running on /dev/tty1 (or /dev/xen/cons for domain 0).
    2.61 - However, domains other than domain 0 do not have access to the
    2.62 - physical serial line. Instead, their console data is sent to and from
    2.63 - a control daemon running in domain 0. When properly installed, this
    2.64 - daemon can be started from the init scripts (e.g., rc.local):
    2.65 -  # /usr/sbin/xend start
    2.66 -
    2.67 - Alternatively, Redhat- and LSB-compatible Linux installations can use
    2.68 - the provided init.d script. To integrate startup and shutdown of xend
    2.69 - in such a system, you will need to run a few configuration commands:
    2.70 -  # chkconfig --add xend
    2.71 -  # chkconfig --level 35 xend on
    2.72 -  # chkconfig --level 01246 xend off
    2.73 - This will avoid the need to run xend manually from rc.local, for example.
    2.74 -
    2.75 - Note that, when a domain is created using xc_dom_create.py, xend MUST
    2.76 - be running. If everything is set up correctly then xc_dom_create will
    2.77 - print the local TCP port to which you should connect to perform
    2.78 - console I/O. A suitable console client is provided by the Python
    2.79 - module xenctl.console_client: running this module from the command
    2.80 - line with <host> and <port> parameters will start a terminal
    2.81 - session. This module is also installed as /usr/bin/xencons, from a
    2.82 - copy in tools/misc/xencons. For example:
    2.83 -  # xencons localhost 9600
    2.84 -
    2.85 - An alternative to manually running a terminal client is to specify
    2.86 - '-c' to xc_dom_create.py, or add 'auto_console=True' to the defaults
    2.87 - file. This will cause xc_dom_create.py to automatically become the
    2.88 - console terminal after starting the domain.
     3.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     3.2 +++ b/docs/HOWTOs/Console-HOWTO	Wed Jun 09 15:42:10 2004 +0000
     3.3 @@ -0,0 +1,85 @@
     3.4 +    New console I/O infrastructure in Xen 1.3
     3.5 +    =========================================
     3.6 +
     3.7 +    Keir Fraser, University of Cambridge, 3rd June 2004
     3.8 +
     3.9 + I thought I'd write a quick note about using the new console I/O
    3.10 + infrastructure in Xen 1.3. Significant new features compared with 1.2,
    3.11 + and with older revisions of 1.3, include:
    3.12 +  - bi-directional console access
    3.13 +  - log in to a Xenolinux guest OS via its virtual console
    3.14 +  - a new terminal client (replaces the use of telnet in character mode)
    3.15 +  - proper handling of terminal emulation
    3.16 +
    3.17 +Accessing the virtual console from within the guest OS
    3.18 +------------------------------------------------------
    3.19 + Every Xenolinux instance owns a bidirectional 'virtual console'.
    3.20 + The device node to which this console is attached can be configured
    3.21 + by specifying 'xencons=' on the OS command line:
    3.22 +  'xencons=off'  --> disable virtual console
    3.23 +  'xencons=tty'  --> attach console to /dev/tty1 (tty0 at boot-time)
    3.24 +  'xencons=ttyS' --> attach console to /dev/ttyS0
    3.25 + The default is to attach to /dev/tty1, and also to create dummy
    3.26 + devices for /dev/tty2-63 to avoid warnings from many standard distro
    3.27 + startup scripts. The exception is domain 0, which by default attaches
    3.28 + to /dev/ttyS0.
    3.29 +
    3.30 +Domain 0 virtual console
    3.31 +------------------------
    3.32 + The virtual console for domain 0 is shared with Xen's console. For
    3.33 + example, if you specify 'console=com1' as a boot parameter to Xen,
    3.34 + then domain 0 will have bi-directional access to the primary serial
    3.35 + line. Boot-time messages can be directed to the virtual console by
    3.36 + specifying 'console=ttyS0' as a boot parameter to Xenolinux.
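
 For example, a GRUB entry along the following lines gives Xen and
 domain 0 a shared console on the first serial line (the image paths
 and root device shown here are placeholders, not canonical values):

```
title Xen / XenoLinux (serial console)
kernel /boot/xen.gz console=com1
module /boot/xenolinux.gz root=/dev/sda1 console=ttyS0
```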
    3.37 +
    3.38 +Connecting to the virtual console
    3.39 +---------------------------------
    3.40 + Domain 0 console may be accessed using the supplied 'miniterm' program
    3.41 + if raw serial access is desired. If the Xen machine is connected to a
    3.42 + serial-port server, then the supplied 'xencons' program may be used to
    3.43 + connect to the appropriate TCP port on the server:
    3.44 +  # xencons <server host> <server port>
    3.45 +
    3.46 +Logging in via virtual console
    3.47 +------------------------------
    3.48 + It is possible to log in to a guest OS via its virtual console if a
    3.49 + 'getty' is running. In most domains the virtual console is named tty1
    3.50 + so standard startup scripts and /etc/inittab should work
    3.51 + fine. Furthermore, tty2-63 are created as dummy console devices to
    3.52 + suppress warnings from standard startup scripts. If the OS has
    3.53 + attached the virtual console to /dev/ttyS0 then you will need to
    3.54 + start a 'mingetty' on that device node.
    3.55 +
    3.56 +Virtual console for other domains
    3.57 +---------------------------------
    3.58 + Every guest OS has a virtual console that is accessible via
    3.59 + 'console=tty0' at boot time (or 'console=xencons0' for domain 0), and
    3.60 + mingetty running on /dev/tty1 (or /dev/xen/cons for domain 0).
    3.61 + However, domains other than domain 0 do not have access to the
    3.62 + physical serial line. Instead, their console data is sent to and from
    3.63 + a control daemon running in domain 0. When properly installed, this
    3.64 + daemon can be started from the init scripts (e.g., rc.local):
    3.65 +  # /usr/sbin/xend start
    3.66 +
    3.67 + Alternatively, Redhat- and LSB-compatible Linux installations can use
    3.68 + the provided init.d script. To integrate startup and shutdown of xend
    3.69 + in such a system, you will need to run a few configuration commands:
    3.70 +  # chkconfig --add xend
    3.71 +  # chkconfig --level 35 xend on
    3.72 +  # chkconfig --level 01246 xend off
    3.73 + This will avoid the need to run xend manually from rc.local, for example.
    3.74 +
    3.75 + Note that, when a domain is created using xc_dom_create.py, xend MUST
    3.76 + be running. If everything is set up correctly then xc_dom_create will
    3.77 + print the local TCP port to which you should connect to perform
    3.78 + console I/O. A suitable console client is provided by the Python
    3.79 + module xenctl.console_client: running this module from the command
    3.80 + line with <host> and <port> parameters will start a terminal
    3.81 + session. This module is also installed as /usr/bin/xencons, from a
    3.82 + copy in tools/misc/xencons. For example:
    3.83 +  # xencons localhost 9600
    3.84 +
    3.85 + An alternative to manually running a terminal client is to specify
    3.86 + '-c' to xc_dom_create.py, or add 'auto_console=True' to the defaults
    3.87 + file. This will cause xc_dom_create.py to automatically become the
    3.88 + console terminal after starting the domain.
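
 At heart, the terminal session that xenctl.console_client provides is
 a loop shuttling bytes between the local terminal and the domain's
 console TCP port. A minimal sketch of that core follows; the function
 names here are invented for illustration, and the real module also
 handles terminal modes and escape sequences:

```python
import select
import socket

def open_console(host, port):
    """Connect to the TCP console port printed by xc_dom_create.
    (Hypothetical helper; the real client is xenctl.console_client.)"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock

def relay_once(console, keystrokes=b"", timeout=0.5):
    """Send any pending keystrokes to the console, then return whatever
    console output is waiting (empty bytes if the domain printed nothing)."""
    if keystrokes:
        console.sendall(keystrokes)
    ready, _, _ = select.select([console], [], [], timeout)
    return console.recv(4096) if ready else b""
```

 A real client calls such a relay in a loop, with stdin switched to raw
 mode so that control characters reach the guest's getty.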
     4.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.2 +++ b/docs/HOWTOs/Sched-HOWTO	Wed Jun 09 15:42:10 2004 +0000
     4.3 @@ -0,0 +1,135 @@
     4.4 +Xen Scheduler HOWTO
     4.5 +===================
     4.6 +
     4.7 +by Mark Williamson
     4.8 +(c) 2004 Intel Research Cambridge
     4.9 +
    4.10 +
    4.11 +Introduction
    4.12 +------------
    4.13 +
    4.14 +Xen offers a choice of CPU schedulers.  All available schedulers are
    4.15 +included in Xen at compile time and the administrator may select a
    4.16 +particular scheduler using a boot-time parameter to Xen.  It is
    4.17 +expected that administrators will choose the scheduler most
    4.18 +appropriate to their application and configure the machine to boot
    4.19 +with that scheduler.
    4.20 +
    4.21 +Note: the default scheduler is the Borrowed Virtual Time (BVT)
    4.22 +scheduler which was also used in previous releases of Xen.  No
    4.23 +configuration changes are required to keep using this scheduler.
    4.24 +
    4.25 +This file provides a brief description of the CPU schedulers available
    4.26 +in Xen, what they are useful for and the parameters that are used to
    4.27 +configure them.  This information is necessarily fairly technical at
    4.28 +the moment.  The recommended way to fully understand the scheduling
    4.29 +algorithms is to read the relevant research papers.
    4.30 +
    4.31 +The interface to the schedulers is basically "raw" at the moment,
    4.32 +without sanity checking - administrators should be careful when
    4.33 +setting the parameters since it is possible for a mistake to hang
    4.34 +domains, or the entire system (in particular, double check parameters
    4.35 +for sanity and make sure that DOM0 will get enough CPU time to remain
    4.36 +usable).  Note that xc_dom_control.py takes time values in
    4.37 +nanoseconds.
    4.38 +
    4.39 +Future tools will implement friendlier control interfaces.
    4.40 +
    4.41 +
    4.42 +Borrowed Virtual Time (BVT)
    4.43 +---------------------------
    4.44 +
    4.45 +All releases of Xen have featured the BVT scheduler, which is used to
    4.46 +provide proportional fair shares of the CPU based on weights assigned
    4.47 +to domains.  BVT is "work conserving" - the CPU will never be left
    4.48 +idle if there are runnable tasks.
    4.49 +
    4.50 +BVT uses "virtual time" to make decisions on which domain should be
    4.51 +scheduled on the processor.  Each time a scheduling decision is
    4.52 +required, BVT evaluates the "Effective Virtual Time" of all domains
    4.53 +and then schedules the domain with the least EVT.  Domains are allowed
    4.54 +to "borrow" virtual time by "time warping", which reduces their EVT by
    4.55 +a certain amount, so that they may be scheduled sooner.  In order to
    4.56 +maintain long term fairness, there are limits on when a domain can
    4.57 +time warp and for how long.  [ For more details read the SOSP'99 paper
    4.58 +by Duda and Cheriton ]
    4.59 +
    4.60 +In the Xen implementation, domains time warp when they unblock, so
    4.61 +that domain wakeup latencies are reduced.
    4.62 +
    4.63 +The BVT algorithm uses the following per-domain parameters (set using
    4.64 +xc_dom_control.py cpu_bvtset):
    4.65 +
    4.66 +* mcuadv - the MCU (Minimum Charging Unit) advance determines the
    4.67 +           proportional share of the CPU that a domain receives.  It
     4.68 +           is set in inverse proportion to a domain's sharing weight.
    4.69 +* warp   - the amount of "virtual time" the domain is allowed to warp
    4.70 +           backwards
    4.71 +* warpl  - the warp limit is the maximum time a domain can run warped for
    4.72 +* warpu  - the unwarp requirement is the minimum time a domain must
    4.73 +           run unwarped for before it can warp again
    4.74 +
    4.75 +BVT also has the following global parameter (set using
    4.76 +xc_dom_control.py cpu_bvtslice):
    4.77 +
    4.78 +* ctx_allow - the context switch allowance is similar to the "quantum"
    4.79 +              in traditional schedulers.  It is the minimum time that
     4.80 +              a scheduled domain will be allowed to run before being
    4.81 +              pre-empted.  This prevents thrashing of the CPU.
    4.82 +
    4.83 +BVT can now be selected by passing the 'sched=bvt' argument to Xen at
    4.84 +boot-time and is the default scheduler if no 'sched' argument is
    4.85 +supplied.
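
Since a domain's share of the CPU is inversely proportional to its
mcuadv, the resulting split can be computed directly.  A small sketch
(illustrative only; Xen itself does this accounting in MCUs inside the
scheduler, not with floating point):

```python
def bvt_shares(mcuadv):
    """Map {domain: mcuadv} to {domain: fraction of the CPU}, using the
    inverse proportionality between mcuadv and share described above."""
    weights = {dom: 1.0 / adv for dom, adv in mcuadv.items()}
    total = sum(weights.values())
    return {dom: w / total for dom, w in weights.items()}

# A domain with half the mcuadv of another receives twice the CPU:
shares = bvt_shares({"dom0": 2, "dom1": 4})
```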
    4.86 +
    4.87 +Atropos
    4.88 +-------
    4.89 +
    4.90 +Atropos is a scheduler originally developed for the Nemesis multimedia
    4.91 +operating system.  Atropos can be used to reserve absolute shares of
    4.92 +the CPU.  It also includes some features to improve the efficiency of
    4.93 +domains that block for I/O and to allow spare CPU time to be shared
    4.94 +out.
    4.95 +
    4.96 +The Atropos algorithm has the following parameters for each domain
    4.97 +(set using xc_dom_control.py cpu_atropos_set):
    4.98 +
    4.99 + * slice    - The length of time per period that a domain is guaranteed.
   4.100 + * period   - The period over which a domain is guaranteed to receive
   4.101 +              its slice of CPU time.
   4.102 + * latency  - The latency hint is used to control how soon after
   4.103 +              waking up a domain should be scheduled.
   4.104 + * xtratime - This is a true (1) / false (0) flag that specifies whether
    4.105 +              a domain should be allowed a share of the system slack time.
   4.106 +
   4.107 +Every domain has an associated period and slice.  The domain should
   4.108 +receive 'slice' nanoseconds every 'period' nanoseconds.  This allows
   4.109 +the administrator to configure both the absolute share of the CPU a
   4.110 +domain receives and the frequency with which it is scheduled.  When
   4.111 +domains unblock, their period is reduced to the value of the latency
   4.112 +hint (the slice is scaled accordingly so that they still get the same
   4.113 +proportion of the CPU).  For each subsequent period, the slice and
   4.114 +period times are doubled until they reach their original values.
   4.115 +
   4.116 +Atropos is selected by adding 'sched=atropos' to Xen's boot-time
   4.117 +arguments.
   4.118 +
   4.119 +Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
   4.120 +more CPU than is available - the utilisation should be kept to
   4.121 +slightly less than 100% in order to ensure predictable behaviour).
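
The overcommitment warning above is easy to check mechanically: the
reserved fraction of the CPU is the sum of slice/period over all
domains.  A sketch (times in nanoseconds, as xc_dom_control.py expects;
the 0.95 headroom figure is an arbitrary illustration, not a limit
specified by Xen):

```python
def atropos_utilisation(reservations):
    """Sum slice/period over a list of (slice_ns, period_ns) pairs."""
    return sum(slice_ns / period_ns for slice_ns, period_ns in reservations)

def safe_to_admit(reservations, headroom=0.95):
    """True if the reservations leave some slack, per the note above."""
    return atropos_utilisation(reservations) <= headroom

# 30ms every 100ms, plus 50ms every 200ms: 0.3 + 0.25 = 0.55 of the CPU.
util = atropos_utilisation([(30_000_000, 100_000_000),
                            (50_000_000, 200_000_000)])
```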
   4.122 +
   4.123 +Round-Robin
   4.124 +-----------
   4.125 +
   4.126 +The Round-Robin scheduler is provided as a simple example of Xen's
   4.127 +internal scheduler API.  For production systems, one of the other
   4.128 +schedulers should be used, since they are more flexible and more
   4.129 +efficient.
   4.130 +
   4.131 +The Round-robin scheduler has one global parameter (set using
   4.132 +xc_dom_control.py cpu_rrobin_slice):
   4.133 +
   4.134 + * rr_slice - The time for which each domain runs before the next
   4.135 +              scheduling decision is made.
   4.136 +
   4.137 +The Round-Robin scheduler can be selected by adding 'sched=rrobin' to
   4.138 +Xen's boot-time arguments.
     5.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     5.2 +++ b/docs/HOWTOs/VBD-HOWTO	Wed Jun 09 15:42:10 2004 +0000
     5.3 @@ -0,0 +1,437 @@
     5.4 +Virtual Block Devices / Virtual Disks in Xen - HOWTO
     5.5 +====================================================
     5.6 +
     5.7 +HOWTO for Xen 1.2
     5.8 +
     5.9 +Mark A. Williamson (mark.a.williamson@intel.com)
    5.10 +(C) Intel Research Cambridge 2004
    5.11 +
    5.12 +Introduction
    5.13 +------------
    5.14 +
    5.15 +This document describes the new Virtual Block Device (VBD) and Virtual Disk
    5.16 +features available in Xen release 1.2.  First, a brief introduction to some
    5.17 +basic disk concepts on a Xen system:
    5.18 +
    5.19 +Virtual Block Devices (VBDs):
    5.20 +	VBDs are the disk abstraction provided by Xen.  All XenoLinux disk accesses
    5.21 +	go through the VBD driver.  Using the VBD functionality, it is possible
    5.22 +	to selectively grant domains access to portions of the physical disks
    5.23 +	in the system.
    5.24 +
    5.25 +	A virtual block device can also consist of multiple extents from the
    5.26 +	physical disks in the system, allowing them to be accessed as a single
    5.27 +	uniform device from the domain with access to that VBD.  The
    5.28 +	functionality is somewhat similar to that underpinning LVM, since
    5.29 +	you can combine multiple regions from physical devices into a single
    5.30 +	logical device, from the point of view of a guest virtual machine.
    5.31 +
    5.32 +	Everyone who boots Xen / XenoLinux from a hard drive uses VBDs
    5.33 +	but for some uses they can almost be ignored.
    5.34 +
    5.35 +Virtual Disks (VDs):
    5.36 +	VDs are an abstraction built on top of the functionality provided by
    5.37 +	VBDs.  The VD management code maintains a "free pool" of disk space on
    5.38 +	the system that has been reserved for use with VDs.  The tools can
    5.39 +	automatically allocate collections of extents from this free pool to
    5.40 +	create "virtual disks" on demand.
    5.41 +
    5.42 +	VDs can then be used just like normal disks by domains.  VDs appear
    5.43 +	just like any other disk to guest domains, since they use the same VBD
    5.44 +	abstraction, as provided by Xen.
    5.45 +
    5.46 +	Using VDs is optional, since it's always possible to dedicate
    5.47 +	partitions, or entire disks to your virtual machines.  VDs are handy
    5.48 +	when you have a dynamically changing set of virtual machines and you
    5.49 +	don't want to have to keep repartitioning in order to provide them with
    5.50 +	disk space.
    5.51 +
    5.52 +	Virtual Disks are rather like "logical volumes" in LVM.
    5.53 +
    5.54 +If that didn't all make sense, it doesn't matter too much ;-)  Using the
    5.55 +functionality is fairly straightforward and some examples will clarify things.
    5.56 +The text below expands a bit on the concepts involved, finishing up with a
    5.57 +walk-through of some simple virtual disk management tasks.
    5.58 +
    5.59 +
    5.60 +Virtual Block Devices
    5.61 +---------------------
    5.62 +
    5.63 +Before covering VD management, it's worth discussing some aspects of the VBD
    5.64 +functionality that will be useful to know.
    5.65 +
    5.66 +A VBD is made up of a number of extents from physical disk devices.  The
    5.67 +extents for a VBD don't have to be contiguous, or even on the same device.  Xen
    5.68 +performs address translation so that they appear as a single contiguous
    5.69 +device to a domain.
    5.70 +
    5.71 +When the VBD layer is used to give access to entire drives or entire
    5.72 +partitions, the VBDs simply consist of a single extent that corresponds to the
    5.73 +drive or partition used.  Lists of extents are usually only used when virtual
    5.74 +disks (VDs) are being used.
    5.75 +
    5.76 +Xen 1.2 and its associated XenoLinux release support automatic registration /
    5.77 +removal of VBDs.  It has always been possible to add a VBD to a running
    5.78 +XenoLinux domain but it was then necessary to run the "xen_vbd_refresh" tool in
    5.79 +order for the new device to be detected.  Nowadays, when a VBD is added, the
     5.80 +domain it's added to automatically registers the disk; no special action
     5.81 +by the user is required.
    5.82 +
    5.83 +Note that it is possible to use the VBD functionality to allow multiple domains
    5.84 +write access to the same areas of disk.  This is almost always a bad thing!
    5.85 +The provided example scripts for creating domains do their best to check that
    5.86 +disk areas are not shared unsafely and will catch many cases of this.  Setting
    5.87 +the vbd_expert variable in config files for xc_dom_create.py controls how
    5.88 +unsafe it allows VBD mappings to be - 0 (read only sharing allowed) should be
    5.89 +right for most people ;-).  Level 1 attempts to allow at most one writer to any
    5.90 +area of disk.  Level 2 allows multiple writers (i.e. anything!).
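
In an xc_dom_create.py configuration file (which is plain Python), that
check level might be set as follows (a sketch; the other settings a
real defaults file contains are omitted):

```python
# VBD sharing-safety level, as described above:
#   0 - only read-only sharing allowed (the safe default for most people)
#   1 - at most one writer per area of disk
#   2 - no checks: multiple writers allowed
vbd_expert = 0
```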
    5.91 +
    5.92 +
    5.93 +Virtual Disk Management
    5.94 +-----------------------
    5.95 +
    5.96 +The VD management code runs entirely in user space.  The code is written in
    5.97 +Python and can therefore be accessed from custom scripts, as well as from the
    5.98 +convenience scripts provided.  The underlying VD database is a SQLite database
    5.99 +in /var/db/xen_vdisks.sqlite.
   5.100 +
   5.101 +Most virtual disk management can be performed using the xc_vd_tool.py script
   5.102 +provided in the tools/examples/ directory of the source tree.  It supports the
   5.103 +following operations:
   5.104 +
   5.105 +initialise -	     "Formats" a partition or disk device for use storing
   5.106 +		     virtual disks.  This does not actually write data to the
   5.107 +		     specified device.  Rather, it adds the device to the VD
   5.108 +		     free-space pool, for later allocation.
   5.109 +
   5.110 +		     You should only add devices that correspond directly to
   5.111 +		     physical disks / partitions - trying to use a VBD that you
   5.112 +		     have created yourself as part of the free space pool has
   5.113 +		     undefined (possibly nasty) results.
   5.114 +
   5.115 +create -	     Creates a virtual disk of specified size by allocating space
   5.116 +		     from the free space pool.  The virtual disk is identified
   5.117 +		     in future by the unique ID returned by this script.
   5.118 +
   5.119 +		     The disk can be given an expiry time, if desired.  For
   5.120 +		     most users, the best idea is to specify a time of 0 (which
   5.121 +		     has the special meaning "never expire") and then
   5.122 +		     explicitly delete the VD when finished with it -
   5.123 +		     otherwise, VDs will disappear if allowed to expire.
   5.124 +
   5.125 +delete -	     Explicitly delete a VD.  Makes it disappear immediately!
   5.126 +
   5.127 +setexpiry -	     Allows the expiry time of a (not yet expired) virtual disk
    5.128 +		     to be modified.  Be aware that the VD will disappear when the
   5.129 +		     time has expired.
   5.130 +
   5.131 +enlarge -            Increase the allocation of space to a virtual disk.
   5.132 +		     Currently this will not be immediately visible to running
   5.133 +		     domain(s) using it.  You can make it visible by destroying
   5.134 +		     the corresponding VBDs and then using xc_dom_control.py to
   5.135 +		     add them to the domain again.  Note: doing this to
   5.136 +		     filesystems that are in use may well cause errors in the
    5.137 +		     guest Linux, or even a crash, although it will probably be
   5.138 +		     OK if you stop the domain before updating the VBD and
   5.139 +		     restart afterwards.
   5.140 +
   5.141 +import -	     Allocate a virtual disk and populate it with the contents of
   5.142 +		     some disk file.  This can be used to import root file system
   5.143 +		     images or to restore backups of virtual disks, for instance.
   5.144 +
   5.145 +export -	     Write the contents of a virtual disk out to a disk file.
   5.146 +		     Useful for creating disk images for use elsewhere, such as
   5.147 +		     standard root file systems and backups.
   5.148 +
   5.149 +list -		     List the non-expired virtual disks currently available in the
   5.150 +		     system.
   5.151 +
   5.152 +undelete -	     Attempts to recover an expired (or deleted) virtual disk.
   5.153 +
   5.154 +freespace -	     Get the free space (in megabytes) available for allocating
   5.155 +		     new virtual disk extents.
   5.156 +
   5.157 +The functionality provided by these scripts is also available directly from
   5.158 +Python functions in the xenctl.utils module - you can use this functionality in
   5.159 +your own scripts.
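
To give a feel for what those operations do behind the scenes, here is
a toy model of the free-pool bookkeeping described in this section.  It
is purely illustrative: the real code keeps its records in the SQLite
database, and the class and method names here are invented:

```python
class ToyVDPool:
    """Illustrative model: space is tracked in extents, and creating a
    VD simply moves extents from the free pool to that disk's record."""

    def __init__(self):
        self.free_extents = 0
        self.disks = {}          # vd_id -> number of extents allocated
        self._next_id = 0

    def initialise(self, device_extents):
        # "Formatting" a device only records its space as free;
        # nothing is written to the device itself.
        self.free_extents += device_extents

    def create(self, extents):
        if extents > self.free_extents:
            raise ValueError("free pool exhausted")
        self.free_extents -= extents
        vd_id = self._next_id
        self._next_id += 1
        self.disks[vd_id] = extents
        return vd_id

    def delete(self, vd_id):
        # Deleted extents become free space, liable to reallocation.
        self.free_extents += self.disks.pop(vd_id)

pool = ToyVDPool()
pool.initialise(100)
vd = pool.create(40)
```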
   5.160 +
   5.161 +Populating VDs:
   5.162 +
   5.163 +Once you've created a VD, you might want to populate it from DOM0 (for
   5.164 +instance, to put a root file system onto it for a guest domain).  This can be
   5.165 +done by creating a VBD for dom0 to access the VD through - this is discussed
   5.166 +below.
   5.167 +
   5.168 +More detail on how virtual disks work:
   5.169 +
   5.170 +When you "format" a device for virtual disks, the device is logically split up
   5.171 +into extents.  These extents are recorded in the Virtual Disk Management
   5.172 +database in /var/db/xen_vdisks.sqlite.
   5.173 +
    5.174 +When you use xc_vd_tool.py to create a virtual disk, some of the extents in
   5.175 +the free space pool are reallocated for that virtual disk and a record for that
   5.176 +VD is added to the database.  When VDs are mapped into domains as VBDs, the
   5.177 +system looks up the allocated extents for the virtual disk in order to set up
   5.178 +the underlying VBD.
   5.179 +
   5.180 +Free space is identified by the fact that it belongs to an "expired" disk.
    5.181 +When xc_vd_tool.py "initialises" a real device into the free pool, it
   5.182 +actually divides the device into extents and adds them to an already-expired
   5.183 +virtual disk.  The allocated device is not written to during this operation -
   5.184 +its availability is simply recorded into the virtual disks database.
   5.185 +
   5.186 +If you set an expiry time on a VD, its extents will be liable to be reallocated
   5.187 +to new VDs as soon as that expiry time runs out.  Therefore, be careful when
   5.188 +setting expiry times!  Many users will find it simplest to set all VDs to not
   5.189 +expire automatically, then explicitly delete them later on.
   5.190 +
   5.191 +Deleted / expired virtual disks may sometimes be undeleted - currently this
   5.192 +only works when none of the virtual disk's extents have been reallocated to
   5.193 +other virtual disks, since that's the only situation where the disk is likely
   5.194 +to be fully intact.  You should try undeletion as soon as you realise you've
   5.195 +mistakenly deleted (or allowed to expire) a virtual disk.  At some point in the
   5.196 +future, an "unsafe" undelete which can recover what remains of partially
   5.197 +reallocated virtual disks may also be implemented.
   5.198 +
   5.199 +Security note:
   5.200 +
   5.201 +The disk space for VDs is not zeroed when it is initially added to the free
   5.202 +space pool OR when a VD expires OR when a VD is created.  Therefore, if this is
   5.203 +not done manually it is possible for a domain to read a VD to determine what
   5.204 +was written by previous owners of its constituent extents.  If this is a
   5.205 +problem, users should manually clean VDs in some way either on allocation, or
   5.206 +just before deallocation (automated support for this may be added at a later
   5.207 +date).
   5.208 +
   5.209 +
   5.210 +Side note: The xvd* devices
   5.211 +---------------------------
   5.212 +
   5.213 +The examples in this document make frequent use of the xvd* device nodes for
   5.214 +representing virtual block devices.  It is not a requirement to use these with
   5.215 +Xen, since VBDs can be mapped to any IDE or SCSI device node in the system.
    5.216 +Changing the references to xvd* nodes in the examples below to refer to
   5.217 +some unused hd* or sd* node would also be valid.
   5.218 +
   5.219 +They can be useful when accessing VBDs from dom0, since binding VBDs to xvd*
   5.220 +devices will avoid clashes with real IDE or SCSI drives.
   5.221 +
   5.222 +There is a shell script provided in tools/misc/xen-mkdevnodes to create these
   5.223 +nodes.  Specify on the command line the directory that the nodes should be
   5.224 +placed under (e.g. /dev):
   5.225 +
   5.226 +> cd {root of Xen source tree}/tools/misc/
   5.227 +> ./xen-mkdevnodes /dev
   5.228 +
   5.229 +
   5.230 +Dynamically Registering VBDs
   5.231 +----------------------------
   5.232 +
   5.233 +The domain control tool (xc_dom_control.py) includes the ability to add and
   5.234 +remove VBDs to / from running domains.  As usual, the command format is:
   5.235 +
   5.236 +xc_dom_control.py [operation] [arguments]
   5.237 +
   5.238 +The operations (and their arguments) are as follows:
   5.239 +
   5.240 +vbd_add dom uname dev mode - Creates a VBD corresponding to either a physical
   5.241 +		             device or a virtual disk and adds it as a
   5.242 +		             specified device under the target domain, with
   5.243 +		             either read or write access.
   5.244 +
   5.245 +vbd_remove dom dev	   - Removes the VBD associated with a specified device
   5.246 +			     node from the target domain.
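The uname argument uses the same "phy:" / "vd:" naming scheme discussed elsewhere in this document.  As a quick illustration of the argument conventions, here is a minimal, hypothetical checker (mine, not part of the Xen tools):

```python
# Hypothetical validator for vbd_add-style arguments -- an
# illustration of the conventions only, NOT code from the Xen tools.
def check_vbd_add_args(dom, uname, dev, mode):
    if not str(dom).isdigit():
        raise ValueError("dom must be a domain id, e.g. 0")
    # uname names either a physical device or a virtual disk:
    if not (uname.startswith('phy:') or uname.startswith('vd:')):
        raise ValueError("uname must be 'phy:<device>' or 'vd:<id>'")
    if mode not in ('r', 'w'):
        raise ValueError("mode must be 'r' (read-only) or 'w' (read/write)")
    return (int(dom), uname, dev, mode)

# For example, granting DOM0 write access to a virtual disk:
print(check_vbd_add_args(0, 'vd:1234', '/dev/xvda', 'w'))
```

A real invocation still goes through xc_dom_control.py, of course; this just spells out what each argument is expected to look like.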
   5.247 +
   5.248 +These scripts are most useful when populating VDs.  VDs can't be populated
   5.249 +directly, since they don't correspond to real devices.  Using:
   5.250 +
   5.251 +  xc_dom_control.py vbd_add 0 vd:your_vd_id /dev/whatever w
   5.252 +
   5.253 +You can make a virtual disk available to DOM0.  Sensible devices to map VDs to
   5.254 +in DOM0 are the /dev/xvd* nodes, since that makes it obvious that they are Xen
   5.255 +virtual devices that don't correspond to real physical devices.
   5.256 +
   5.257 +You can then format, mount and populate the VD through the nominated device
   5.258 +node.  When you've finished, use:
   5.259 +
   5.260 +  xc_dom_control.py vbd_remove 0 /dev/whatever
   5.261 +
   5.262 +To revoke DOM0's access to it.  It's then ready for use in a guest domain.
   5.263 +
   5.264 +
   5.265 +
   5.266 +You can also use this functionality to grant access to a physical device to a
   5.267 +guest domain - you might use this to temporarily share a partition, or to add
   5.268 +access to a partition that wasn't granted at boot time.
   5.269 +
   5.270 +When playing with VBDs, remember that in general, it is only safe for two
   5.271 +domains to have access to a file system if they both have read-only access.  You
   5.272 +shouldn't be trying to share anything which is writable, even if only by one
   5.273 +domain, unless you're really sure you know what you're doing!
   5.274 +
   5.275 +
   5.276 +Granting access to real disks and partitions
   5.277 +--------------------------------------------
   5.278 +
   5.279 +During the boot process, Xen automatically creates a VBD for each physical disk
   5.280 +and gives Dom0 read / write access to it.  This makes it look like Dom0 has
   5.281 +normal access to the disks, just as if Xen wasn't being used - in reality, even
   5.282 +Dom0 talks to disks through Xen VBDs.
   5.283 +
   5.284 +To give another domain access to a partition or whole disk, you need to
   5.285 +create a corresponding VBD for that partition, for use by that domain.  As for
   5.286 +virtual disks, you can grant access to a running domain, or specify that the
   5.287 +domain should have access when it is first booted.
   5.288 +
   5.289 +To grant access to a physical partition or disk whilst a domain is running, use
   5.290 +the xc_dom_control.py script - the usage is very similar to the case of adding
   5.291 +access to virtual disks to a running domain (described above).  Specify the
   5.292 +device as "phy:device", where device is the name of the device as seen from
   5.293 +domain 0, or from normal Linux without Xen.  For instance:
   5.294 +
   5.295 +> xc_dom_control.py vbd_add 2 phy:hdc /dev/whatever r
   5.296 +
   5.297 +Will grant domain 2 read-only access to the device /dev/hdc (as seen from Dom0
   5.298 +/ normal Linux running on the same machine - i.e. the master drive on the
   5.299 +secondary IDE chain), as /dev/whatever in the target domain.
   5.300 +
   5.301 +Note that you can use this within domain 0 to map disks / partitions to other
   5.302 +device nodes within domain 0.  For instance, you could map /dev/hda to also be
   5.303 +accessible through /dev/xvda.  This is not generally recommended, since if you
   5.304 +(for instance) mount both device nodes read / write you could cause corruption
   5.305 +to the underlying filesystem.  It's also quite confusing ;-)
   5.306 +
   5.307 +To grant a domain access to a partition or disk when it boots, the appropriate
   5.308 +VBD needs to be created before the domain is started.  This can be done very
   5.309 +easily using the tools provided.  To specify this to the xc_dom_create.py tool
   5.310 +(either in a startup script or on the command line) use triples of the format:
   5.311 +
   5.312 +  phy:dev,target_dev,perms
   5.313 +
   5.314 +Where dev is the device name as seen from Dom0, target_dev is the device you
   5.315 +want it to appear as in the target domain and perms is 'w' if you want to give
   5.316 +write privileges, or 'r' otherwise.
   5.317 +
   5.318 +These may either be specified on the command line or in an initialisation
   5.319 +script.  For instance, to grant the same access rights as described by the
   5.320 +command example above, you would use the triple:
   5.321 +
   5.322 +  phy:hdc,/dev/whatever,r
   5.323 +
   5.324 +If you are using a config file, then you should add this triple into the
   5.325 +vbd_list variable, for instance using the line:
   5.326 +
   5.327 +  vbd_list = [ ('phy:hdc', '/dev/whatever', 'r') ]
   5.328 +
   5.329 +(Note that you need to use quotes here, since config files are really small
   5.330 +Python scripts.)
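Because config files are executed as Python, the triple really is just a 3-tuple of strings.  A small hypothetical helper (not part of the Xen tools) shows the correspondence between the command-line triple syntax and the tuple form:

```python
# Hypothetical helper -- NOT part of the Xen tools.  It converts a
# command-line style "uname,target_dev,perms" triple into the tuple
# form that the vbd_list config variable expects.
def triple_to_tuple(triple):
    parts = triple.split(',')
    if len(parts) != 3:
        raise ValueError("expected uname,target_dev,perms: %r" % triple)
    uname, target_dev, perms = [p.strip() for p in parts]
    if perms not in ('r', 'w'):
        raise ValueError("perms must be 'r' or 'w': %r" % perms)
    return (uname, target_dev, perms)

# The example triple from the text becomes:
print(triple_to_tuple('phy:hdc,/dev/whatever,r'))
# -> ('phy:hdc', '/dev/whatever', 'r')
```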
   5.331 +
   5.332 +To specify the mapping on the command line, you'd use the -d switch and supply
   5.333 +the triple as the argument, e.g.:
   5.334 +
   5.335 +> xc_dom_create.py [other arguments] -d phy:hdc,/dev/whatever,r
   5.336 +
   5.337 +(You don't need to explicitly quote things in this case.)
   5.338 +
   5.339 +
   5.340 +Walk-through: Booting a domain from a VD
   5.341 +----------------------------------------
   5.342 +
   5.343 +As an example, here is a sequence of commands you might use to create a virtual
   5.344 +disk, populate it with a root file system and boot a domain from it.  These
   5.345 +steps assume that you've installed the example scripts somewhere on your PATH -
   5.346 +if you haven't done that, you'll need to specify a fully qualified pathname in
   5.347 +the examples below.  It is also assumed that you know how to use the
   5.348 +xc_dom_create.py tool (apart from configuring virtual disks!).
   5.349 +
   5.350 +[ This example is intended only for users of virtual disks (VDs).  You don't
   5.351 +need to follow this example if you'll be booting a domain from a dedicated
   5.352 +partition, since you can create that partition and populate it, directly from
   5.353 +Dom0, as normal. ]
   5.354 +
   5.355 +First, if you haven't done so already, you'll initialise the free space pool by
   5.356 +adding a real partition to it.  The details are stored in the database, so
   5.357 +you'll only need to do it once.  You can also use this command to add further
   5.358 +partitions to the existing free space pool.
   5.359 +
   5.360 +> xc_vd_tool.py format /dev/<real partition>
   5.361 +
   5.362 +Now you'll want to allocate the space for your virtual disk.  Do so using the
   5.363 +following, specifying the size in megabytes.
   5.364 +
   5.365 +> xc_vd_tool.py create <size in megabytes>
   5.366 +
   5.367 +At this point, the program will tell you the virtual disk ID.  Note it down, as
   5.368 +it is how you will identify the virtual device in future.
   5.369 +
   5.370 +If you don't want the VD to be bootable (i.e. you're booting a domain from some
   5.371 +other medium and just want it to be able to access this VD), you can simply add
   5.372 +it to the vbd_list used by xc_dom_create.py, either by putting it in a config
   5.373 +file or by specifying it on the command line.  Formatting / populating of the
   5.374 +VD could then be done from that domain once it's started.
   5.375 +
   5.376 +If you want to boot off your new VD as well then you need to populate it with a
   5.377 +standard Linux root filesystem.  You'll need to temporarily add the VD to DOM0
   5.378 +in order to do this.  To give DOM0 r/w access to the VD, use the following
   5.379 +command line, substituting the ID you got earlier.
   5.380 +
   5.381 +> xc_dom_control.py vbd_add 0 vd:<id> /dev/xvda w
   5.382 +
   5.383 +This attaches the VD to the device /dev/xvda in domain zero, with read / write
   5.384 +privileges - you can use other device nodes if you choose to.
   5.385 +
   5.386 +Now make a filesystem on this device, mount it and populate it with a root
   5.387 +filesystem.  These steps are exactly the same as under normal Linux.  When
   5.388 +you've finished, unmount the filesystem again.
   5.389 +
   5.390 +You should now remove the VD from DOM0.  This will prevent you accidentally
   5.391 +changing it in DOM0, whilst the guest domain is using it (which could cause
   5.392 +filesystem corruption, and confuse Linux).
   5.393 +
   5.394 +> xc_dom_control.py vbd_remove 0 /dev/xvda
   5.395 +
   5.396 +It should now be possible to boot a guest domain from the VD.  To do this, you
   5.397 +should specify the VD's details in some way so that xc_dom_create.py will
   5.398 +be able to set up the corresponding VBD for the domain to access.  If you're
   5.399 +using a config file, you should include:
   5.400 +
   5.401 +  ('vd:<id>', '/dev/whatever', 'w')
   5.402 +
   5.403 +In the vbd_list, substituting the appropriate virtual disk ID, device node and
   5.404 +read / write setting.
   5.405 +
   5.406 +To specify access on the command line, as you start the domain, you would use
   5.407 +the -d switch (note that you don't need to use quote marks here):
   5.408 +
   5.409 +> xc_dom_create.py [other arguments] -d vd:<id>,/dev/whatever,w
   5.410 +
   5.411 +To tell Linux which device to boot from, you should either include:
   5.412 +
   5.413 +  root=/dev/whatever
   5.414 +
   5.415 +in your cmdline_root in the config file, or specify it on the command line,
   5.416 +using the -R option:
   5.417 +
   5.418 +> xc_dom_create.py [other arguments] -R root=/dev/whatever
   5.419 +
   5.420 +That should be it: sit back and watch your domain boot off its virtual disk!
   5.421 +
   5.422 +
   5.423 +Getting help
   5.424 +------------
   5.425 +
   5.426 +The main source of help using Xen is the developer's e-mail list:
   5.427 +<xen-devel@lists.sourceforge.net>.  The developers will help with problems,
   5.428 +listen to feature requests and do bug fixes.  It is, however, helpful if you
   5.429 +can look through the mailing list archives and HOWTOs provided to make sure
   5.430 +your question is not answered there.  If you post to the list, please provide
   5.431 +as much information as possible about your setup and your problem.
   5.432 +
   5.433 +There is also a general Xen FAQ, kindly started by Jan van Rensburg, which (at
   5.434 +time of writing) is located at: <http://xen.epiuse.com/xen-faq.txt>.
   5.435 +
   5.436 +Contributing
   5.437 +------------
   5.438 +
   5.439 +Patches and extra documentation are also welcomed ;-) and should also be posted
   5.440 +to the xen-devel e-mail list.
     6.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     6.2 +++ b/docs/HOWTOs/Xen-HOWTO	Wed Jun 09 15:42:10 2004 +0000
     6.3 @@ -0,0 +1,402 @@
     6.4 +###########################################
     6.5 +Xen HOWTO
     6.6 +
     6.7 +University of Cambridge Computer Laboratory
     6.8 +
     6.9 +http://www.cl.cam.ac.uk/netos/xen
    6.10 +#############################
    6.11 +
    6.12 +
    6.13 +Get Xen Source Code
    6.14 +=============================
    6.15 +
    6.16 +The public master BK repository for the 1.2 release lives at:
    6.17 +'bk://xen.bkbits.net/xeno-1.2.bk'
    6.18 +The current unstable release (1.3) is available at:
    6.19 +'bk://xen.bkbits.net/xeno-unstable.bk'
    6.20 +
    6.21 +To fetch a local copy, first download the BitKeeper tools at:
    6.22 +http://www.bitmover.com/download with username 'bitkeeper' and
    6.23 +password 'get bitkeeper'.
    6.24 +
    6.25 +Then install the tools and run:
    6.26 +# bk clone bk://xen.bkbits.net/xeno-1.2.bk
    6.27 +
    6.28 +Under your current directory, a new directory named 'xeno-1.2.bk' has
    6.29 +been created, which contains all the necessary source code for the
    6.30 +Xen hypervisor and Linux guest OSes.
    6.31 +
    6.32 +To get the newest changes to the repository, run
    6.33 +# cd xeno-1.2.bk
    6.34 +# bk pull
    6.35 +
    6.36 +
    6.37 +Configuring Xen
    6.38 +=============================
    6.39 +
    6.40 +Xen's build configuration is managed via a set of environment
    6.41 +variables. These should be set before invoking make 
    6.42 +(e.g., 'export debug=y; make', 'debug=y make').
    6.43 +
    6.44 +The options that can be configured are as follows (all options default
    6.45 +to 'n' or off):
    6.46 +
    6.47 + debug=y    -- Enable debug assertions and console output.
    6.48 +               (Primarily useful for tracing bugs in Xen).
    6.49 +
    6.50 + debugger=y -- Enable the in-Xen pervasive debugger (PDB).
    6.51 +               This can be used to debug Xen, guest OSes, and
    6.52 +               applications. For more information see the 
    6.53 +               XenDebugger-HOWTO.
    6.54 +
    6.55 + old_drivers=y -- Enable the old hardware-device architecture, in
    6.56 +               which network and block devices are managed by
    6.57 +               Xen. The new (and default) model requires such
    6.58 +               devices to be managed by a suitably-privileged
    6.59 +               guest OS (e.g., within domain 0).
    6.60 +
    6.61 + perfc=y    -- Enable performance-counters for significant events
    6.62 +               within Xen. The counts can be reset or displayed
    6.63 +               on Xen's console via console control keys.
    6.64 +
    6.65 + trace=y    -- Enable per-cpu trace buffers which log a range of
    6.66 +               events within Xen for collection by control
    6.67 +               software.
    6.68 +
    6.69 +
    6.70 +Build Xen
    6.71 +=============================
    6.72 +
    6.73 +Hint: To see how to build Xen and all the control tools, inspect the
    6.74 +tools/misc/xen-clone script in the BK repository. This script can be
    6.75 +used to clone the repository and perform a full build.
    6.76 +
    6.77 +To build Xen manually:
    6.78 +
    6.79 +# cd xeno-1.2.bk/xen
    6.80 +# make clean
    6.81 +# make
    6.82 +
    6.83 +This will (should) produce a file called 'xen' in the current
    6.84 +directory.  This is the ELF 32-bit LSB executable file of Xen.  You
    6.85 +can also find a gzipped version, named 'xen.gz'.
    6.86 +
    6.87 +To install the built files on your server under /usr, type 'make
    6.88 +install' at the root of the BK repository. You will need to be root to
    6.89 +do this!
    6.90 +
    6.91 +Hint: There is also a 'make dist' rule which copies built files to an
    6.92 +install directory just outside the BK repo; if this suits your setup,
    6.93 +go for it.
    6.94 +
    6.95 +
    6.96 +Build Linux as a Xen guest OS
    6.97 +==============================
    6.98 +
    6.99 +This is a little more involved since the repository only contains a
   6.100 +"sparse" tree -- this is essentially an 'overlay' on a standard linux
   6.101 +kernel source tree. It contains only those files currently 'in play'
   6.102 +which are either modified versions of files in the vanilla linux tree,
   6.103 +or brand new files specific to the Xen port.
   6.104 +
   6.105 +So, first you need a vanilla linux-2.4.26 tree, which is located at:
   6.106 +http://www.kernel.org/pub/linux/kernel/v2.4
   6.107 +
   6.108 +Then:
   6.109 +  # mv linux-2.4.26.tar.gz /xeno-1.2.bk
   6.110 +  # cd /xeno-1.2.bk
   6.111 +  # tar -zxvf linux-2.4.26.tar.gz
   6.112 +
   6.113 +You'll find a new directory 'linux-2.4.26' which contains all
   6.114 +the vanilla Linux 2.4.26 kernel source code.
   6.115 +
   6.116 +Hint: You should choose the vanilla linux kernel tree that has the
   6.117 +same version as the "sparse" tree.
   6.118 +
   6.119 +Next, you need to 'overlay' this sparse tree on the full vanilla Linux
   6.120 +kernel tree:
   6.121 +
   6.122 +  # cd /xeno-1.2.bk/xenolinux-2.4.26-sparse
   6.123 +  # ./mkbuildtree ../linux-2.4.26
   6.124 +
   6.125 +Finally, rename the buildtree since it is now a 'xenolinux' buildtree. 
   6.126 +
   6.127 +  # cd /xeno-1.2.bk
   6.128 +  # mv linux-2.4.26 xenolinux-2.4.26
   6.129 +
   6.130 +Now that the buildtree is there, you can build the xenolinux kernel.
   6.131 +The default configuration should work fine for most people (use 'make
   6.132 +oldconfig') but you can customise using one of the other config tools
   6.133 +if you want.
   6.134 +
   6.135 +  # cd /xeno-1.2.bk/xenolinux-2.4.26
   6.136 +  # ARCH=xen make oldconfig   { or menuconfig, or xconfig, or config }  
   6.137 +  # ARCH=xen make dep
   6.138 +  # ARCH=xen make bzImage
   6.139 +
   6.140 +Assuming the build works, you'll end up with
   6.141 +/xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz. This is the
   6.142 +gzipped XenoLinux kernel image.
   6.143 +
   6.144 +
   6.145 +Build the Domain Control Tools
   6.146 +==============================
   6.147 +
   6.148 +Under '/xeno-1.2.bk/tools', there are three sub-directories:
   6.149 +'balloon', 'xc' and 'misc', each containing
   6.150 +a group of tools. You can enter any of these sub-directories
   6.151 +and type 'make' to compile the corresponding group of tools.
   6.152 +Or you can type 'make' under '/xeno-1.2.bk/tools' to compile
   6.153 +all the tools.
   6.154 +
   6.155 +In order to compile the control-interface library in 'xc' you must
   6.156 +have zlib and development headers installed. Also you will need at
   6.157 +least Python v2.2. 
   6.158 +
   6.159 +'make install' in the tools directory will place executables and
   6.160 +libraries in /usr/bin and /usr/lib. You will need to be root to do this!
   6.161 +
   6.162 +As noted earlier, 'make dist' installs files to a local 'install'
   6.163 +directory just outside the BK repository. These files will then need
   6.164 +to be installed manually onto the server.
   6.165 +
   6.166 +The Example Scripts
   6.167 +===================
   6.168 +
   6.169 +The scripts in tools/examples/ are generally useful for
   6.170 +administering a Xen-based system.  You can install them by running
   6.171 +'make install' in that directory.
   6.172 +
   6.173 +The python scripts (*.py) are the main tools for controlling
   6.174 +Xen domains.
   6.175 +
   6.176 +'defaults' and 'democd' are example configuration files for starting
   6.177 +new domains.
   6.178 +
   6.179 +'xendomains' is a Sys-V style init script for starting and stopping
   6.180 +Xen domains when the system boots / shuts down.
   6.181 +
   6.182 +These will be discussed below in more detail.
   6.183 +
   6.184 +
   6.185 +Installation
   6.186 +==============================
   6.187 +
   6.188 +First:
   6.189 +# cp /xeno-1.2.bk/xen/xen.gz /boot/xen.gz
   6.190 +# cp /xeno-1.2.bk/xenolinux-2.4.26/arch/xen/boot/xenolinux.gz /boot/xenolinux.gz
   6.191 +
   6.192 +Second, you must have 'GNU Grub' installed. Then you need to edit
   6.193 +the Grub configuration file '/boot/grub/menu.lst'.
   6.194 +
   6.195 +A typical Grub menu option might look like:
   6.196 +
   6.197 +title Xen 1.2 / XenoLinux 2.4.26
   6.198 +        kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 noht
   6.199 +        module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
   6.200 +
   6.201 +The first line specifies which Xen image to use, and what command line
   6.202 +arguments to pass to Xen. In this case we set the maximum amount of
   6.203 +memory to allocate to domain0, and enable serial I/O at 115200 baud.
   6.204 +We could also disable smp support (nosmp) or disable hyper-threading
   6.205 +support (noht). If you have multiple network interfaces you can use
   6.206 +ifname=ethXX to select which one to use. If your network card is
   6.207 +unsupported, use ifname=dummy.
   6.208 +
   6.209 +The second line specifies which XenoLinux image to use, and the
   6.210 +standard linux command line arguments to pass to the kernel. In this
   6.211 +case, we're configuring the root partition and stating that it should
   6.212 +(initially) be mounted read-only (normal practice). 
   6.213 +
   6.214 +The following is a list of command line arguments to pass to Xen:
   6.215 +
   6.216 + ignorebiostables Disable parsing of BIOS-supplied tables. This may
   6.217 +                  help with some chipsets that aren't fully supported
   6.218 +                  by Xen. If you specify this option then ACPI tables are
   6.219 +                  also ignored, and SMP support is disabled.
   6.220 +
   6.221 + noreboot         Don't reboot the machine automatically on errors.
   6.222 +                  This is useful to catch debug output if you aren't
   6.223 +                  catching console messages via the serial line.
   6.224 +
   6.225 + nosmp            Disable SMP support.
   6.226 +                  This option is implied by 'ignorebiostables'.
   6.227 +
   6.228 + noacpi           Disable ACPI tables, which confuse Xen on some chipsets.
   6.229 +                  This option is implied by 'ignorebiostables'.
   6.230 +
   6.231 + watchdog         Enable NMI watchdog which can report certain failures.
   6.232 +
   6.233 + noht             Disable Hyperthreading.
   6.234 +
   6.235 + ifname=ethXX     Select which Ethernet interface to use.
   6.236 +
   6.237 + ifname=dummy     Don't use any network interface.
   6.238 +
   6.239 + com1=<baud>,DPS[,<io_base>,<irq>]
   6.240 + com2=<baud>,DPS[,<io_base>,<irq>]
   6.241 +                  Xen supports up to two 16550-compatible serial ports.
   6.242 +                  For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
   6.243 +                  9600-baud port, 8 data bits, no parity, 1 stop bit,
   6.244 +                  I/O port base 0x408, IRQ 5.
   6.245 +                  If the I/O base and IRQ are standard (com1:0x3f8,4;
   6.246 +                  com2:0x2f8,3) then they need not be specified.
   6.247 +
   6.248 + console=<specifier list>
   6.249 +                  Specify the destination for Xen console I/O.
   6.250 +                  This is a comma-separated list of, for example:
   6.251 +                   vga:  use VGA console and allow keyboard input
   6.252 +                   com1: use serial port com1
   6.253 +                   com2H: use serial port com2. Transmitted chars will
   6.254 +                          have the MSB set. Received chars must have
   6.255 +                          MSB set.
   6.256 +                   com2L: use serial port com2. Transmitted chars will
   6.257 +                          have the MSB cleared. Received chars must
   6.258 +                          have MSB cleared.
   6.259 +                  The latter two examples allow a single port to be
   6.260 +                  shared by two subsystems (eg. console and
   6.261 +                  debugger). Sharing is controlled by MSB of each
   6.262 +                  transmitted/received character.
   6.263 + [NB. Default for this option is 'com1,tty']
   6.264 +
   6.265 + dom0_mem=xxx     Set the maximum amount of memory for domain0.
   6.266 +
   6.267 + tbuf_size=xxx    Set the size of the per-cpu trace buffers, in pages
   6.268 +                  (default 1).  Note that the trace buffers are only
   6.269 +                  enabled in debug builds.  Most users can ignore
   6.270 +                  this feature completely.
   6.271 +
   6.272 + sched=xxx        Select the CPU scheduler Xen should use.  The current
   6.273 +                  possibilities are 'bvt', 'atropos' and 'rrobin'.  The
   6.274 +                  default is 'bvt'.  For more information see
   6.275 +                  Sched-HOWTO.txt.
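To make the option syntax above concrete, the following hypothetical parser (an illustration, not code from Xen) splits a Xen command line into bare flags and key=value options:

```python
# Hypothetical parser for a Xen boot command line -- an illustration
# of the option syntax above, NOT code taken from Xen itself.
def parse_xen_cmdline(cmdline):
    flags, options = [], {}
    for word in cmdline.split():
        if '=' in word:
            # key=value options such as dom0_mem=131072 or com1=...
            key, value = word.split('=', 1)
            options[key] = value
        else:
            # bare flags such as noht, nosmp, noreboot
            flags.append(word)
    return flags, options

flags, options = parse_xen_cmdline('dom0_mem=131072 com1=115200,8n1 noht')
# flags   == ['noht']
# options == {'dom0_mem': '131072', 'com1': '115200,8n1'}
```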
   6.276 +
   6.277 +Boot into Domain 0
   6.278 +==============================
   6.279 +
   6.280 +Reboot your computer. After selecting the kernel to boot, stand back
   6.281 +and watch Xen boot, closely followed by "domain 0" running the
   6.282 +XenoLinux kernel.  Depending on which root partition you have assigned
   6.283 +to the XenoLinux kernel in the Grub configuration file, you can use the
   6.284 +corresponding username / password to log in.
   6.285 +
   6.286 +Once logged in, it should look just like any regular linux box. All
   6.287 +the usual tools and commands should work as normal.
   6.288 +
   6.289 +
   6.290 +Start New Domains
   6.291 +==============================
   6.292 +
   6.293 +You must be 'root' to start new domains.
   6.294 +
   6.295 +Make sure you have successfully configured at least one
   6.296 +physical network interface. Then:
   6.297 +
   6.298 +# xen_nat_enable
   6.299 +
   6.300 +The xc_dom_create.py program is useful for starting Xen domains.
   6.301 +You can specify configuration files using the -f switch on the command
   6.302 +line.  The default configuration is in /etc/xc/defaults.  You can
   6.303 +create custom versions of this to suit your local configuration.
   6.304 +
   6.305 +You can override the settings in a configuration file using command
   6.306 +line arguments to xc_dom_create.py.  However, you may find it simplest
   6.307 +to create a separate configuration file for each domain you start.
   6.308 +
   6.309 +xc_dom_create.py will print the local TCP port to which you should
   6.310 +connect to perform console I/O. A suitable console client is provided
   6.311 +by the Python module xenctl.console_client: running this module from
   6.312 +the command line with <host> and <port> parameters will start a
   6.313 +terminal session. This module is also installed as /usr/bin/xencons,
   6.314 +from a copy in tools/misc/xencons.  An alternative to manually running
   6.315 +a terminal client is to specify '-c' to xc_dom_create.py, or add
   6.316 +'auto_console=True' to the defaults file. This will cause
   6.317 +xc_dom_create.py to automatically become the console terminal after
   6.318 +starting the domain.
   6.319 +
   6.320 +Boot-time output will be directed to this console by default, because
   6.321 +the console name is tty0. It is also possible to log in via the
   6.322 +virtual console --- once again, your normal startup scripts will work
   6.323 +as normal (e.g., by running mingetty on tty1-7).  The device node to
   6.324 +which the virtual console is attached can be configured by specifying
   6.325 +'xencons=' on the OS command line: 
   6.326 + 'xencons=off' --> disable virtual console
   6.327 + 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
   6.328 + 'xencons=ttyS' --> attach console to /dev/ttyS0
   6.329 +
   6.330 +
   6.331 +Manage Running Domains
   6.332 +==============================
   6.333 +
   6.334 +You can see a list of existing domains with:
   6.335 +# xc_dom_control.py list
   6.336 +
   6.337 +In order to stop a domain, you use:
   6.338 +# xc_dom_control.py stop <domain_id>
   6.339 +
   6.340 +To shutdown a domain cleanly use:
   6.341 +# xc_dom_control.py shutdown <domain_id>
   6.342 +
   6.343 +To destroy a domain immediately:
   6.344 +# xc_dom_control.py destroy <domain_id>
   6.345 +
   6.346 +There are other more advanced options, including pinning domains to
   6.347 +specific CPUs and saving / resuming domains to / from disk files.  To
   6.348 +get more information, run the tool without any arguments:
   6.349 +# xc_dom_control.py
   6.350 +
   6.351 +There is more information available in the Xen README files, the
   6.352 +VBD-HOWTO and the contributed FAQ / HOWTO documents on the web.
   6.353 +
   6.354 +
   6.355 +Other Control Tasks using Python
   6.356 +================================
   6.357 +
   6.358 +A Python module 'Xc' is installed as part of the tools-install
   6.359 +process. This can be imported, and an 'xc object' instantiated, to
   6.360 +provide access to privileged command operations:
   6.361 +
   6.362 +# import Xc
   6.363 +# xc = Xc.new()
   6.364 +# dir(xc)
   6.365 +# help(xc.domain_create)
   6.366 +
   6.367 +In this way you can see that the class 'xc' contains useful
   6.368 +documentation for you to consult.
   6.369 +
   6.370 +A further package of useful routines (xenctl) is also installed:
   6.371 +
   6.372 +# import xenctl.utils
   6.373 +# help(xenctl.utils)
   6.374 +
   6.375 +You can use these modules to write your own custom scripts or you can
   6.376 +customise the scripts supplied in the Xen distribution.
   6.377 +
   6.378 +
   6.379 +Automatically start / stop domains at boot / shutdown
   6.380 +=====================================================
   6.381 +
   6.382 +A Sys-V style init script for RedHat systems is provided in
   6.383 +tools/examples/xendomains.  When you run 'make install' in that
   6.384 +directory, it should be automatically copied to /etc/init.d/.  You can
   6.385 +then enable it using the chkconfig command, e.g.:
   6.386 +
   6.387 +# chkconfig --add xendomains
   6.388 +
   6.389 +By default, this will start the boot-time domains in runlevels 3, 4
   6.390 +and 5.  To specify that a domain should start at boot time, place its
   6.391 +configuration file (or a link to it) under /etc/xc/auto/.
   6.392 +
   6.393 +The script will also stop ALL domains when the system is shut down,
   6.394 +even domains that it did not start originally.
   6.395 +
   6.396 +You can also use the "service" command (part of the RedHat standard
   6.397 +distribution) to run this script manually, e.g:
   6.398 +
   6.399 +# service xendomains start
   6.400 +
   6.401 +Starts all the domains with config files under /etc/xc/auto/.
   6.402 +
   6.403 +# service xendomains stop
   6.404 +
   6.405 +Shuts down ALL running Xen domains.
     7.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     7.2 +++ b/docs/HOWTOs/XenDebugger-HOWTO	Wed Jun 09 15:42:10 2004 +0000
     7.3 @@ -0,0 +1,273 @@
     7.4 +Pervasive Debugging 
     7.5 +===================
     7.6 +
     7.7 +Alex Ho (alex.ho at cl.cam.ac.uk)
     7.8 +
     7.9 +Introduction
    7.10 +------------
    7.11 +
    7.12 +The pervasive debugging project is leveraging Xen to 
    7.13 +debug distributed systems.  We have added a gdb stub
    7.14 +to Xen to allow for remote debugging of both Xen and
    7.15 +guest operating systems.  More information about the
    7.16 +pervasive debugger is available at: http://www.cl.cam.ac.uk/netos/pdb
    7.17 +
    7.18 +
    7.19 +Implementation
    7.20 +--------------
    7.21 +
    7.22 +The gdb stub communicates with gdb running over a serial line.
    7.23 +The main entry point is pdb_handle_exception() which is invoked
    7.24 +from:    pdb_key_pressed()    ('D' on the console)
    7.25 +         do_int3_exception()  (interrupt 3: breakpoint exception)
    7.26 +         do_debug()           (interrupt 1: debug exception)
    7.27 +
    7.28 +This accepts characters from the serial port and passes gdb
    7.29 +commands to pdb_process_command() which implements the gdb stub
    7.30 +interface.  This file draws heavily from the kgdb project and
    7.31 +the sample gdbstub provided with gdb.
    7.32 +
    7.33 +The stub can examine registers, single step and continue, and
    7.34 +read and write memory (in Xen, a domain, or a Linux process'
    7.35 +address space).  The debugger does not currently trace the 
    7.36 +current process, so all bets are off if a context switch occurs
    7.37 +in the domain.
    7.38 +
    7.39 +
    7.40 +Setup
    7.41 +-----
    7.42 +
    7.43 + +-------+ telnet +-----------+ serial +-------+ 
    7.44 + |  GDB  |--------|  nsplitd  |--------|  Xen  |
    7.45 + +-------+        +-----------+        +-------+ 
    7.46 +
    7.47 +To run pdb, Xen must be appropriately configured and 
    7.48 +a suitable serial interface attached to the target machine.
    7.49 +GDB and nsplitd can run on the same machine.
    7.50 +
    7.51 +Xen Configuration
    7.52 +
    7.53 +  Add the "pdb=xxx" option to your Xen boot command line
    7.54 +  where xxx is one of the following values:
    7.55 +     com1    gdb stub should communicate on com1
    7.56 +     com1H   gdb stub should communicate on com1 (with high bit set)
    7.57 +     com2    gdb stub should communicate on com2
    7.58 +     com2H   gdb stub should communicate on com2 (with high bit set)
    7.59 +
    7.60 +  Symbolic debugging information is quite helpful too:
    7.61 +  xeno.bk/xen/arch/i386/Rules.mk
    7.62 +    add -g to CFLAGS to compile Xen with symbols
    7.63 +  xeno.bk/xenolinux-2.4.24-sparse/arch/xen/Makefile
    7.64 +    add -g to CFLAGS to compile Linux with symbols
    7.65 +
    7.66 +  You may also want to consider dedicating a register to the
    7.67 +  frame pointer (disable the -fomit-frame-pointer compile flag).
    7.68 +
    7.69 +  When booting Xen and domain 0, look for the console text 
    7.70 +  "Initializing pervasive debugger (PDB)" just before DOM0 starts up.
    7.71 +
    7.72 +Serial Port Configuration
    7.73 +
    7.74 +  pdb expects to communicate with gdb using the serial port.  Since 
    7.75 +  this port is often shared with the machine's console output, pdb can
    7.76 +  discriminate its communication by setting the high bit of each byte.
    7.77 +
    7.78 +  A new tool has been added to the source tree which splits 
    7.79 +  the serial output from a remote machine into two streams: 
    7.80 +  one stream (without the high bit) is the console and 
    7.81 +  one stream (with the high bit stripped) is the pdb communication.
    7.82 +
    7.83 +  See:  xeno.bk/tools/nsplitd
    7.84 +
    7.85 +  nsplitd configuration
    7.86 +  ---------------------
    7.87 +  hostname$ more /etc/xinetd.d/nsplit
    7.88 +  service nsplit1
    7.89 +  {
    7.90 +        socket_type             = stream
    7.91 +        protocol                = tcp
    7.92 +        wait                    = no
    7.93 +        user                    = wanda
    7.94 +        server                  = /usr/sbin/in.nsplitd
    7.95 +        server_args             = serial.cl.cam.ac.uk:wcons00
    7.96 +        disable                 = no
    7.97 +        only_from               = 128.232.0.0/17 127.0.0.1
    7.98 +  }
    7.99 +
   7.100 +  hostname$ egrep 'wcons00|nsplit1' /etc/services
   7.101 +  wcons00         9600/tcp        # Wanda remote console
   7.102 +  nsplit1         12010/tcp       # Nemesis console splitter ports.
   7.103 +
   7.104 +  Note: nsplitd was originally written for the Nemesis project
   7.105 +  at Cambridge.
   7.106 +
   7.107 +  After nsplitd accepts a connection on <port> (12010 in the above
   7.108 +  example), it starts listening on port <port + 1>.  Characters sent 
   7.109 +  to the <port + 1> will have the high bit set and vice versa for 
   7.110 +  characters received.
   7.111 +
   7.112 +  You can connect to the nsplitd using
   7.113 +  'tools/xenctl/lib/console_client.py <host> <port>'
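  The splitting rule nsplitd applies can be sketched as follows.  This
  is an illustrative Python reimplementation only, not the actual code
  in tools/nsplitd:

```python
def split_streams(data: bytes):
    """Split a mixed serial stream into (console, pdb) byte strings.

    Bytes with the high bit clear belong to the console; bytes with
    the high bit set are pdb traffic and have the high bit stripped
    before being handed to gdb.
    """
    console = bytearray()
    pdb = bytearray()
    for b in data:
        if b & 0x80:
            pdb.append(b & 0x7F)   # strip the high bit for gdb
        else:
            console.append(b)
    return bytes(console), bytes(pdb)
```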
   7.114 +
   7.115 +GDB 6.0
   7.116 +  pdb has been tested with gdb 6.0.  It should also work with
   7.117 +  earlier versions.
   7.118 +
   7.119 +
   7.120 +Usage
   7.121 +-----
   7.122 +
   7.123 +1. Boot Xen and Linux
   7.124 +2. Interrupt Xen by pressing 'D' at the console
   7.125 +   You should see the console message: 
   7.126 +   (XEN) pdb_handle_exception [0x88][0x101000:0xfc5e72ac]
   7.127 +   At this point Xen is frozen and the pdb stub is waiting for gdb commands 
   7.128 +   on the serial line.
   7.129 +3. Attach with gdb
   7.130 +   (gdb) file xeno.bk/xen/xen
   7.131 +   Reading symbols from xeno.bk/xen/xen...done.
   7.132 +   (gdb) target remote <hostname>:<port + 1>              /* contact nsplitd */
   7.133 +   Remote debugging using serial.srg:12131
   7.134 +   continue_cpu_idle_loop () at current.h:10
   7.135 +   warning: shared library handler failed to enable breakpoint
   7.136 +   (gdb) break __enter_scheduler
   7.137 +   Breakpoint 1 at 0xfc510a94: file schedule.c, line 330.
   7.138 +   (gdb) cont
   7.139 +   Continuing.
   7.140 +
   7.141 +   Program received signal SIGTRAP, Trace/breakpoint trap.
   7.142 +   __enter_scheduler () at schedule.c:330
   7.143 +   (gdb) step
   7.144 +   (gdb) step
   7.145 +   (gdb) print next            /* the variable prev has been optimized away! */
   7.146 +   $1 = (struct task_struct *) 0x0
   7.147 +   (gdb) delete
   7.148 +   Delete all breakpoints? (y or n) y
   7.149 +4. You can add additional symbols to gdb
   7.150 +   (gdb) add-sym xenolinux-2.4.24/vmlinux
   7.151 +   add symbol table from file "xenolinux-2.4.24/vmlinux" at
   7.152 +   (y or n) y
   7.153 +   Reading symbols from xenolinux-2.4.24/vmlinux...done.
   7.154 +   (gdb) x/s cpu_vendor_names[0]
   7.155 +   0xc01530d2 <cpdext+62898>:	 "Intel"
   7.156 +   (gdb) break free_uid
   7.157 +   Breakpoint 2 at 0xc0012250
   7.158 +   (gdb) cont
   7.159 +   Continuing.                                  /* run a command in domain 0 */
   7.160 +
   7.161 +   Program received signal SIGTRAP, Trace/breakpoint trap.
   7.162 +   free_uid (up=0xbffff738) at user.c:77
   7.163 +
   7.164 +   (gdb) print *up
   7.165 +   $2 = {__count = {counter = 0}, processes = {counter = 135190120}, files = {
   7.166 +       counter = 0}, next = 0x395, pprev = 0xbffff878, uid = 134701041}
   7.167 +   (gdb) finish
   7.168 +   Run till exit from #0  free_uid (up=0xbffff738) at user.c:77
   7.169 +
   7.170 +   Program received signal SIGTRAP, Trace/breakpoint trap.
   7.171 +   release_task (p=0xc2da0000) at exit.c:51
   7.172 +   (gdb) print *p
   7.173 +   $3 = {state = 4, flags = 4, sigpending = 0, addr_limit = {seg = 3221225472},
   7.174 +     exec_domain = 0xc016a040, need_resched = 0, ptrace = 0, lock_depth = -1, 
   7.175 +     counter = 1, nice = 0, policy = 0, mm = 0x0, processor = 0, 
   7.176 +     cpus_runnable = 1, cpus_allowed = 4294967295, run_list = {next = 0x0, 
   7.177 +       prev = 0x0}, sleep_time = 18995, next_task = 0xc017c000, 
   7.178 +     prev_task = 0xc2f94000, active_mm = 0x0, local_pages = {next = 0xc2da0054,
   7.179 +       prev = 0xc2da0054}, allocation_order = 0, nr_local_pages = 0, 
   7.180 +     ...
   7.181 +5. To resume Xen, enter the "continue" command to gdb.
   7.182 +   This sends the packet $c#63 along the serial channel.
   7.183 +
   7.184 +   (gdb) cont
   7.185 +   Continuing.
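   The framing of that packet follows the gdb remote serial protocol:
   a '$', the command payload, a '#', then the modulo-256 sum of the
   payload bytes as two hex digits.  A minimal sketch of the framing
   (illustrative only):

```python
def gdb_packet(command: str) -> str:
    # gdb remote protocol framing: $<payload>#<two-hex-digit checksum>,
    # where the checksum is the sum of the payload bytes modulo 256
    checksum = sum(command.encode()) % 256
    return "$%s#%02x" % (command, checksum)
```

   For the continue command 'c' (ASCII 99, hex 63) this yields exactly
   the $c#63 packet shown above.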
   7.186 +
   7.187 +Debugging Multiple Domains & Processes
   7.188 +--------------------------------------
   7.189 +
   7.190 +pdb supports debugging multiple domains & processes.  You can switch
   7.191 +between different domains and processes within domains and examine
   7.192 +variables in each.
   7.193 +
   7.194 +The pdb context identifies the current debug target.  It is stored
   7.195 +in the xen variable pdb_ctx and defaults to xen.
   7.196 +
   7.197 +   target    pdb_ctx.domain    pdb_ctx.process
   7.198 +   ------    --------------    ---------------
   7.199 +    xen           -1                 -1
   7.200 +  guest os      0,1,2,...            -1
   7.201 +   process      0,1,2,...          0,1,2,...
   7.202 +
   7.203 +Unfortunately, gdb doesn't understand debugging multiple processes
   7.204 +simultaneously (we're working on it), so at present you are limited 
   7.205 +to just one set of symbols for symbolic debugging.  When debugging
   7.206 +processes, pdb currently supports just Linux 2.4.
   7.207 +
   7.208 +   define setup
   7.209 +      file xeno-clone/xeno.bk/xen/xen
   7.210 +      add-sym xeno-clone/xenolinux-2.4.25/vmlinux
   7.211 +      add-sym ~ach61/a.out
   7.212 +   end
   7.213 +
   7.214 +
   7.215 +1. Connect with gdb as before.  A couple of Linux-specific 
   7.216 +   symbols need to be defined.
   7.217 +
   7.218 +   (gdb) target remote <hostname>:<port + 1>              /* contact nsplitd */
   7.219 +   Remote debugging using serial.srg:12131
   7.220 +   continue_cpu_idle_loop () at current.h:10
   7.221 +   warning: shared library handler failed to enable breakpoint
   7.222 +   (gdb) set pdb_pidhash_addr = &pidhash
   7.223 +   (gdb) set pdb_init_task_union_addr = &init_task_union
   7.224 +
   7.225 +2. The pdb context defaults to Xen and we can read Xen's memory.
   7.226 +   An attempt to access domain 0 memory fails.
   7.227 +  
   7.228 +   (gdb) print pdb_ctx
   7.229 +   $1 = {valid = 0, domain = -1, process = -1, ptbr = 1052672}
   7.230 +   (gdb) print hexchars
   7.231 +   $2 = "0123456789abcdef"
   7.232 +   (gdb) print cpu_vendor_names
   7.233 +   Cannot access memory at address 0xc0191f80
   7.234 +
   7.235 +3. Now we change to domain 0.  In addition to changing pdb_ctx.domain,
   7.236 +   we need to change pdb_ctx.valid to signal pdb of the change.
   7.237 +   It is now possible to examine Xen and Linux memory.
   7.238 +
   7.239 +   (gdb) set pdb_ctx.domain=0
   7.240 +   (gdb) set pdb_ctx.valid=1
   7.241 +   (gdb) print hexchars
   7.242 +   $3 = "0123456789abcdef"
   7.243 +   (gdb) print cpu_vendor_names
   7.244 +   $4 = {0xc0158b46 "Intel", 0xc0158c37 "Cyrix", 0xc0158b55 "AMD", 
   7.245 +     0xc0158c3d "UMC", 0xc0158c41 "NexGen", 0xc0158c48 "Centaur", 
   7.246 +     0xc0158c50 "Rise", 0xc0158c55 "Transmeta"}
   7.247 +
   7.248 +4. Now change to a process within domain 0.  Again, we need to
   7.249 +   change pdb_ctx.valid in addition to pdb_ctx.process.
   7.250 +
   7.251 +   (gdb) set pdb_ctx.process=962
   7.252 +   (gdb) set pdb_ctx.valid =1
   7.253 +   (gdb) print pdb_ctx
   7.254 +   $1 = {valid = 0, domain = 0, process = 962, ptbr = 52998144}
   7.255 +   (gdb) print aho_a
   7.256 +   $2 = 20
   7.257 +
   7.258 +5. Now we can read the same variable from another process running
   7.259 +   the same executable in another domain.
   7.260 +
   7.261 +   (gdb) set pdb_ctx.domain=1
   7.262 +   (gdb) set pdb_ctx.process=1210
   7.263 +   (gdb) set pdb_ctx.valid=1
   7.264 +   (gdb) print pdb_ctx
   7.265 +   $3 = {valid = 0, domain = 1, process = 1210, ptbr = 70574080}
   7.266 +   (gdb) print aho_a
   7.267 +   $4 = 27
   7.268 +
   7.269 +
   7.270 +
   7.271 +
   7.272 +Changes
   7.273 +-------
   7.274 +
   7.275 +04.02.05 aho creation
   7.276 +04.03.31 aho add description on debugging multiple domains
     8.1 --- a/docs/Sched-HOWTO.txt	Wed Jun 09 12:41:57 2004 +0000
     8.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     8.3 @@ -1,135 +0,0 @@
     8.4 -Xen Scheduler HOWTO
     8.5 -===================
     8.6 -
     8.7 -by Mark Williamson
     8.8 -(c) 2004 Intel Research Cambridge
     8.9 -
    8.10 -
    8.11 -Introduction
    8.12 -------------
    8.13 -
    8.14 -Xen offers a choice of CPU schedulers.  All available schedulers are
    8.15 -included in Xen at compile time and the administrator may select a
    8.16 -particular scheduler using a boot-time parameter to Xen.  It is
    8.17 -expected that administrators will choose the scheduler most
    8.18 -appropriate to their application and configure the machine to boot
    8.19 -with that scheduler.
    8.20 -
    8.21 -Note: the default scheduler is the Borrowed Virtual Time (BVT)
    8.22 -scheduler which was also used in previous releases of Xen.  No
    8.23 -configuration changes are required to keep using this scheduler.
    8.24 -
    8.25 -This file provides a brief description of the CPU schedulers available
    8.26 -in Xen, what they are useful for and the parameters that are used to
    8.27 -configure them.  This information is necessarily fairly technical at
    8.28 -the moment.  The recommended way to fully understand the scheduling
    8.29 -algorithms is to read the relevant research papers.
    8.30 -
    8.31 -The interface to the schedulers is basically "raw" at the moment,
    8.32 -without sanity checking - administrators should be careful when
    8.33 -setting the parameters since it is possible for a mistake to hang
    8.34 -domains, or the entire system (in particular, double check parameters
    8.35 -for sanity and make sure that DOM0 will get enough CPU time to remain
    8.36 -usable).  Note that xc_dom_control.py takes time values in
    8.37 -nanoseconds.
    8.38 -
    8.39 -Future tools will implement friendlier control interfaces.
    8.40 -
    8.41 -
    8.42 -Borrowed Virtual Time (BVT)
    8.43 ----------------------------
    8.44 -
    8.45 -All releases of Xen have featured the BVT scheduler, which is used to
    8.46 -provide proportional fair shares of the CPU based on weights assigned
    8.47 -to domains.  BVT is "work conserving" - the CPU will never be left
    8.48 -idle if there are runnable tasks.
    8.49 -
    8.50 -BVT uses "virtual time" to make decisions on which domain should be
    8.51 -scheduled on the processor.  Each time a scheduling decision is
    8.52 -required, BVT evaluates the "Effective Virtual Time" of all domains
    8.53 -and then schedules the domain with the least EVT.  Domains are allowed
    8.54 -to "borrow" virtual time by "time warping", which reduces their EVT by
    8.55 -a certain amount, so that they may be scheduled sooner.  In order to
    8.56 -maintain long term fairness, there are limits on when a domain can
    8.57 -time warp and for how long.  [ For more details read the SOSP'99 paper
    8.58 -by Duda and Cheriton ]
    8.59 -
    8.60 -In the Xen implementation, domains time warp when they unblock, so
    8.61 -that domain wakeup latencies are reduced.
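-The selection rule described above can be sketched as follows.  This
-is an illustrative model only (the field names are invented), not
-Xen's implementation:

```python
def effective_vt(domain):
    # EVT = actual virtual time, reduced by the warp amount while the
    # domain is time-warping
    avt = domain["avt"]
    return avt - domain["warp"] if domain["warping"] else avt

def pick_next(runnable):
    # BVT runs the runnable domain with the least effective virtual time
    return min(runnable, key=effective_vt)
```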
    8.62 -
    8.63 -The BVT algorithm uses the following per-domain parameters (set using
    8.64 -xc_dom_control.py cpu_bvtset):
    8.65 -
    8.66 -* mcuadv - the MCU (Minimum Charging Unit) advance determines the
    8.67 -           proportional share of the CPU that a domain receives.  It
    8.68 -           is set inversely proportionally to a domain's sharing weight.
    8.69 -* warp   - the amount of "virtual time" the domain is allowed to warp
    8.70 -           backwards
    8.71 -* warpl  - the warp limit is the maximum time a domain can run warped for
    8.72 -* warpu  - the unwarp requirement is the minimum time a domain must
    8.73 -           run unwarped for before it can warp again
    8.74 -
    8.75 -BVT also has the following global parameter (set using
    8.76 -xc_dom_control.py cpu_bvtslice):
    8.77 -
    8.78 -* ctx_allow - the context switch allowance is similar to the "quantum"
    8.79 -              in traditional schedulers.  It is the minimum time that
    8.80 -              a scheduled domain will be allowed to run before being
    8.81 -              pre-empted.  This prevents thrashing of the CPU.
    8.82 -
    8.83 -BVT can now be selected by passing the 'sched=bvt' argument to Xen at
    8.84 -boot-time and is the default scheduler if no 'sched' argument is
    8.85 -supplied.
    8.86 -
    8.87 -Atropos
    8.88 --------
    8.89 -
    8.90 -Atropos is a scheduler originally developed for the Nemesis multimedia
    8.91 -operating system.  Atropos can be used to reserve absolute shares of
    8.92 -the CPU.  It also includes some features to improve the efficiency of
    8.93 -domains that block for I/O and to allow spare CPU time to be shared
    8.94 -out.
    8.95 -
    8.96 -The Atropos algorithm has the following parameters for each domain
    8.97 -(set using xc_dom_control.py cpu_atropos_set):
    8.98 -
    8.99 - * slice    - The length of time per period that a domain is guaranteed.
   8.100 - * period   - The period over which a domain is guaranteed to receive
   8.101 -              its slice of CPU time.
   8.102 - * latency  - The latency hint is used to control how soon after
   8.103 -              waking up a domain should be scheduled.
   8.104 - * xtratime - This is a true (1) / false (0) flag that specifies whether
   8.105 -             a domain should be allowed a share of the system slack time.
   8.106 -
   8.107 -Every domain has an associated period and slice.  The domain should
   8.108 -receive 'slice' nanoseconds every 'period' nanoseconds.  This allows
   8.109 -the administrator to configure both the absolute share of the CPU a
   8.110 -domain receives and the frequency with which it is scheduled.  When
   8.111 -domains unblock, their period is reduced to the value of the latency
   8.112 -hint (the slice is scaled accordingly so that they still get the same
   8.113 -proportion of the CPU).  For each subsequent period, the slice and
   8.114 -period times are doubled until they reach their original values.
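-The unblocking behaviour can be sketched numerically.  This is an
-illustrative model of the rescaling described above, not the Atropos
-source:

```python
def unblock(period, slice_, latency):
    # on wakeup, shrink the period to the latency hint and scale the
    # slice to preserve the ratio slice/period
    return latency, slice_ * latency / period

def next_period(period, slice_, orig_period, orig_slice):
    # each subsequent period, both values double until they recover
    # their originally configured values
    return min(period * 2, orig_period), min(slice_ * 2, orig_slice)
```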
   8.115 -
   8.116 -Atropos is selected by adding 'sched=atropos' to Xen's boot-time
   8.117 -arguments.
   8.118 -
   8.119 -Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
   8.120 -more CPU than is available - the utilisation should be kept to
   8.121 -slightly less than 100% in order to ensure predictable behaviour).
   8.122 -
   8.123 -Round-Robin
   8.124 ------------
   8.125 -
   8.126 -The Round-Robin scheduler is provided as a simple example of Xen's
   8.127 -internal scheduler API.  For production systems, one of the other
   8.128 -schedulers should be used, since they are more flexible and more
   8.129 -efficient.
   8.130 -
   8.131 -The Round-robin scheduler has one global parameter (set using
   8.132 -xc_dom_control.py cpu_rrobin_slice):
   8.133 -
   8.134 - * rr_slice - The time for which each domain runs before the next
   8.135 -              scheduling decision is made.
   8.136 -
   8.137 -The Round-Robin scheduler can be selected by adding 'sched=rrobin' to
   8.138 -Xen's boot-time arguments.
     9.1 --- a/docs/VBD-HOWTO.txt	Wed Jun 09 12:41:57 2004 +0000
     9.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     9.3 @@ -1,437 +0,0 @@
     9.4 -Virtual Block Devices / Virtual Disks in Xen - HOWTO
     9.5 -====================================================
     9.6 -
     9.7 -HOWTO for Xen 1.2
     9.8 -
     9.9 -Mark A. Williamson (mark.a.williamson@intel.com)
    9.10 -(C) Intel Research Cambridge 2004
    9.11 -
    9.12 -Introduction
    9.13 -------------
    9.14 -
    9.15 -This document describes the new Virtual Block Device (VBD) and Virtual Disk
    9.16 -features available in Xen release 1.2.  First, a brief introduction to some
    9.17 -basic disk concepts on a Xen system:
    9.18 -
    9.19 -Virtual Block Devices (VBDs):
    9.20 -	VBDs are the disk abstraction provided by Xen.  All XenoLinux disk accesses
    9.21 -	go through the VBD driver.  Using the VBD functionality, it is possible
    9.22 -	to selectively grant domains access to portions of the physical disks
    9.23 -	in the system.
    9.24 -
    9.25 -	A virtual block device can also consist of multiple extents from the
    9.26 -	physical disks in the system, allowing them to be accessed as a single
    9.27 -	uniform device from the domain with access to that VBD.  The
    9.28 -	functionality is somewhat similar to that underpinning LVM, since
    9.29 -	you can combine multiple regions from physical devices into a single
    9.30 -	logical device, from the point of view of a guest virtual machine.
    9.31 -
    9.32 -	Everyone who boots Xen / XenoLinux from a hard drive uses VBDs
    9.33 -	but for some uses they can almost be ignored.
    9.34 -
    9.35 -Virtual Disks (VDs):
    9.36 -	VDs are an abstraction built on top of the functionality provided by
    9.37 -	VBDs.  The VD management code maintains a "free pool" of disk space on
    9.38 -	the system that has been reserved for use with VDs.  The tools can
    9.39 -	automatically allocate collections of extents from this free pool to
    9.40 -	create "virtual disks" on demand.
    9.41 -
    9.42 -	VDs can then be used just like normal disks by domains.  VDs appear
    9.43 -	just like any other disk to guest domains, since they use the same VBD
    9.44 -	abstraction, as provided by Xen.
    9.45 -
    9.46 -	Using VDs is optional, since it's always possible to dedicate
    9.47 -	partitions, or entire disks to your virtual machines.  VDs are handy
    9.48 -	when you have a dynamically changing set of virtual machines and you
    9.49 -	don't want to have to keep repartitioning in order to provide them with
    9.50 -	disk space.
    9.51 -
    9.52 -	Virtual Disks are rather like "logical volumes" in LVM.
    9.53 -
    9.54 -If that didn't all make sense, it doesn't matter too much ;-)  Using the
    9.55 -functionality is fairly straightforward and some examples will clarify things.
    9.56 -The text below expands a bit on the concepts involved, finishing up with a
    9.57 -walk-through of some simple virtual disk management tasks.
    9.58 -
    9.59 -
    9.60 -Virtual Block Devices
    9.61 ----------------------
    9.62 -
    9.63 -Before covering VD management, it's worth discussing some aspects of the VBD
    9.64 -functionality that will be useful to know.
    9.65 -
    9.66 -A VBD is made up of a number of extents from physical disk devices.  The
    9.67 -extents for a VBD don't have to be contiguous, or even on the same device.  Xen
    9.68 -performs address translation so that they appear as a single contiguous
    9.69 -device to a domain.
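-The address translation amounts to a lookup over an ordered extent
-list, sketched here in Python (the extent fields are invented for
-illustration and do not reflect Xen's internal structures):

```python
def translate(extents, sector):
    """Map a virtual sector to (device, physical sector).

    extents: ordered list of (device, start_sector, nr_sectors)
    tuples making up the VBD.
    """
    offset = sector
    for device, start, nr in extents:
        if offset < nr:
            return device, start + offset
        offset -= nr
    raise ValueError("sector beyond end of VBD")
```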
    9.70 -
    9.71 -When the VBD layer is used to give access to entire drives or entire
    9.72 -partitions, the VBDs simply consist of a single extent that corresponds to the
    9.73 -drive or partition used.  Lists of extents are usually only used when virtual
    9.74 -disks (VDs) are being used.
    9.75 -
    9.76 -Xen 1.2 and its associated XenoLinux release support automatic registration /
    9.77 -removal of VBDs.  It has always been possible to add a VBD to a running
    9.78 -XenoLinux domain but it was then necessary to run the "xen_vbd_refresh" tool in
    9.79 -order for the new device to be detected.  Nowadays, when a VBD is added, the
    9.80 -domain it's added to automatically registers the disk, with no special action
    9.81 -required from the user.
    9.82 -
    9.83 -Note that it is possible to use the VBD functionality to allow multiple domains
    9.84 -write access to the same areas of disk.  This is almost always a bad thing!
    9.85 -The provided example scripts for creating domains do their best to check that
    9.86 -disk areas are not shared unsafely and will catch many cases of this.  Setting
    9.87 -the vbd_expert variable in config files for xc_dom_create.py controls how
    9.88 -unsafe it allows VBD mappings to be - 0 (read only sharing allowed) should be
    9.89 -right for most people ;-).  Level 1 attempts to allow at most one writer to any
    9.90 -area of disk.  Level 2 allows multiple writers (i.e. anything!).
    9.91 -
    9.92 -
    9.93 -Virtual Disk Management
    9.94 ------------------------
    9.95 -
    9.96 -The VD management code runs entirely in user space.  The code is written in
    9.97 -Python and can therefore be accessed from custom scripts, as well as from the
    9.98 -convenience scripts provided.  The underlying VD database is a SQLite database
    9.99 -in /var/db/xen_vdisks.sqlite.
   9.100 -
   9.101 -Most virtual disk management can be performed using the xc_vd_tool.py script
   9.102 -provided in the tools/examples/ directory of the source tree.  It supports the
   9.103 -following operations:
   9.104 -
   9.105 -initialise -	     "Formats" a partition or disk device for use storing
   9.106 -		     virtual disks.  This does not actually write data to the
   9.107 -		     specified device.  Rather, it adds the device to the VD
   9.108 -		     free-space pool, for later allocation.
   9.109 -
   9.110 -		     You should only add devices that correspond directly to
   9.111 -		     physical disks / partitions - trying to use a VBD that you
   9.112 -		     have created yourself as part of the free space pool has
   9.113 -		     undefined (possibly nasty) results.
   9.114 -
   9.115 -create -	     Creates a virtual disk of specified size by allocating space
   9.116 -		     from the free space pool.  The virtual disk is identified
   9.117 -		     in future by the unique ID returned by this script.
   9.118 -
   9.119 -		     The disk can be given an expiry time, if desired.  For
   9.120 -		     most users, the best idea is to specify a time of 0 (which
   9.121 -		     has the special meaning "never expire") and then
   9.122 -		     explicitly delete the VD when finished with it -
   9.123 -		     otherwise, VDs will disappear if allowed to expire.
   9.124 -
   9.125 -delete -	     Explicitly delete a VD.  Makes it disappear immediately!
   9.126 -
   9.127 -setexpiry -	     Allows the expiry time of a (not yet expired) virtual disk
   9.128 -		     to be modified.  Be aware the VD will disappear when the
   9.129 -		     time has expired.
   9.130 -
   9.131 -enlarge -            Increase the allocation of space to a virtual disk.
   9.132 -		     Currently this will not be immediately visible to running
   9.133 -		     domain(s) using it.  You can make it visible by destroying
   9.134 -		     the corresponding VBDs and then using xc_dom_control.py to
   9.135 -		     add them to the domain again.  Note: doing this to
   9.136 -		     filesystems that are in use may well cause errors in the
   9.137 -		     guest Linux, or even a crash although it will probably be
   9.138 -		     OK if you stop the domain before updating the VBD and
   9.139 -		     restart afterwards.
   9.140 -
   9.141 -import -	     Allocate a virtual disk and populate it with the contents of
   9.142 -		     some disk file.  This can be used to import root file system
   9.143 -		     images or to restore backups of virtual disks, for instance.
   9.144 -
   9.145 -export -	     Write the contents of a virtual disk out to a disk file.
   9.146 -		     Useful for creating disk images for use elsewhere, such as
   9.147 -		     standard root file systems and backups.
   9.148 -
   9.149 -list -		     List the non-expired virtual disks currently available in the
   9.150 -		     system.
   9.151 -
   9.152 -undelete -	     Attempts to recover an expired (or deleted) virtual disk.
   9.153 -
   9.154 -freespace -	     Get the free space (in megabytes) available for allocating
   9.155 -		     new virtual disk extents.
   9.156 -
   9.157 -The functionality provided by these scripts is also available directly from
   9.158 -Python functions in the xenctl.utils module - you can use this functionality in
   9.159 -your own scripts.
   9.160 -
   9.161 -Populating VDs:
   9.162 -
   9.163 -Once you've created a VD, you might want to populate it from DOM0 (for
   9.164 -instance, to put a root file system onto it for a guest domain).  This can be
   9.165 -done by creating a VBD for dom0 to access the VD through - this is discussed
   9.166 -below.
   9.167 -
   9.168 -More detail on how virtual disks work:
   9.169 -
   9.170 -When you "format" a device for virtual disks, the device is logically split up
   9.171 -into extents.  These extents are recorded in the Virtual Disk Management
   9.172 -database in /var/db/xen_vdisks.sqlite.
   9.173 -
   9.174 -When you use xc_vd_tool.py to create a virtual disk, some of the extents in
   9.175 -the free space pool are reallocated for that virtual disk and a record for that
   9.176 -VD is added to the database.  When VDs are mapped into domains as VBDs, the
   9.177 -system looks up the allocated extents for the virtual disk in order to set up
   9.178 -the underlying VBD.
   9.179 -
   9.180 -Free space is identified by the fact that it belongs to an "expired" disk.
   9.181 -When "initialising" with xc_vd_tool.py adds a real device to the free pool, it
   9.182 -actually divides the device into extents and adds them to an already-expired
   9.183 -virtual disk.  The allocated device is not written to during this operation -
   9.184 -its availability is simply recorded into the virtual disks database.
   9.185 -
   9.186 -If you set an expiry time on a VD, its extents will be liable to be reallocated
   9.187 -to new VDs as soon as that expiry time runs out.  Therefore, be careful when
   9.188 -setting expiry times!  Many users will find it simplest to set all VDs to not
   9.189 -expire automatically, then explicitly delete them later on.
   9.190 -
   9.191 -Deleted / expired virtual disks may sometimes be undeleted - currently this
   9.192 -only works when none of the virtual disk's extents have been reallocated to
   9.193 -other virtual disks, since that's the only situation where the disk is likely
   9.194 -to be fully intact.  You should try undeletion as soon as you realise you've
   9.195 -mistakenly deleted (or allowed to expire) a virtual disk.  At some point in the
   9.196 -future, an "unsafe" undelete which can recover what remains of partially
   9.197 -reallocated virtual disks may also be implemented.
   9.198 -
   9.199 -Security note:
   9.200 -
   9.201 -The disk space for VDs is not zeroed when it is initially added to the free
   9.202 -space pool OR when a VD expires OR when a VD is created.  Therefore, if this is
   9.203 -not done manually it is possible for a domain to read a VD to determine what
   9.204 -was written by previous owners of its constituent extents.  If this is a
   9.205 -problem, users should manually clean VDs in some way either on allocation, or
   9.206 -just before deallocation (automated support for this may be added at a later
   9.207 -date).
   9.208 -
   9.209 -
   9.210 -Side note: The xvd* devices
   9.211 ----------------------------
   9.212 -
   9.213 -The examples in this document make frequent use of the xvd* device nodes for
   9.214 -representing virtual block devices.  It is not a requirement to use these with
   9.215 -Xen, since VBDs can be mapped to any IDE or SCSI device node in the system.
   9.216 -Changing the references to xvd* nodes in the examples below to refer to
   9.217 -some unused hd* or sd* node would also be valid.
   9.218 -
   9.219 -They can be useful when accessing VBDs from dom0, since binding VBDs to xvd*
   9.220 -devices will avoid clashes with real IDE or SCSI drives.
   9.221 -
   9.222 -There is a shell script provided in tools/misc/xen-mkdevnodes to create these
   9.223 -nodes.  Specify on the command line the directory that the nodes should be
   9.224 -placed under (e.g. /dev):
   9.225 -
   9.226 -> cd {root of Xen source tree}/tools/misc/
   9.227 -> ./xen-mkdevnodes /dev
   9.228 -
   9.229 -
   9.230 -Dynamically Registering VBDs
   9.231 -----------------------------
   9.232 -
   9.233 -The domain control tool (xc_dom_control.py) includes the ability to add and
   9.234 -remove VBDs to / from running domains.  As usual, the command format is:
   9.235 -
   9.236 -xc_dom_control.py [operation] [arguments]
   9.237 -
   9.238 -The operations (and their arguments) are as follows:
   9.239 -
   9.240 -vbd_add dom uname dev mode - Creates a VBD corresponding to either a physical
   9.241 -		             device or a virtual disk and adds it as a
   9.242 -		             specified device under the target domain, with
   9.243 -		             either read or write access.
   9.244 -
   9.245 -vbd_remove dom dev	   - Removes the VBD associated with a specified device
   9.246 -			     node from the target domain.
   9.247 -
   9.248 -These operations are most useful when populating VDs.  VDs can't be populated
   9.249 -directly, since they don't correspond to real devices.  Using:
   9.250 -
   9.251 -  xc_dom_control.py vbd_add 0 vd:your_vd_id /dev/whatever w
   9.252 -
   9.253 -you can make a virtual disk available to DOM0.  Sensible devices to map VDs to
   9.254 -in DOM0 are the /dev/xvd* nodes, since that makes it obvious that they are Xen
   9.255 -virtual devices that don't correspond to real physical devices.
   9.256 -
   9.257 -You can then format, mount and populate the VD through the nominated device
   9.258 -node.  When you've finished, use:
   9.259 -
   9.260 -  xc_dom_control.py vbd_remove 0 /dev/whatever
   9.261 -
   9.262 -to revoke DOM0's access to it.  It's then ready for use in a guest domain.
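The attach / populate / detach cycle above can be sketched as a small Python helper. (Illustrative only: this function is not part of the Xen tools, it merely builds the argument vectors for the two commands shown, and the VD id "1234" is a placeholder.)

```python
# Build the xc_dom_control.py command lines for the populate-a-VD
# cycle: attach the VD to DOM0 read/write, then detach it again.

def vd_populate_commands(vd_id, dev="/dev/xvda"):
    """Return (attach, detach) argument vectors for populating a
    virtual disk from DOM0 (domain id 0) via the given device node."""
    attach = ["xc_dom_control.py", "vbd_add", "0", "vd:%s" % vd_id, dev, "w"]
    detach = ["xc_dom_control.py", "vbd_remove", "0", dev]
    return attach, detach

attach, detach = vd_populate_commands("1234")
print(" ".join(attach))
print(" ".join(detach))
```

Between the two commands you would make a filesystem on the device, mount it and copy in a root filesystem, exactly as described in the walk-through below.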
   9.263 -
   9.264 -
   9.265 -
   9.266 -You can also use this functionality to grant access to a physical device to a
   9.267 -guest domain - you might use this to temporarily share a partition, or to add
   9.268 -access to a partition that wasn't granted at boot time.
   9.269 -
   9.270 -When playing with VBDs, remember that in general, it is only safe for two
   9.271 -domains to have access to a file system if they both have read-only access.  You
   9.272 -shouldn't be trying to share anything which is writable, even if only by one
   9.273 -domain, unless you're really sure you know what you're doing!
   9.274 -
   9.275 -
   9.276 -Granting access to real disks and partitions
   9.277 ---------------------------------------------
   9.278 -
   9.279 -During the boot process, Xen automatically creates a VBD for each physical disk
   9.280 -and gives Dom0 read / write access to it.  This makes it look like Dom0 has
   9.281 -normal access to the disks, just as if Xen wasn't being used - in reality, even
   9.282 -Dom0 talks to disks through Xen VBDs.
   9.283 -
   9.284 -To give another domain access to a partition or whole disk then you need to
   9.285 -create a corresponding VBD for that partition, for use by that domain.  As for
   9.286 -virtual disks, you can grant access to a running domain, or specify that the
   9.287 -domain should have access when it is first booted.
   9.288 -
   9.289 -To grant access to a physical partition or disk whilst a domain is running, use
   9.290 -the xc_dom_control.py script - the usage is very similar to the case of adding
   9.291 -access to virtual disks to a running domain (described above).  Specify the device
   9.292 -as "phy:device", where device is the name of the device as seen from domain 0,
   9.293 -or from normal Linux without Xen.  For instance:
   9.294 -
   9.295 -> xc_dom_control.py vbd_add 2 phy:hdc /dev/whatever r
   9.296 -
   9.297 -will grant domain 2 read-only access to the device /dev/hdc (as seen from Dom0
   9.298 -/ normal Linux running on the same machine - i.e. the master drive on the
   9.299 -secondary IDE chain), as /dev/whatever in the target domain.
   9.300 -
   9.301 -Note that you can use this within domain 0 to map disks / partitions to other
   9.302 -device nodes within domain 0.  For instance, you could map /dev/hda to also be
   9.303 -accessible through /dev/xvda.  This is not generally recommended, since if you
   9.304 -(for instance) mount both device nodes read / write you could cause corruption
   9.305 -to the underlying filesystem.  It's also quite confusing ;-)
   9.306 -
   9.307 -To grant a domain access to a partition or disk when it boots, the appropriate
   9.308 -VBD needs to be created before the domain is started.  This can be done very
   9.309 -easily using the tools provided.  To specify this to the xc_dom_create.py tool
   9.310 -(either in a startup script or on the command line) use triples of the format:
   9.311 -
   9.312 -  phy:dev,target_dev,perms
   9.313 -
   9.314 -where dev is the device name as seen from Dom0, target_dev is the device you
   9.315 -want it to appear as in the target domain and perms is 'w' if you want to give
   9.316 -write privileges, or 'r' otherwise.
   9.317 -
   9.318 -These may either be specified on the command line or in an initialisation
   9.319 -script.  For instance, to grant the same access rights as described by the
   9.320 -command example above, you would use the triple:
   9.321 -
   9.322 -  phy:hdc,/dev/whatever,r
   9.323 -
   9.324 -If you are using a config file, then you should add this triple into the
   9.325 -vbd_list variable, for instance using the line:
   9.326 -
   9.327 -  vbd_list = [ ('phy:hdc', '/dev/whatever', 'r') ]
   9.328 -
   9.329 -(Note that you need to use quotes here, since config files are really small
   9.330 -Python scripts.)
   9.331 -
   9.332 -To specify the mapping on the command line, you'd use the -d switch and supply
   9.333 -the triple as the argument, e.g.:
   9.334 -
   9.335 -> xc_dom_create.py [other arguments] -d phy:hdc,/dev/whatever,r
   9.336 -
   9.337 -(You don't need to explicitly quote things in this case.)
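Either way, each triple decomposes into the same three fields. A minimal Python sketch of that decomposition (illustrative only, not taken from the xc tools):

```python
# Split a VBD triple such as "phy:hdc,/dev/whatever,r" into its
# three fields: uname (phy: or vd: prefixed), target device node,
# and the 'r'/'w' permission flag.

def parse_vbd_triple(triple):
    uname, target_dev, perms = triple.split(",")
    if not (uname.startswith("phy:") or uname.startswith("vd:")):
        raise ValueError("uname must be phy:<dev> or vd:<id>")
    if perms not in ("r", "w"):
        raise ValueError("perms must be 'r' or 'w'")
    return uname, target_dev, perms

print(parse_vbd_triple("phy:hdc,/dev/whatever,r"))
```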
   9.338 -
   9.339 -
   9.340 -Walk-through: Booting a domain from a VD
   9.341 -----------------------------------------
   9.342 -
   9.343 -As an example, here is a sequence of commands you might use to create a virtual
   9.344 -disk, populate it with a root file system and boot a domain from it.  These
   9.345 -steps assume that you've installed the example scripts somewhere on your PATH -
   9.346 -if you haven't done that, you'll need to specify a fully qualified pathname in
   9.347 -the examples below.  It is also assumed that you know how to use the
   9.348 -xc_dom_create.py tool (apart from configuring virtual disks!)
   9.349 -
   9.350 -[ This example is intended only for users of virtual disks (VDs).  You don't
   9.351 -need to follow this example if you'll be booting a domain from a dedicated
   9.352 -partition, since you can create that partition and populate it, directly from
   9.353 -Dom0, as normal. ]
   9.354 -
   9.355 -First, if you haven't done so already, initialise the free space pool by
   9.356 -adding a real partition to it.  The details are stored in the database, so
   9.357 -you'll only need to do it once.  You can also use this command to add further
   9.358 -partitions to the existing free space pool.
   9.359 -
   9.360 -> xc_vd_tool.py format /dev/<real partition>
   9.361 -
   9.362 -Now you'll want to allocate the space for your virtual disk.  Do so using the
   9.363 -following, specifying the size in megabytes.
   9.364 -
   9.365 -> xc_vd_tool.py create <size in megabytes>
   9.366 -
   9.367 -At this point, the program will tell you the virtual disk ID.  Note it down, as
   9.368 -it is how you will identify the virtual device in future.
   9.369 -
   9.370 -If you don't want the VD to be bootable (i.e. you're booting a domain from some
   9.371 -other medium and just want it to be able to access this VD), you can simply add
   9.372 -it to the vbd_list used by xc_dom_create.py, either by putting it in a config
   9.373 -file or by specifying it on the command line.  Formatting / populating of the
   9.374 -VD could then be done from that domain once it's started.
   9.375 -
   9.376 -If you want to boot off your new VD as well then you need to populate it with a
   9.377 -standard Linux root filesystem.  You'll need to temporarily add the VD to DOM0
   9.378 -in order to do this.  To give DOM0 r/w access to the VD, use the following
   9.379 -command line, substituting the ID you got earlier.
   9.380 -
   9.381 -> xc_dom_control.py vbd_add 0 vd:<id> /dev/xvda w
   9.382 -
   9.383 -This attaches the VD to the device /dev/xvda in domain zero, with read / write
   9.384 -privileges - you can use other device nodes if you choose to.
   9.385 -
   9.386 -Now make a filesystem on this device, mount it and populate it with a root
   9.387 -filesystem.  These steps are exactly the same as under normal Linux.  When
   9.388 -you've finished, unmount the filesystem again.
   9.389 -
   9.390 -You should now remove the VD from DOM0.  This will prevent you accidentally
   9.391 -changing it in DOM0, whilst the guest domain is using it (which could cause
   9.392 -filesystem corruption, and confuse Linux).
   9.393 -
   9.394 -> xc_dom_control.py vbd_remove 0 /dev/xvda
   9.395 -
   9.396 -It should now be possible to boot a guest domain from the VD.  To do this, you
   9.397 -should specify the VD's details in some way so that xc_dom_create.py will
   9.398 -be able to set up the corresponding VBD for the domain to access.  If you're
   9.399 -using a config file, you should include:
   9.400 -
   9.401 -  ('vd:<id>', '/dev/whatever', 'w')
   9.402 -
   9.403 -in the vbd_list, substituting the appropriate virtual disk ID, device node and
   9.404 -read / write setting.
   9.405 -
   9.406 -To specify access on the command line, as you start the domain, you would use
   9.407 -the -d switch (note that you don't need to use quote marks here):
   9.408 -
   9.409 -> xc_dom_create.py [other arguments] -d vd:<id>,/dev/whatever,w
   9.410 -
   9.411 -To tell Linux which device to boot from, you should either include:
   9.412 -
   9.413 -  root=/dev/whatever
   9.414 -
   9.415 -in your cmdline_root in the config file, or specify it on the command line,
   9.416 -using the -R option:
   9.417 -
   9.418 -> xc_dom_create.py [other arguments] -R root=/dev/whatever
   9.419 -
   9.420 -That should be it: sit back and watch your domain boot off its virtual disk!
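The configuration-file route for the same walk-through can be summed up in one hypothetical fragment (remember that config files are really small Python scripts; the id 1234 and the device names are placeholders for your own values):

```python
# Hypothetical xc_dom_create.py config fragment: boot a domain from
# virtual disk 1234, mapped read/write to /dev/xvda in the guest,
# with a matching root= kernel argument.

vbd_list = [ ('vd:1234', '/dev/xvda', 'w') ]
cmdline_root = 'root=/dev/xvda'
```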
   9.421 -
   9.422 -
   9.423 -Getting help
   9.424 -------------
   9.425 -
   9.426 -The main source of help using Xen is the developer's e-mail list:
   9.427 -<xen-devel@lists.sourceforge.net>.  The developers will help with problems,
   9.428 -listen to feature requests and do bug fixes.  It is, however, helpful if you
   9.429 -can look through the mailing list archives and HOWTOs provided to make sure
   9.430 -your question is not answered there.  If you post to the list, please provide
   9.431 -as much information as possible about your setup and your problem.
   9.432 -
   9.433 -There is also a general Xen FAQ, kindly started by Jan van Rensburg, which (at
   9.434 -time of writing) is located at: <http://xen.epiuse.com/xen-faq.txt>.
   9.435 -
   9.436 -Contributing
   9.437 -------------
   9.438 -
   9.439 -Patches and extra documentation are also welcome ;-) and should be posted
   9.440 -to the xen-devel e-mail list.
    10.1 --- a/docs/Xen-HOWTO.txt	Wed Jun 09 12:41:57 2004 +0000
    10.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
    10.3 @@ -1,367 +0,0 @@
    10.4 -###########################################
    10.5 -Xen HOWTO
    10.6 -
    10.7 -University of Cambridge Computer Laboratory
    10.8 -
    10.9 -http://www.cl.cam.ac.uk/netos/xen
   10.10 -#############################
   10.11 -
   10.12 -
   10.13 -Get Xen Source Code
   10.14 -=============================
   10.15 -
   10.16 -The public master BK repository for the 1.2 release lives at:
   10.17 -'bk://xen.bkbits.net/xeno-1.2.bk'
   10.18 -
   10.19 -To fetch a local copy, first download the BitKeeper tools at:
   10.20 -http://www.bitmover.com/download with username 'bitkeeper' and
   10.21 -password 'get bitkeeper'.
   10.22 -
   10.23 -Then install the tools and run:
   10.24 -# bk clone bk://xen.bkbits.net/xeno-1.2.bk
   10.25 -
   10.26 -Under your current directory, a new directory named 'xeno-1.2.bk' has
   10.27 -been created, which contains all the necessary source code for the
   10.28 -Xen hypervisor and Linux guest OSes.
   10.29 -
   10.30 -To get the newest changes to the repository, run:
   10.31 -# cd xeno-1.2.bk
   10.32 -# bk pull
   10.33 -
   10.34 -
   10.35 -Build Xen
   10.36 -=============================
   10.37 -
   10.38 -Hint: To see how to build Xen and all the control tools, inspect the
   10.39 -tools/misc/xen-clone script in the BK repository. This script can be
   10.40 -used to clone the repository and perform a full build.
   10.41 -
   10.42 -To build Xen manually:
   10.43 -
   10.44 -# cd xeno-1.2.bk/xen
   10.45 -# make clean
   10.46 -# make
   10.47 -
   10.48 -This will (should) produce a file called 'xen' in the current
   10.49 -directory.  This is the ELF 32-bit LSB executable file of Xen.  You
   10.50 -can also find a gzipped version, named 'xen.gz'.
   10.51 -
   10.52 -To install the built files on your server under /usr, type 'make
   10.53 -install' at the root of the BK repository. You will need to be root to
   10.54 -do this!
   10.55 -
   10.56 -Hint: There is also a 'make dist' rule which copies built files to an
   10.57 -install directory just outside the BK repo; if this suits your setup,
   10.58 -go for it.
   10.59 -
   10.60 -
   10.61 -Build Linux as a Xen guest OS
   10.62 -==============================
   10.63 -
   10.64 -This is a little more involved since the repository only contains a
   10.65 -"sparse" tree -- this is essentially an 'overlay' on a standard linux
   10.66 -kernel source tree. It contains only those files currently 'in play'
   10.67 -which are either modified versions of files in the vanilla linux tree,
   10.68 -or brand new files specific to the Xen port.
   10.69 -
   10.70 -So, first you need a vanilla linux-2.4.24 tree, which can be downloaded from:
   10.71 -http://www.kernel.org/pub/linux/kernel/v2.4
   10.72 -
   10.73 -Then:
   10.74 -  # mv linux-2.4.24.tar.gz /xeno-1.2.bk
   10.75 -  # cd /xeno-1.2.bk
   10.76 -  # tar -zxvf linux-2.4.24.tar.gz
   10.77 -
   10.78 -You'll find a new directory 'linux-2.4.24' which contains all
   10.80 -the vanilla Linux 2.4.24 kernel source code.
   10.80 -
   10.81 -Hint: You should choose the vanilla linux kernel tree that has the
   10.82 -same version as the "sparse" tree.
   10.83 -
   10.84 -Next, you need to 'overlay' this sparse tree on the full vanilla Linux
   10.85 -kernel tree:
   10.86 -
   10.87 -  # cd /xeno-1.2.bk/xenolinux-2.4.24-sparse
   10.88 -  # ./mkbuildtree ../linux-2.4.24
   10.89 -
   10.90 -Finally, rename the buildtree since it is now a 'xenolinux' buildtree. 
   10.91 -
   10.92 -  # cd /xeno-1.2.bk
   10.93 -  # mv linux-2.4.24 xenolinux-2.4.24
   10.94 -
   10.95 -Now that the buildtree is there, you can build the xenolinux kernel.
   10.96 -The default configuration should work fine for most people (use 'make
   10.97 -oldconfig') but you can customise using one of the other config tools
   10.98 -if you want.
   10.99 -
  10.100 -  # cd /xeno-1.2.bk/xenolinux-2.4.24
  10.101 -  # ARCH=xen make oldconfig   { or menuconfig, or xconfig, or config }  
  10.102 -  # ARCH=xen make dep
  10.103 -  # ARCH=xen make bzImage
  10.104 -
  10.105 -Assuming the build works, you'll end up with
  10.106 -/xeno-1.2.bk/xenolinux-2.4.24/arch/xen/boot/xenolinux.gz. This is the
  10.107 -gzipped XenoLinux kernel image.
  10.108 -
  10.109 -
  10.110 -Build the Domain Control Tools
  10.111 -==============================
  10.112 -
  10.113 -Under '/xeno-1.2.bk/tools', there are three sub-directories:
  10.114 -'balloon', 'xc' and 'misc', each containing
  10.115 -a group of tools. You can enter any of these sub-directories
  10.116 -and type 'make' to compile the corresponding group of tools.
  10.117 -Or you can type 'make' under '/xeno-1.2.bk/tools' to compile
  10.118 -all the tools.
  10.119 -
  10.120 -In order to compile the control-interface library in 'xc' you must
  10.121 -have zlib and development headers installed. Also you will need at
  10.122 -least Python v2.2. 
  10.123 -
  10.124 -'make install' in the tools directory will place executables and
  10.125 -libraries in /usr/bin and /usr/lib. You will need to be root to do this!
  10.126 -
  10.127 -As noted earlier, 'make dist' installs files to a local 'install'
  10.128 -directory just outside the BK repository. These files will then need
  10.129 -to be installed manually onto the server.
  10.130 -
  10.131 -The Example Scripts
  10.132 -===================
  10.133 -
  10.134 -The scripts in tools/examples/ are generally useful for
  10.135 -administering a Xen-based system.  You can install them by running
  10.136 -'make install' in that directory.
  10.137 -
  10.138 -The python scripts (*.py) are the main tools for controlling
  10.139 -Xen domains.
  10.140 -
  10.141 -'defaults' and 'democd' are example configuration files for starting
  10.142 -new domains.
  10.143 -
  10.144 -'xendomains' is a Sys-V style init script for starting and stopping
  10.145 -Xen domains when the system boots / shuts down.
  10.146 -
  10.147 -These will be discussed below in more detail.
  10.148 -
  10.149 -
  10.150 -Installation
  10.151 -==============================
  10.152 -
  10.153 -First:
  10.154 -# cp /xeno-1.2.bk/xen/xen.gz /boot/xen.gz
  10.155 -# cp /xeno-1.2.bk/xenolinux-2.4.24/arch/xen/boot/xenolinux.gz /boot/xenolinux.gz
  10.156 -
  10.157 -Second, you must have 'GNU Grub' installed. Then you need to edit
  10.158 -the Grub configuration file '/boot/grub/menu.lst'.
  10.159 -
  10.160 -A typical Grub menu option might look like:
  10.161 -
  10.162 -title Xen 1.2 / XenoLinux 2.4.24
  10.163 -        kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1 noht
  10.164 -        module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
  10.165 -
  10.166 -The first line specifies which Xen image to use, and what command line
  10.167 -arguments to pass to Xen. In this case we set the maximum amount of
  10.168 -memory to allocate to domain0, and enable serial I/O at 115200 baud.
  10.169 -We could also disable smp support (nosmp) or disable hyper-threading
  10.170 -support (noht). If you have multiple network interfaces you can use
  10.171 -ifname=ethXX to select which one to use. If your network card is
  10.172 -unsupported, use ifname=dummy.
  10.173 -
  10.174 -The second line specifies which XenoLinux image to use, and the
  10.175 -standard linux command line arguments to pass to the kernel. In this
  10.176 -case, we're configuring the root partition and stating that it should
  10.177 -(initially) be mounted read-only (normal practice). 
  10.178 -
  10.179 -The following is a list of command line arguments to pass to Xen:
  10.180 -
  10.181 - ignorebiostables Disable parsing of BIOS-supplied tables. This may
  10.182 -                  help with some chipsets that aren't fully supported
  10.183 -                  by Xen. If you specify this option then ACPI tables are
  10.184 -                  also ignored, and SMP support is disabled.
  10.185 -
  10.186 - noreboot         Don't reboot the machine automatically on errors.
  10.187 -                  This is useful to catch debug output if you aren't
  10.188 -                  catching console messages via the serial line.
  10.189 -
  10.190 - nosmp            Disable SMP support.
  10.191 -                  This option is implied by 'ignorebiostables'.
  10.192 -
  10.193 - noacpi           Disable ACPI tables, which confuse Xen on some chipsets.
  10.194 -                  This option is implied by 'ignorebiostables'.
  10.195 -
  10.196 - watchdog         Enable NMI watchdog which can report certain failures.
  10.197 -
  10.198 - noht             Disable Hyperthreading.
  10.199 -
  10.200 - ifname=ethXX     Select which Ethernet interface to use.
  10.201 -
  10.202 - ifname=dummy     Don't use any network interface.
  10.203 -
  10.204 - com1=<baud>,DPS[,<io_base>,<irq>]
  10.205 - com2=<baud>,DPS[,<io_base>,<irq>]
  10.206 -                  Xen supports up to two 16550-compatible serial ports.
  10.207 -                  For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
  10.208 -                  9600-baud port, 8 data bits, no parity, 1 stop bit,
  10.209 -                  I/O port base 0x408, IRQ 5.
  10.210 -                  If the I/O base and IRQ are standard (com1:0x3f8,4;
  10.211 -                  com2:0x2f8,3) then they need not be specified.
  10.212 -
  10.213 - console=<specifier list>
  10.214 -                  Specify the destination for Xen console I/O.
  10.215 -                  This is a comma-separated list of, for example:
  10.216 -                   vga:  use VGA console and allow keyboard input
  10.217 -                   com1: use serial port com1
  10.218 -                   com2H: use serial port com2. Transmitted chars will
  10.219 -                          have the MSB set. Received chars must have
  10.220 -                          MSB set.
  10.221 -                   com2L: use serial port com2. Transmitted chars will
  10.222 -                          have the MSB cleared. Received chars must
  10.223 -                          have MSB cleared.
  10.224 -                  The latter two examples allow a single port to be
  10.225 -                  shared by two subsystems (eg. console and
  10.226 -                  debugger). Sharing is controlled by MSB of each
  10.227 -                  transmitted/received character.
  10.228 - [NB. Default for this option is 'com1,tty']
  10.229 -
  10.230 - dom0_mem=xxx     Set the maximum amount of memory for domain0.
  10.231 -
  10.232 - tbuf_size=xxx    Set the size of the per-cpu trace buffers, in pages
  10.233 -                  (default 1).  Note that the trace buffers are only
  10.234 -                  enabled in debug builds.  Most users can ignore
  10.235 -                  this feature completely.
  10.236 -
  10.237 - sched=xxx        Select the CPU scheduler Xen should use.  The current
  10.238 -                  possibilities are 'bvt', 'atropos' and 'rrobin'.  The
  10.239 -                  default is 'bvt'.  For more information see
  10.240 -                  Sched-HOWTO.txt.
  10.241 -
  10.242 -Boot into Domain 0
  10.243 -==============================
  10.244 -
  10.245 -Reboot your computer. After selecting the kernel to boot, stand back
  10.246 -and watch Xen boot, closely followed by "domain 0" running the
  10.247 -XenoLinux kernel.  Depending on which root partition you have assigned
  10.248 -to the XenoLinux kernel in the Grub configuration file, you can use the
  10.249 -corresponding username / password to log in.
  10.250 -
  10.251 -Once logged in, it should look just like any regular linux box. All
  10.252 -the usual tools and commands should work as usual.
  10.253 -
  10.254 -
  10.255 -Start New Domains
  10.256 -==============================
  10.257 -
  10.258 -You must be 'root' to start new domains.
  10.259 -
  10.260 -Make sure you have successfully configured at least one
  10.261 -physical network interface. Then:
  10.262 -
  10.263 -# xen_nat_enable
  10.264 -
  10.265 -The xc_dom_create.py program is useful for starting Xen domains.
  10.266 -You can specify configuration files using the -f switch on the command
  10.267 -line.  The default configuration is in /etc/xc/defaults.  You can
  10.268 -create custom versions of this to suit your local configuration.
  10.269 -
  10.270 -You can override the settings in a configuration file using command
  10.271 -line arguments to xc_dom_create.py.  However, you may find it simplest
  10.272 -to create a separate configuration file for each domain you start.
  10.273 -
  10.274 -xc_dom_create.py will print the local TCP port to which you should
  10.275 -connect to perform console I/O. A suitable console client is provided
  10.276 -by the Python module xenctl.console_client: running this module from
  10.277 -the command line with <host> and <port> parameters will start a
  10.278 -terminal session. This module is also installed as /usr/bin/xencons,
  10.279 -from a copy in tools/misc/xencons.  An alternative to manually running
  10.280 -a terminal client is to specify '-c' to xc_dom_create.py, or add
  10.281 -'auto_console=True' to the defaults file. This will cause
  10.282 -xc_dom_create.py to automatically become the console terminal after
  10.283 -starting the domain.
  10.284 -
  10.285 -Boot-time output will be directed to this console by default, because
  10.286 -the console name is tty0. It is also possible to log in via the
  10.287 -virtual console --- once again, your normal startup scripts will work
  10.288 -as normal (e.g., by running mingetty on tty1-7).  The device node to
  10.289 -which the virtual console is attached can be configured by specifying
  10.290 -'xencons=' on the OS command line: 
  10.291 - 'xencons=off' --> disable virtual console
  10.292 - 'xencons=tty' --> attach console to /dev/tty1 (tty0 at boot-time)
  10.293 - 'xencons=ttyS' --> attach console to /dev/ttyS0
  10.294 -
  10.295 -
  10.296 -Manage Running Domains
  10.297 -==============================
  10.298 -
  10.299 -You can see a list of existing domains with:
  10.300 -# xc_dom_control.py list
  10.301 -
  10.302 -In order to stop a domain, you use:
  10.303 -# xc_dom_control.py stop <domain_id>
  10.304 -
  10.305 -To shutdown a domain cleanly use:
  10.306 -# xc_dom_control.py shutdown <domain_id>
  10.307 -
  10.308 -To destroy a domain immediately:
  10.309 -# xc_dom_control.py destroy <domain_id>
  10.310 -
  10.311 -There are other more advanced options, including pinning domains to
  10.312 -specific CPUs and saving / resuming domains to / from disk files.  To
  10.313 -get more information, run the tool without any arguments:
  10.314 -# xc_dom_control.py
  10.315 -
  10.316 -There is more information available in the Xen README files, the
  10.317 -VBD-HOWTO and the contributed FAQ / HOWTO documents on the web.
  10.318 -
  10.319 -
  10.320 -Other Control Tasks using Python
  10.321 -================================
  10.322 -
  10.323 -A Python module 'Xc' is installed as part of the tools-install
  10.324 -process. This can be imported, and an 'xc object' instantiated, to
  10.325 -provide access to privileged command operations:
  10.326 -
  10.327 -# import Xc
  10.328 -# xc = Xc.new()
  10.329 -# dir(xc)
  10.330 -# help(xc.domain_create)
  10.331 -
  10.332 -In this way you can see that the class 'xc' contains useful
  10.333 -documentation for you to consult.
  10.334 -
  10.335 -A further package of useful routines (xenctl) is also installed:
  10.336 -
  10.337 -# import xenctl.utils
  10.338 -# help(xenctl.utils)
  10.339 -
  10.340 -You can use these modules to write your own custom scripts or you can
  10.341 -customise the scripts supplied in the Xen distribution.
  10.342 -
  10.343 -
  10.344 -Automatically start / stop domains at boot / shutdown
  10.345 -=====================================================
  10.346 -
  10.347 -A Sys-V style init script for RedHat systems is provided in
  10.348 -tools/examples/xendomains.  When you run 'make install' in that
  10.349 -directory, it should be automatically copied to /etc/init.d/.  You can
  10.350 -then enable it using the chkconfig command, e.g.:
  10.351 -
  10.352 -# chkconfig --add xendomains
  10.353 -
  10.354 -By default, this will start the boot-time domains in runlevels 3, 4
  10.355 -and 5.  To specify that a domain should start at boot-time, place its
  10.356 -configuration file (or a link to it) under /etc/xc/auto/.
  10.357 -
  10.358 -The script will also stop ALL domains when the system is shut down,
  10.359 -even domains that it did not start originally.
  10.360 -
  10.361 -You can also use the "service" command (part of the RedHat standard
  10.362 -distribution) to run this script manually, e.g:
  10.363 -
  10.364 -# service xendomains start
  10.365 -
  10.366 -Starts all the domains with config files under /etc/xc/auto/.
  10.367 -
  10.368 -# service xendomains stop
  10.369 -
  10.370 -Shuts down ALL running Xen domains.
    11.1 --- a/docs/pdb.txt	Wed Jun 09 12:41:57 2004 +0000
    11.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
    11.3 @@ -1,273 +0,0 @@
    11.4 -Pervasive Debugging 
    11.5 -===================
    11.6 -
    11.7 -Alex Ho (alex.ho at cl.cam.ac.uk)
    11.8 -
    11.9 -Introduction
   11.10 -------------
   11.11 -
   11.12 -The pervasive debugging project is leveraging Xen to 
   11.13 -debug distributed systems.  We have added a gdb stub
   11.14 -to Xen to allow for remote debugging of both Xen and
   11.15 -guest operating systems.  More information about the
   11.16 -pervasive debugger is available at: http://www.cl.cam.ac.uk/netos/pdb
   11.17 -
   11.18 -
   11.19 -Implementation
   11.20 ---------------
   11.21 -
   11.22 -The gdb stub communicates with gdb running over a serial line.
   11.23 -The main entry point is pdb_handle_exception() which is invoked
   11.24 -from:    pdb_key_pressed()    ('D' on the console)
   11.25 -         do_int3_exception()  (interrupt 3: breakpoint exception)
   11.26 -         do_debug()           (interrupt 1: debug exception)
   11.27 -
   11.28 -This accepts characters from the serial port and passes gdb
   11.29 -commands to pdb_process_command() which implements the gdb stub
   11.30 -interface.  This file draws heavily from the kgdb project and
   11.31 -sample gdbstub provided with gdb.
   11.32 -
   11.33 -The stub can examine registers, single step and continue, and
   11.34 -read and write memory (in Xen, a domain, or a Linux process'
   11.35 -address space).  The debugger does not currently trace the 
   11.36 -current process, so all bets are off if a context switch occurs
   11.37 -in the domain.
   11.38 -
   11.39 -
   11.40 -Setup
   11.41 ------
   11.42 -
   11.43 - +-------+ telnet +-----------+ serial +-------+ 
   11.44 - |  GDB  |--------|  nsplitd  |--------|  Xen  |
   11.45 - +-------+        +-----------+        +-------+ 
   11.46 -
   11.47 -To run pdb, Xen must be appropriately configured and 
   11.48 -a suitable serial interface attached to the target machine.
   11.49 -GDB and nsplitd can run on the same machine.
   11.50 -
   11.51 -Xen Configuration
   11.52 -
   11.53 -  Add the "pdb=xxx" option to your Xen boot command line
   11.54 -  where xxx is one of the following values:
   11.55 -     com1    gdb stub should communicate on com1
   11.56 -     com1H   gdb stub should communicate on com1 (with high bit set)
   11.57 -     com2    gdb stub should communicate on com2
   11.58 -     com2H   gdb stub should communicate on com2 (with high bit set)
   11.59 -
   11.60 -  Symbolic debugging information is quite helpful too:
   11.61 -  xeno.bk/xen/arch/i386/Rules.mk
   11.62 -    add -g to CFLAGS to compile Xen with symbols
   11.63 -  xeno.bk/xenolinux-2.4.24-sparse/arch/xen/Makefile
   11.64 -    add -g to CFLAGS to compile Linux with symbols
   11.65 -
   11.66 -  You may also want to consider dedicating a register to the
   11.67 -  frame pointer (disable the -fomit-frame-pointer compile flag).
   11.68 -
   11.69 -  When booting Xen and domain 0, look for the console text 
   11.70 -  "Initializing pervasive debugger (PDB)" just before DOM0 starts up.
   11.71 -
   11.72 -Serial Port Configuration
   11.73 -
   11.74 -  pdb expects to communicate with gdb using the serial port.  Since 
   11.75 -  this port is often shared with the machine's console output, pdb can
   11.76 -  discriminate its communication by setting the high bit of each byte.
   11.77 -
   11.78 -  A new tool has been added to the source tree which splits 
   11.79 -  the serial output from a remote machine into two streams: 
   11.80 -  one stream (without the high bit) is the console and 
   11.81 -  one stream (with the high bit stripped) is the pdb communication.
   11.82 -
   11.83 -  See:  xeno.bk/tools/nsplitd
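  The splitting rule is simple enough to sketch in a few lines of
  Python (an assumption about nsplitd's behaviour based on the
  description above, not code taken from its source):

```python
# Split one serial byte stream into two: console bytes (high bit
# clear, passed through) and debugger bytes (high bit set on the
# wire, stripped on output).

def split_serial(data):
    console = bytes(b for b in data if b < 0x80)
    debugger = bytes(b & 0x7F for b in data if b >= 0x80)
    return console, debugger

# "ok" for the console, one gdb-protocol '$' byte (sent with MSB set).
console, debugger = split_serial(b"ok" + bytes([0x80 | ord("$")]))
print(console, debugger)
```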
   11.84 -
   11.85 -  nsplitd configuration
   11.86 -  ---------------------
   11.87 -  hostname$ more /etc/xinetd.d/nsplit
   11.88 -  service nsplit1
   11.89 -  {
   11.90 -        socket_type             = stream
   11.91 -        protocol                = tcp
   11.92 -        wait                    = no
   11.93 -        user                    = wanda
   11.94 -        server                  = /usr/sbin/in.nsplitd
   11.95 -        server_args             = serial.cl.cam.ac.uk:wcons00
   11.96 -        disable                 = no
   11.97 -        only_from               = 128.232.0.0/17 127.0.0.1
   11.98 -  }
   11.99 -
  11.100 -  hostname$ egrep 'wcons00|nsplit1' /etc/services
  11.101 -  wcons00         9600/tcp        # Wanda remote console
  11.102 -  nsplit1         12010/tcp       # Nemesis console splitter ports.
  11.103 -
  11.104 -  Note: nsplitd was originally written for the Nemesis project
  11.105 -  at Cambridge.
  11.106 -
   11.107 -  After nsplitd accepts a connection on <port> (12010 in the above
   11.108 -  example), it starts listening on port <port + 1>.  Characters sent
   11.109 -  to <port + 1> have the high bit set before transmission, and the
   11.110 -  high bit is stripped from characters received on that port.
  11.111 -
  11.112 -  You can connect to the nsplitd using
  11.113 -  'tools/xenctl/lib/console_client.py <host> <port>'
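The high-bit multiplexing that nsplitd performs can be sketched as follows (an illustrative Python sketch of the scheme described above, not the actual nsplitd source):

```python
def demux(stream):
    """Split a serial byte stream into console and pdb channels.

    Bytes with the high bit clear are console output; bytes with the
    high bit set belong to the pdb/gdb channel and have the bit
    stripped before delivery, mirroring the splitting nsplitd does.
    """
    console, pdb = bytearray(), bytearray()
    for b in stream:
        if b & 0x80:
            pdb.append(b & 0x7F)   # strip high bit -> gdb protocol byte
        else:
            console.append(b)      # plain console character
    return bytes(console), bytes(pdb)
```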
  11.114 -
  11.115 -GDB 6.0
  11.116 -  pdb has been tested with gdb 6.0.  It should also work with
  11.117 -  earlier versions.
  11.118 -
  11.119 -
  11.120 -Usage
  11.121 ------
  11.122 -
  11.123 -1. Boot Xen and Linux
  11.124 -2. Interrupt Xen by pressing 'D' at the console
  11.125 -   You should see the console message: 
  11.126 -   (XEN) pdb_handle_exception [0x88][0x101000:0xfc5e72ac]
  11.127 -   At this point Xen is frozen and the pdb stub is waiting for gdb commands 
  11.128 -   on the serial line.
  11.129 -3. Attach with gdb
  11.130 -   (gdb) file xeno.bk/xen/xen
  11.131 -   Reading symbols from xeno.bk/xen/xen...done.
  11.132 -   (gdb) target remote <hostname>:<port + 1>              /* contact nsplitd */
  11.133 -   Remote debugging using serial.srg:12131
  11.134 -   continue_cpu_idle_loop () at current.h:10
  11.135 -   warning: shared library handler failed to enable breakpoint
  11.136 -   (gdb) break __enter_scheduler
  11.137 -   Breakpoint 1 at 0xfc510a94: file schedule.c, line 330.
  11.138 -   (gdb) cont
  11.139 -   Continuing.
  11.140 -
  11.141 -   Program received signal SIGTRAP, Trace/breakpoint trap.
  11.142 -   __enter_scheduler () at schedule.c:330
  11.143 -   (gdb) step
  11.144 -   (gdb) step
  11.145 -   (gdb) print next            /* the variable prev has been optimized away! */
  11.146 -   $1 = (struct task_struct *) 0x0
  11.147 -   (gdb) delete
  11.148 -   Delete all breakpoints? (y or n) y
  11.149 -4. You can add additional symbols to gdb
  11.150 -   (gdb) add-sym xenolinux-2.4.24/vmlinux
  11.151 -   add symbol table from file "xenolinux-2.4.24/vmlinux" at
  11.152 -   (y or n) y
  11.153 -   Reading symbols from xenolinux-2.4.24/vmlinux...done.
  11.154 -   (gdb) x/s cpu_vendor_names[0]
  11.155 -   0xc01530d2 <cpdext+62898>:	 "Intel"
  11.156 -   (gdb) break free_uid
  11.157 -   Breakpoint 2 at 0xc0012250
  11.158 -   (gdb) cont
  11.159 -   Continuing.                                  /* run a command in domain 0 */
  11.160 -
  11.161 -   Program received signal SIGTRAP, Trace/breakpoint trap.
  11.162 -   free_uid (up=0xbffff738) at user.c:77
  11.163 -
  11.164 -   (gdb) print *up
  11.165 -   $2 = {__count = {counter = 0}, processes = {counter = 135190120}, files = {
  11.166 -       counter = 0}, next = 0x395, pprev = 0xbffff878, uid = 134701041}
  11.167 -   (gdb) finish
  11.168 -   Run till exit from #0  free_uid (up=0xbffff738) at user.c:77
  11.169 -
  11.170 -   Program received signal SIGTRAP, Trace/breakpoint trap.
  11.171 -   release_task (p=0xc2da0000) at exit.c:51
  11.172 -   (gdb) print *p
  11.173 -   $3 = {state = 4, flags = 4, sigpending = 0, addr_limit = {seg = 3221225472},
  11.174 -     exec_domain = 0xc016a040, need_resched = 0, ptrace = 0, lock_depth = -1, 
  11.175 -     counter = 1, nice = 0, policy = 0, mm = 0x0, processor = 0, 
  11.176 -     cpus_runnable = 1, cpus_allowed = 4294967295, run_list = {next = 0x0, 
  11.177 -       prev = 0x0}, sleep_time = 18995, next_task = 0xc017c000, 
  11.178 -     prev_task = 0xc2f94000, active_mm = 0x0, local_pages = {next = 0xc2da0054,
  11.179 -       prev = 0xc2da0054}, allocation_order = 0, nr_local_pages = 0, 
  11.180 -     ...
  11.181 -5. To resume Xen, enter the "continue" command to gdb.
  11.182 -   This sends the packet $c#63 along the serial channel.
  11.183 -
  11.184 -   (gdb) cont
  11.185 -   Continuing.
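The checksum in $c#63 follows the gdb remote serial protocol: the payload bytes are summed modulo 256 and written as two hex digits ('c' is ASCII 0x63). A minimal sketch of the framing:

```python
def gdb_packet(payload):
    """Frame a gdb remote-protocol packet: $<payload>#<checksum>.

    The checksum is the sum of the payload bytes modulo 256,
    rendered as two lowercase hex digits.
    """
    csum = sum(payload.encode()) % 256
    return "$%s#%02x" % (payload, csum)
```

So gdb_packet("c") produces "$c#63", the continue packet mentioned above.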
  11.186 -
  11.187 -Debugging Multiple Domains & Processes
  11.188 ---------------------------------------
  11.189 -
  11.190 -pdb supports debugging multiple domains & processes.  You can switch
  11.191 -between different domains and processes within domains and examine
  11.192 -variables in each.
  11.193 -
   11.194 -The pdb context identifies the current debug target.  It is stored
   11.195 -in the Xen variable pdb_ctx and defaults to Xen itself.
  11.196 -
  11.197 -   target    pdb_ctx.domain    pdb_ctx.process
  11.198 -   ------    --------------    ---------------
  11.199 -    xen           -1                 -1
  11.200 -  guest os      0,1,2,...            -1
  11.201 -   process      0,1,2,...          0,1,2,...
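The table above can be read as a small decision rule, where -1 acts as a wildcard. A hypothetical sketch (pdb_target is not a real pdb function, just an illustration of the table):

```python
def pdb_target(domain, process):
    """Classify a (pdb_ctx.domain, pdb_ctx.process) pair.

    (-1, -1) targets Xen itself, (n, -1) targets guest OS n,
    and (n, m) targets process m within guest n.
    """
    if domain == -1:
        return "xen"
    return "guest os" if process == -1 else "process"
```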
  11.202 -
   11.203 -Unfortunately, gdb doesn't understand debugging multiple processes
   11.204 -simultaneously (we're working on it), so at present you are limited
  11.205 -to just one set of symbols for symbolic debugging.  When debugging
  11.206 -processes, pdb currently supports just Linux 2.4.
  11.207 -
  11.208 -   define setup
  11.209 -      file xeno-clone/xeno.bk/xen/xen
  11.210 -      add-sym xeno-clone/xenolinux-2.4.25/vmlinux
  11.211 -      add-sym ~ach61/a.out
  11.212 -   end
  11.213 -
  11.214 -
  11.215 -1. Connect with gdb as before.  A couple of Linux-specific 
  11.216 -   symbols need to be defined.
  11.217 -
  11.218 -   (gdb) target remote <hostname>:<port + 1>              /* contact nsplitd */
  11.219 -   Remote debugging using serial.srg:12131
  11.220 -   continue_cpu_idle_loop () at current.h:10
  11.221 -   warning: shared library handler failed to enable breakpoint
  11.222 -   (gdb) set pdb_pidhash_addr = &pidhash
  11.223 -   (gdb) set pdb_init_task_union_addr = &init_task_union
  11.224 -
  11.225 -2. The pdb context defaults to Xen and we can read Xen's memory.
  11.226 -   An attempt to access domain 0 memory fails.
  11.227 -  
  11.228 -   (gdb) print pdb_ctx
  11.229 -   $1 = {valid = 0, domain = -1, process = -1, ptbr = 1052672}
  11.230 -   (gdb) print hexchars
  11.231 -   $2 = "0123456789abcdef"
  11.232 -   (gdb) print cpu_vendor_names
  11.233 -   Cannot access memory at address 0xc0191f80
  11.234 -
  11.235 -3. Now we change to domain 0.  In addition to changing pdb_ctx.domain,
   11.236 -   we need to set pdb_ctx.valid to signal the change to pdb.
  11.237 -   It is now possible to examine Xen and Linux memory.
  11.238 -
  11.239 -   (gdb) set pdb_ctx.domain=0
  11.240 -   (gdb) set pdb_ctx.valid=1
  11.241 -   (gdb) print hexchars
  11.242 -   $3 = "0123456789abcdef"
  11.243 -   (gdb) print cpu_vendor_names
  11.244 -   $4 = {0xc0158b46 "Intel", 0xc0158c37 "Cyrix", 0xc0158b55 "AMD", 
  11.245 -     0xc0158c3d "UMC", 0xc0158c41 "NexGen", 0xc0158c48 "Centaur", 
  11.246 -     0xc0158c50 "Rise", 0xc0158c55 "Transmeta"}
  11.247 -
  11.248 -4. Now change to a process within domain 0.  Again, we need to
  11.249 -   change pdb_ctx.valid in addition to pdb_ctx.process.
  11.250 -
  11.251 -   (gdb) set pdb_ctx.process=962
  11.252 -   (gdb) set pdb_ctx.valid =1
  11.253 -   (gdb) print pdb_ctx
  11.254 -   $1 = {valid = 0, domain = 0, process = 962, ptbr = 52998144}
  11.255 -   (gdb) print aho_a
  11.256 -   $2 = 20
  11.257 -
  11.258 -5. Now we can read the same variable from another process running
  11.259 -   the same executable in another domain.
  11.260 -
  11.261 -   (gdb) set pdb_ctx.domain=1
  11.262 -   (gdb) set pdb_ctx.process=1210
  11.263 -   (gdb) set pdb_ctx.valid=1
  11.264 -   (gdb) print pdb_ctx
  11.265 -   $3 = {valid = 0, domain = 1, process = 1210, ptbr = 70574080}
  11.266 -   (gdb) print aho_a
  11.267 -   $4 = 27
  11.268 -
  11.269 -
  11.270 -
  11.271 -
  11.272 -Changes
  11.273 --------
  11.274 -
  11.275 -04.02.05 aho creation
  11.276 -04.03.31 aho add description on debugging multiple domains