ia64/xen-unstable

changeset 736:afe5451d9f97

bitkeeper revision 1.437 (3f69d8adjFeOpChvZoY4yoiFD1epWA)

new README's and "documentation".
author iap10@labyrinth.cl.cam.ac.uk
date Thu Sep 18 16:09:17 2003 +0000 (2003-09-18)
parents a3017cd62e5d
children 21edd206e08a
files .rootkeys README README.CD TODO
line diff
     1.1 --- a/.rootkeys	Thu Sep 18 13:27:45 2003 +0000
     1.2 +++ b/.rootkeys	Thu Sep 18 16:09:17 2003 +0000
     1.3 @@ -5,6 +5,7 @@ 3ddb79c9_hgSp-gsQm8HqWM_9W3B_A BitKeeper
     1.4  3eb788d6Kleck_Cut0ouGneviGzliQ Makefile
     1.5  3f5ef5a24IaQasQE2tyMxrfxskMmvw README
     1.6  3f5ef5a2l4kfBYSQTUaOyyD76WROZQ README.CD
     1.7 +3f69d8abYB1vMyD_QVDvzxy5Zscf1A TODO
     1.8  3e6377b24eQqYMsDi9XrFkIgTzZ47A tools/balloon/Makefile
     1.9  3e6377d6eiFjF1hHIS6JEIOFk62xSA tools/balloon/README
    1.10  3e6377dbGcgnisKw16DPCaND7oGO3Q tools/balloon/balloon.c
     2.1 --- a/README	Thu Sep 18 13:27:45 2003 +0000
     2.2 +++ b/README	Thu Sep 18 16:09:17 2003 +0000
     2.3 @@ -59,26 +59,27 @@ on Xen: Linux 2.4, Windows XP, and NetBS
     2.4  
     2.5  The Linux 2.4 port (currently Linux 2.4.22) works very well -- we
     2.6  regularly use it to host complex applications such as PostgreSQL,
     2.7 -Apache, BK servers etc. It runs all applications we've tried.  We
     2.8 -refer to our version of Linux ported to run on Xen as "XenoLinux",
     2.9 -through really it's just standard Linux ported to a new virtual CPU
    2.10 -architecture that we call xeno-x86 (abbreviated to just "xeno").
    2.11 +Apache, BK servers etc. It runs all user-space applications we've
    2.12 +tried.  We refer to our version of Linux ported to run on Xen as
    2.13 +"XenoLinux", though really it's just standard Linux ported to a new
    2.14 +virtual CPU architecture that we call xeno-x86 (abbreviated to just
    2.15 +"xeno").
    2.16  
    2.17  Unfortunately, the NetBSD port has stalled due to lack of man
    2.18  power. We believe most of the hard stuff has already been done, and
    2.19  are hoping to get the ball rolling again soon. In hindsight, a FreeBSD
    2.20 -4 port might have been more useful to the community. 
    2.21 +4 port might have been more useful to the community. Any volunteers? :-)
    2.22  
    2.23  The Windows XP port is nearly finished. It's running user space
    2.24  applications and is generally in pretty good shape thanks to some hard
    2.25  work by the team over the summer.  Of course, there are issues with
    2.26  releasing this code to others.  We should be able to release the
    2.27 -source and binaries to anyone else that's signed the Microsoft
    2.28 -academic source license, which these days has very reasonable
    2.29 -terms. We are in discussions with Microsoft about the possibility of
    2.30 -being able to make binary releases to a larger user
    2.31 -community. Obviously, there are issues with product activation in this
    2.32 -environment and such like, which need to be thought through.
    2.33 +source and binaries to anyone that has signed the Microsoft academic
    2.34 +source license, which these days has very reasonable terms. We are in
    2.35 +discussions with Microsoft about the possibility of being able to make
    2.36 +binary releases to a larger user community. Obviously, there are
    2.37 +issues with product activation in this environment and such like,
    2.38 +which need to be thought through.
    2.39  
    2.40  So, for the moment, you only get to run multiple copies of Linux on
    2.41  Xen, but we hope this will change before too long.  Even running
    2.42 @@ -96,85 +97,6 @@ We've successfully booted over 128 copie
    2.43  (a dual CPU hyperthreaded Xeon box) but we imagine that it would be
    2.44  more normal to use some smaller number, perhaps 10-20.
    2.45  
    2.46 -Known limitations and work in progress
    2.47 -======================================
    2.48 -
    2.49 -The "xenctl" tool is still rather clunky and not very user
    2.50 -friendly. In particular, it should have an option to create and start
    2.51 -a domain with all the necessary parameters set from a named xml file.
    2.52 -
    2.53 -The java xenctl tool is really just a frontend for a bunch of C tools
    2.54 -named xi_* that do the actual work of talking to Xen and setting stuff
    2.55 -up. Some local users prefer to drive the xi_ tools directly, typically
    2.56 -from simple shell scripts. These tools are even less user friendly
    2.57 -than xenctl but its arguably clearer what's going on.
    2.58 -
    2.59 -There's also a web based interface for controlling domains that uses
    2.60 -apache/tomcat, but it has fallen out of sync with respect to the
    2.61 -underlying tools, so doesn't always work as expected and needs to be
    2.62 -fixed.
    2.63 -
    2.64 -The current Virtual Firewall Router (VFR) implementation in the
    2.65 -snapshot tree is very rudimentary, and in particular, lacks the IP
    2.66 -port-space sharing across domains that we've proposed that promises to
    2.67 -provide a better alternative to NAT.  There's a complete new
    2.68 -implementation under development which also supports much better
    2.69 -logging and auditing support.  The current network scheduler is just
    2.70 -simple round-robin between domains, without any rate limiting or rate
    2.71 -guarantees. Dropping in a new scheduler should be straightforward, and
    2.72 -is planned as part of the VFRv2 work package.
    2.73 -
    2.74 -Another area that needs further work is the interface between Xen and
    2.75 -domain0 user space where the various XenoServer control daemons run.
    2.76 -The current interface is somewhat ad-hoc, making use of various
    2.77 -/proc/xeno entries that take a random assortment of arguments. We
    2.78 -intend to reimplement this to provide a consistent means of feeding
    2.79 -back accounting and logging information to the control daemon.
    2.80 -
    2.81 -There's also a number of memory management hacks that didn't make this
    2.82 -release: We have plans for a "universal buffer cache" that enables
    2.83 -otherwise unused system memory to be used by domains in a read-only
    2.84 -fashion. We also have plans for inter-domain shared-memory to enable
    2.85 -high-performance bulk transport for cases where the usual internal
    2.86 -networking performance isn't good enough (e.g. communication with a
    2.87 -internal file server on another domain).
    2.88 -
    2.89 -We also have plans to implement domain suspend/resume-to-file. This is
    2.90 -basically an extension to the current domain building process to
    2.91 -enable domain0 to read out all of the domain's state and store it in a
    2.92 -file. There are complications here due to Xen's para-virtualised
    2.93 -design, whereby since the physical machine memory pages available to
    2.94 -the guest OS are likely to be different when the OS is resumed, we
    2.95 -need to re-write the page tables appropriately. 
    2.96 -
    2.97 -We have the equivalent of balloon driver functionality to control
    2.98 -domain's memory usage, enabling a domain to give back unused pages to
    2.99 -Xen. This needs properly documenting, and perhaps a way of domain0
   2.100 -signalling to a domain that it requires it to reduce its memory
   2.101 -footprint, rather than just the domain volunteering.
   2.102 -
   2.103 -The current disk scheduler is rather simplistic (batch round robin),
   2.104 -and could be replaced by e.g. Cello if we have QoS isolation
   2.105 -problems. For most things it seems to work OK, but there's currently
   2.106 -no service differentiation or weighting.
   2.107 -
   2.108 -Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
   2.109 -the scheduling is far from smart -- domains are currently statically
   2.110 -assigned to a CPU when they are created (in a round robin fashion).
   2.111 -The scheduler needs to be modified such that before going idle a
   2.112 -logical CPU looks for work on other run queues (particularly on the
   2.113 -same physical CPU). 
   2.114 -
   2.115 -Xen currently only supports uniprocessor guest OSes. We have designed
   2.116 -the Xen interface with MP guests in mind, and plan to build an MP
   2.117 -Linux guest in due course. Basically, an MP guest would consist of
   2.118 -multiple scheduling domains (one per CPU) sharing a single memory
   2.119 -protection domain. The only extra complexity for the Xen VM system is
   2.120 -ensuring that when a page transitions from holding a page table or
   2.121 -page directory to a write-able page, we must ensure that no other CPU
   2.122 -still has the page in its TLB to ensure memory system integrity.  One
   2.123 -other issue for supporting MP guests is that we'll need some sort of
   2.124 -CPU gang scheduler, which will require some research.
   2.125  
   2.126  
   2.127  Hardware support
   2.128 @@ -208,4 +130,6 @@ not recommended.
   2.129  
   2.130  
   2.131  Ian Pratt
   2.132 -9 Sep 2003
   2.133 \ No newline at end of file
   2.134 +9 Sep 2003
   2.135 +
   2.136 +
     3.1 --- a/README.CD	Thu Sep 18 13:27:45 2003 +0000
     3.2 +++ b/README.CD	Thu Sep 18 16:09:17 2003 +0000
     3.3 @@ -9,7 +9,7 @@
     3.4  
     3.5   XenDemoCD 1.0 rc1 
     3.6   University of Cambridge Computer Laboratory
     3.7 - 31 Aug 2003
     3.8 + 18 Sep 2003
     3.9  
    3.10   http://www.cl.cam.ac.uk/netos/xen
    3.11  
    3.12 @@ -49,37 +49,35 @@ configuration to do this), hit a key on 
    3.13  line to pull up the Grub boot menu, then select one of the four boot
    3.14  options:
    3.15  
    3.16 - Xen / linux-2.4.22 X using DHCP
    3.17 - Xen / linux-2.4.22 X using cmdline IP config
    3.18 - Xen / linux-2.4.22 text using DHCP
    3.19 - Xen / linux-2.4.22 text using cmdline IP config
    3.20 + Xen / linux-2.4.22 
    3.21 + Xen / linux-2.4.22 using cmdline IP configuration
    3.22   linux-2.4.22
    3.23 - linux-2.4.22 single
    3.24 - linux-2.4.20-rc1 single
    3.25  
    3.26 -The last three options are plain linux kernels that run on the bare
    3.27 -machine, and are included simply to help diagnose driver compatibility
    3.28 +The last option is a plain linux kernel that runs on the bare machine,
    3.29 +and is included simply to help diagnose driver compatibility
    3.30  problems. If you are going for a command line IP config, hit "e" at
    3.31  the grub menu, then edit the "ip=" parameters to reflect your setup
    3.32  e.g. "ip=<ipaddr>::<gateway>:<netmask>::eth0:off". It shouldn't be
    3.33  necessary to set either the nfs server or hostname
    3.34  parameters. Alternatively, once xenolinux has booted you can login and
    3.35 -setup networking with ifconfig and route in the normal way.
    3.36 +setup networking with 'dhclient' or 'ifconfig' and 'route' in the
    3.37 +normal way.
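For concreteness, an edited "ip=" parameter in the documented <ipaddr>::<gateway>:<netmask>::eth0:off form might look like the fragment below; the addresses are invented placeholders, not values from the CD:

```
 ip=192.168.1.50::192.168.1.1:255.255.255.0::eth0:off
```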
    3.38  
    3.39  To make things easier for yourself, it's worth trying to arrange for an
    3.40  IP address which is the first in a sequential range of free IP
    3.41 -addresses.  Its useful to give each VM instance its own IP address
    3.42 -(though it is possible to do NAT or use private addresses etc), and
    3.43 -the configuration files on the CD allocate IP addresses sequentially
    3.44 -for subsequent domains unless told otherwise.
    3.45 +addresses.  It's useful to give each VM instance its own public IP
    3.46 +address (though it is possible to do NAT or use private addresses
    3.47 +etc), and the configuration files on the CD allocate IP addresses
    3.48 +sequentially for subsequent domains unless told otherwise.
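The sequential allocation described above can be sketched in a few lines of shell; the base address here is an invented example, standing in for whatever address domain 0 was given:

```shell
#!/bin/sh
# Sketch of sequential IP allocation: domain N gets the domain-0
# address plus N in the final octet (assumes no octet overflow).
base_ip=192.168.1.50        # invented example address for domain 0
domid=2                     # domain whose address we want
prefix=${base_ip%.*}        # network part, e.g. 192.168.1
last=${base_ip##*.}         # final octet, e.g. 50
echo "${prefix}.$((last + domid))"
```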
    3.49  
    3.50  After selecting the kernel to boot, stand back and watch Xen boot,
    3.51  closely followed by "domain 0" running the xenolinux kernel. The boot
    3.52  messages are also sent to the serial line (the baud rate can be set on
    3.53 -the Xen cmdline), which can be very useful for debugging should
    3.54 -anything important scroll off the screen. Xen's startup messages will
    3.55 -look quite familiar as much of the hardware initialisation (SMP boot,
    3.56 -apic setup) and device drivers are derived from Linux.
    3.57 +the Xen cmdline, but defaults to 115200), which can be very useful for
    3.58 +debugging should anything important scroll off the screen. Xen's
    3.59 +startup messages will look quite familiar as much of the hardware
    3.60 +initialisation (SMP boot, apic setup) and device drivers are derived
    3.61 +from Linux.
    3.62  
    3.63  If everything is well, you should see the linux rc scripts start a
    3.64  bunch of standard services including sshd.  Login on the console or
    3.65 @@ -88,19 +86,26 @@ via ssh::
    3.66   password: xendemo      xendemo
    3.67  
    3.68  Once logged in, it should look just like any regular linux box. All
    3.69 -the usual tools and commands should work as per usual.  You can start
    3.70 -an xserver with 'startx' if you elected not to start one at boot.  The
    3.71 -current rc scripts also starts an Apache web server, which you should
    3.72 -be able to issue requests to on port 80.  If you want to browse the
    3.73 -Xen / Xenolinux source, it's all located under /local, complete with
    3.74 -BitKeeper repository.
    3.75 +the usual tools and commands should work as usual.  It's probably
    3.76 +best to start by configuring networking, either with 'dhclient' or
    3.77 +manually via ifconfig and route, remembering to edit /etc/resolv.conf
    3.78 +if you want DNS.
    3.79  
    3.80 -Because CD's aren't exactly known for their high performance, the
    3.81 -machine will likely feel rather sluggish. You may wish to go ahead and
    3.82 -install Xen/XenoLinux on your hard drive, either dropping Xen and the
    3.83 -XenoLinux kernel down onto a pre-existing Linux distribution, or using
    3.84 -the file systems from the CD (which are based on RH7.2). See the
    3.85 -installation instructions later in this document.
    3.86 +You can start an xserver with 'startx'. It defaults to a conservative
    3.87 +1024x768, but you can edit the script for higher resolutions.  The CD
    3.88 +contains a load of standard software. You should be able to start
    3.89 +Apache, PostgreSQL, Mozilla etc. in the normal way, but because
    3.90 +everything is running off CD the performance will be very sluggish and
    3.91 +you may run out of memory for the 'tmpfs' file system.  You may wish
    3.92 +to go ahead and install Xen/XenoLinux on your hard drive, either
    3.93 +dropping Xen and the XenoLinux kernel down onto a pre-existing Linux
    3.94 +distribution, or using the file systems from the CD (which are based
    3.95 +on RH9). See the installation instructions later in this document.
    3.96 +
    3.97 +If you want to browse the Xen / Xenolinux source, it's all located
    3.98 +under /local/src, complete with BitKeeper repository. We've also
    3.99 +included source code and configuration information for the various
   3.100 +benchmarks we used in the SOSP paper.
   3.101  
   3.102  
   3.103  Starting other domains
   3.104 @@ -113,21 +118,81 @@ lives in /local/bin and uses /etc/xenctl
   3.105  configuration. Run 'xenctl' without any arguments to get a help
   3.106  message.
   3.107  
   3.108 -To create a new domain, using the same xenolinux kernel image as used
   3.109 -for domain0, the next consecutive IP address, and the same CD-based
   3.110 -file system, type:
   3.111 -
   3.112 - xenctl new -n give_this_domain_a_name
   3.113 +The first thing to do is to set up a window in which you will receive
   3.114 +console output from other domains. Console output will arrive as UDP
   3.115 +packets destined for 169.254.1.0, so it's necessary to set up an alias
   3.116 +on eth0. The easiest way to do this is to run:
   3.117  
   3.118 -domctl will return printing the domain id that has been allocated to
   3.119 -the new domain (probably '1' if this is the first domain to be fired
   3.120 -up). If you're running off the CD this will take a while, as there's
   3.121 -huge piles of Java goop grinding away...  Then, fire up the domain:
   3.122 +  xen_nat_enable
   3.123  
   3.124 - xenctl start -n<domid>                       
   3.125 +This also inserts a few NAT rules into "domain0", in case you'll be
   3.126 +starting other domains without their own IP addresses. Alternatively,
   3.127 +just do "ifconfig eth0:0 169.254.1.0 up". NB: The intention is that in
   3.128 +future Xen will do NAT itself (actually RSIP), but this is part of a
   3.129 +larger work package that isn't stable enough to release.
   3.130  
   3.131 -You should see your domain boot and be able to ping and ssh into it as
   3.132 -before.
   3.133 +Next, run the xen UDP console displayer:
   3.134 +
   3.135 +  xen_read_console &
   3.136 +
   3.137 +
   3.138 +The tool used for starting and controlling domains is 'xenctl'. It's a
   3.139 +Java front end to various underlying internal tools written in C
   3.140 +(xi_*). Running off CD, it seems to take an age to start...
   3.141 +
   3.142 +xenctl uses /etc/xenctl.xml as its default configuration. The /etc
   3.143 +directory contains two different configs depending on whether you want
   3.144 +to use NAT, or multiple sequential external IPs (it's possible to
   3.145 +override any of the parameters on the command line, if you want to set
   3.146 +specific IPs etc).
   3.147 +
   3.148 +The default file supports NAT. To change to use multiple IPs:
   3.149 + cp /etc/xenctl.xml-publicip /etc/xenctl.xml
   3.150 +
   3.151 +A sequence of commands must be given to xenctl to start a
   3.152 +domain. First a new domain must be created, which requires specifying
   3.153 +the initial memory allocation, the kernel image to use, and the kernel
   3.154 +command line. As well as the root file system details, you'll need to
   3.155 +set the IP address on the command line: since Xen currently doesn't
   3.156 +support a virtual console for domains >1, you won't be able to log in to
   3.157 +your new domain unless you've got networking configured and an sshd
   3.158 +running! (using dhcp for new domains should work too.)
   3.159 +
   3.160 +After creating the domain, xenctl must be used to grant the domain
   3.161 +access to other resources such as physical or virtual disk partitions.
   3.162 +Then, the domain must be started.
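As a purely illustrative sketch, a script read by xenctl follows the create/grant/start shape described above. The fragment below is not the contents of the real /etc/xen-mynewdom, and the option spellings are placeholders (the text confirms a 'domain new' command exists; run 'xenctl' with no arguments for the real syntax):

```
# Hypothetical xenctl script sketch (placeholders throughout):
domain new ...     # create: memory size, kernel image, kernel cmdline (with IP)
...                # grant access to physical/virtual disk partitions
domain start       # boot the new domain
```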
   3.163 +
   3.164 +These commands can be entered manually, but for convenience, xenctl
   3.165 +will also read them from a script and infer which domain number you're
   3.166 +referring to (-nX). To use the sample script:
   3.167 +
   3.168 + xenctl script -f/etc/xen-mynewdom
   3.169 +
   3.170 +You should see the domain booting on your xen_read_console window.
   3.171 +
   3.172 +The xml defaults start another domain running off the CD, using a
   3.173 +separate ram based file system for mutable data in root (just like
   3.174 +domain 0). 
   3.175 +
   3.176 +The new domain is started with a '4' on the kernel command line to
   3.177 +tell 'init' to go to runlevel 4 rather than the default of 3. This is
   3.178 +done simply to suppress a bunch of harmless error messages that would
   3.179 +otherwise occur when the new (unprivileged) domain tried to access
   3.180 +physical hardware resources while setting the hwclock, system font,
   3.181 +gpm etc.
   3.182 +
   3.183 +After it's booted, you should be able to ssh into your new domain. If
   3.184 +you went for a NATed address, from domain 0 you should be able to ssh
   3.185 +into '169.254.1.X' where X is the domain number.  If you ran the
   3.186 +xen_nat_enable script, a bunch of port redirects have been installed
   3.187 +to enable you to ssh in to other domains remotely.  To access the new
   3.188 +virtual machine remotely, use:
   3.189 +
   3.190 + ssh -p2201 root@IP.address.Of.Domain0  # use 2202 for domain 2 etc.
   3.191 +
   3.192 +If you configured the new domain with its own IP address, you should
   3.193 +be able to ssh into it directly.
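The redirect pattern above (2201 for domain 1, 2202 for domain 2, and so on) is just 2200 plus the domain id; a one-line shell sketch, with an invented placeholder for domain0's address:

```shell
#!/bin/sh
# Port for reaching domain N through domain0's NAT redirects: 2200 + N.
domid=2
port=$((2200 + domid))
echo "ssh -p${port} root@dom0.example.org"   # dom0.example.org is a placeholder
```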
   3.194 +
   3.195  
   3.196  "xenctl list" provides status information about running domains,
   3.197  though is currently only allowed to be run by domain 0. It accesses
   3.198 @@ -137,13 +202,8 @@ kill it nicely by sending a shutdown eve
   3.199  terminate, or blow the sucker away with extreme prejudice. 
   3.200  
   3.201  If you want to configure the new domain differently, type 'xenctl' to
   3.202 -get a list of arguments, e.g. use the "-4" option to set a diffrent
   3.203 -IPv4 address. If you haven't any spare IP addresses on your network,
   3.204 -you can configure other domains with link-local addresses
   3.205 -(169.254/16), but then you'll only be able to access domains other
   3.206 -than domain0 from within the machine (they won't be externally
   3.207 -routeable). To automate this, there's an /etc/xenctl-linklocal.xml
   3.208 -which you can copy in place of /etc/xenctl.xml
   3.209 +get a list of arguments, e.g. at the 'xenctl domain new' command line
   3.210 +use the "-4" option to set a different IPv4 address. 
   3.211  
   3.212  xenctl can be used to set the new kernel's command line, and hence
   3.213  determine what it uses as a root file system etc. Although the default
   3.214 @@ -183,11 +243,12 @@ create". The virtual disk can then optio
   3.215  by a virtual block device associated with another domain, and even
   3.216  used as a boot device.
   3.217  
   3.218 -Both virtual disks and real partitions should only be shared domains
   3.219 -in a read-only fashion otherwise the linux kernels will obviously get
   3.220 -very confused if the file system structure changes underneath them!
   3.221 -If you want read-write sharing, export the directory to other domains
   3.222 -via NFS. 
   3.223 +Both virtual disks and real partitions should only be shared between
   3.224 +domains in a read-only fashion, otherwise the linux kernels will
   3.225 +obviously get very confused if the file system structure changes
   3.226 +underneath them (having the same partition mounted rw twice is a
   3.227 +sure-fire way to cause irreparable damage)!  If you want read-write
   3.228 +sharing, export the directory to other domains via NFS from domain0.
   3.229  
   3.230  
   3.231  About The Xen Demo CD
   3.232 @@ -203,7 +264,7 @@ bootloader.
   3.233  
   3.234  This is a bootable CD that loads Xen, and then a Linux 2.4.22 OS image
   3.235  ported to run on Xen. The CD contains a copy of a file system based on
   3.236 -the RedHat 7.2 distribution that is able to run directly off the CD
   3.237 +the RedHat 9 distribution that is able to run directly off the CD
   3.238  ("live ISO"), using a "tmpfs" RAM-based file system for root (/etc
   3.239  /var etc). Changes you make to the tmpfs will obviously not be
   3.240  persistent across reboots!
   3.241 @@ -221,18 +282,19 @@ various memory management enhancements t
   3.242  communication and sharing of memory pages between OSs. We'll release
   3.243  newer snapshots as required, in the form of a BitKeeper repository
   3.244  hosted on http://xen.bkbits.net (follow instructions from the project
   3.245 -home page).  We're obviously grateful to receive any
   3.246 -bug fixes or other code you can contribute.
   3.247 +home page).  We're obviously grateful to receive any bug fixes or
   3.248 +other code you can contribute. We suggest you join the
   3.249 +xen-devel@lists.sourceforge.net mailing list.
   3.250  
   3.251  
   3.252  Installing from the CD
   3.253  ----------------------
   3.254  
   3.255  If you're installing Xen/XenoLinux onto an existing linux file system
   3.256 -distribution, its typically necessary to copy the Xen VMM
   3.257 -(/boot/image.gz) and XenoLinux kernels (/boot/xenolinux.gz) then
   3.258 -modify the Grub config (/boot/grub/menu.lst or /boot/grub/grub.conf)
   3.259 -on the target system.
   3.260 +distribution, just copy the Xen VMM (/boot/image.gz) and XenoLinux
   3.261 +kernels (/boot/xenolinux.gz), then modify the Grub config
   3.262 +(/boot/grub/menu.lst or /boot/grub/grub.conf) on the target system.
   3.263 +It should work on pretty much any distribution.
   3.264  
   3.265  Xen is a "multiboot" standard boot image. Despite being a 'standard',
   3.266  few boot loaders actually support it. The only two we know of are
   3.267 @@ -240,10 +302,11 @@ Grub, and our modified version of linux 
   3.268  XenoBoot CD -- PlanetLab have adopted the same boot CD approach).
   3.269  
   3.270  If you need to install grub on your system, you can do so either by
   3.271 -building the Grub source tree /usr/local/grub-0.93-iso9660-splashimage
   3.272 -or by copying over all the files in /boot/grub and then running
   3.273 -/sbin/grub and following the usual grub documentation. You'll then
   3.274 -need to configure the Grub config file.
   3.275 +building the Grub source tree
   3.276 +/usr/local/src/grub-0.93-iso9660-splashimage or by copying over all
   3.277 +the files in /boot/grub and then running /sbin/grub and following the
   3.278 +usual grub documentation. You'll then need to edit the Grub
   3.279 +config file.
   3.280  
   3.281  A typical Grub menu option might look like:
   3.282  
   3.283 @@ -261,9 +324,7 @@ there are various options to select whic
   3.284  The second line specifies which xenolinux image to use, and the
   3.285  standard linux command line arguments to pass to the kernel. In this
   3.286  case, we're configuring the root partition and stating that it should
   3.287 -be mounted read-only (normal practise). If the file system isn't
   3.288 -configured for DHCP then we'd probably want to configure that on the
   3.289 -kernel command line too.
   3.290 +be mounted read-only (normal practice). 
   3.291  
   3.292  If we were booting with an initial ram disk (initrd), then this would
   3.293  require a second "module" line, with no arguments.
   3.294 @@ -295,8 +356,8 @@ good idea too.
   3.295  
   3.296  To install the usr file system, copy the file system from CD on /usr,
   3.297  though leaving out the "XenDemoCD" and "boot" directories:
   3.298 -  cd /usr && cp -a doc games include lib local root share tmp X11R6 bin dict etc html kerberos libexec man sbin src /mnt/usr/
   3.299 -  
   3.300 +  cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
   3.301 +
   3.302  If you intend to boot off these file systems (i.e. use them for
   3.303  domain0), then you probably want to copy the /usr/boot directory on
   3.304  the cd over the top of the current symlink to /boot on your root
   3.305 @@ -315,20 +376,20 @@ on the keyboard to get a list of support
   3.306  
   3.307  If you have a crash you'll likely get a crash dump containing an EIP
   3.308  (PC), which along with an 'objdump -d image' can be useful in
   3.309 -figuring out what's happened. 
   3.310 -
   3.311 +figuring out what's happened.  Debug a xenolinux image just as you
   3.312 +would any other Linux kernel.
   3.313  
   3.314  Description of how the XenDemoCD boots
   3.315  --------------------------------------
   3.316  
   3.317  1. Grub is used to load Xen, a xenolinux kernel, and an initrd (initial
   3.318 -ram disk). [The source of the version of Grub used is in /usr/local/]
   3.319 +ram disk). [The source of the version of Grub used is in /usr/local/src]
   3.320  
   3.321  2. the init=/linuxrc command line causes linux to execute /linuxrc in
   3.322  the initrd. 
   3.323  
   3.324  3. the /linuxrc file attempts to mount the CD by trying the likely
   3.325 -locations /dev/hd[abcd].  
   3.326 +locations: /dev/hd[abcd].
   3.327  
   3.328  4. it then creates a 'tmpfs' file system and untars the
   3.329  'XenDemoCD/root.tar.gz' file into the tmpfs. This contains hopefully
   3.330 @@ -345,21 +406,25 @@ normally.
   3.331  Building your own version of the XenDemoCD
   3.332  ------------------------------------------
   3.333  
   3.334 -The filesystems on the CD are based heavily on Peter Anvin's
   3.335 -SuperRescue CD version 2.1.2, which takes its content from RedHat
   3.336 -7.2. Since Xen uses a "multiboot" image format, it was necessary to
   3.337 -change the bootloader from isolinux to Grub0.93 with Leonid
   3.338 -Lisovskiy's <lly@pisem.net> grub.0.93-iso9660.patch
   3.339 +The 'live ISO' version of RedHat is based heavily on Peter Anvin's
   3.340 +SuperRescue CD version 2.1.2 and J. McDaniel's Plan-B:
   3.341 +
   3.342 + http://www.kernel.org/pub/dist/superrescue/v2/
   3.343 + http://projectplanb.org/
   3.344 +
   3.345 +Since Xen uses a "multiboot" image format, it was necessary to change
   3.346 +the bootloader from isolinux to Grub0.93 with Leonid Lisovskiy's
   3.347 +<lly@pisem.net> grub.0.93-iso9660.patch
   3.348  
   3.349  The Xen Demo CD contains all of the build scripts that were used to
   3.350 -create it, so its possible to 'unpack' the current iso, modifiy it,
   3.351 +create it, so it is possible to 'unpack' the current iso, modify it,
   3.352  then build a new iso. The procedure for doing so is as follows:
   3.353  
   3.354  First, mount either the CD, or the iso image of the CD:
   3.355   
   3.356    mount /dev/cdrom /mnt/cdrom 
   3.357  or:
   3.358 -  mount -o loop xendemo-1.0beta.iso  /mnt/cdrom
   3.359 +  mount -o loop xendemo-1.0.iso  /mnt/cdrom
   3.360  
   3.361  cd to the directory you want to 'unpack' the iso into then run the
   3.362  unpack script:
     4.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.2 +++ b/TODO	Thu Sep 18 16:09:17 2003 +0000
     4.3 @@ -0,0 +1,84 @@
     4.4 +
     4.5 +
     4.6 +Known limitations and work in progress
     4.7 +======================================
     4.8 +
     4.9 +The "xenctl" tool used for controlling domains is still rather clunky
    4.10 +and not very user friendly. In particular, it should have an option to
    4.11 +create and start a domain with all the necessary parameters set from a
    4.12 +named xml file.
    4.13 +
    4.14 +The java xenctl tool is really just a frontend for a bunch of C tools
    4.15 +named xi_* that do the actual work of talking to Xen and setting stuff
    4.16 +up. Some local users prefer to drive the xi_ tools directly, typically
    4.17 +from simple shell scripts. These tools are even less user friendly
    4.18 +than xenctl but it's arguably clearer what's going on.
    4.19 +
    4.20 +There's also a nice web based interface for controlling domains that
    4.21 +uses apache/tomcat. Unfortunately, this has fallen out of sync with
    4.22 +respect to the underlying tools, so is currently not built by default
    4.23 +and needs fixing. 
    4.24 +
    4.25 +The current Virtual Firewall Router (VFR) implementation in the
    4.26 +snapshot tree is very rudimentary, and in particular, lacks the IP
    4.27 +port-space sharing across domains that we've proposed that promises to
    4.28 +provide a better alternative to NAT.  There's a complete new
    4.29 +implementation under development which also supports much better
    4.30 +logging and auditing support.  The current network scheduler is just
    4.31 +simple round-robin between domains, without any rate limiting or rate
    4.32 +guarantees. Dropping in a new scheduler should be straightforward, and
    4.33 +is planned as part of the VFRv2 work package.
    4.34 +
    4.35 +Another area that needs further work is the interface between Xen and
    4.36 +domain0 user space where the various XenoServer control daemons run.
    4.37 +The current interface is somewhat ad-hoc, making use of various
    4.38 +/proc/xeno entries that take a random assortment of arguments. We
    4.39 +intend to reimplement this to provide a consistent means of feeding
    4.40 +back accounting and logging information to the control daemon. Also,
    4.41 +we should provide all domains with a read/write virtual console
    4.42 +interface -- currently for domains >1 it is output only.
    4.43 +
    4.44 +There's also a number of memory management hacks that didn't make this
    4.45 +release: We have plans for a "universal buffer cache" that enables
    4.46 +otherwise unused system memory to be used by domains in a read-only
    4.47 +fashion. We also have plans for inter-domain shared-memory to enable
    4.48 +high-performance bulk transport for cases where the usual internal
    4.49 +networking performance isn't good enough (e.g. communication with an
    4.50 +internal file server on another domain).
    4.51 +
    4.52 +We also have plans to implement domain suspend/resume-to-file. This is
    4.53 +basically an extension to the current domain building process to
    4.54 +enable domain0 to read out all of the domain's state and store it in a
    4.55 +file. There are complications here due to Xen's para-virtualised
    4.56 +design, whereby since the physical machine memory pages available to
    4.57 +the guest OS are likely to be different when the OS is resumed, we
    4.58 +need to re-write the page tables appropriately. 
    4.59 +
    4.60 +We have the equivalent of balloon driver functionality to control
    4.61 +a domain's memory usage, enabling a domain to give back unused pages to
    4.62 +Xen. This needs properly documenting, and perhaps a way of domain0
    4.63 +signalling to a domain that it requires it to reduce its memory
    4.64 +footprint, rather than just the domain volunteering.
    4.65 +
    4.66 +The current disk scheduler is rather simplistic (batch round robin),
    4.67 +and could be replaced by e.g. Cello if we have QoS isolation
    4.68 +problems. For most things it seems to work OK, but there's currently
    4.69 +no service differentiation or weighting.
    4.70 +
    4.71 +Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
    4.72 +the scheduling is far from smart -- domains are currently statically
    4.73 +assigned to a CPU when they are created (in a round robin fashion).
    4.74 +The scheduler needs to be modified such that before going idle a
    4.75 +logical CPU looks for work on other run queues (particularly on the
    4.76 +same physical CPU). 
    4.77 +
    4.78 +Xen currently only supports uniprocessor guest OSes. We have designed
    4.79 +the Xen interface with MP guests in mind, and plan to build an MP
    4.80 +Linux guest in due course. Basically, an MP guest would consist of
    4.81 +multiple scheduling domains (one per CPU) sharing a single memory
    4.82 +protection domain. The only extra complexity for the Xen VM system is
    4.83 +ensuring that when a page transitions from holding a page table or
    4.84 +page directory to a write-able page, no other CPU still has the page
    4.85 +in its TLB, preserving memory system integrity.  One
    4.86 +other issue for supporting MP guests is that we'll need some sort of
    4.87 +CPU gang scheduler, which will require some research.