direct-io.hg

changeset 773:522d23c14a4f

bitkeeper revision 1.470 (3f77f25dLnNa4xk_syeLDfNYb2Woyw)

add examples, update TODO
author iap10@labyrinth.cl.cam.ac.uk
date Mon Sep 29 08:50:37 2003 +0000 (2003-09-29)
parents 5201797ee24a
children 907e5f55782d 9cdb7ea6e922
files .rootkeys TODO tools/control/Makefile tools/control/etc.exports-example
line diff
     1.1 --- a/.rootkeys	Mon Sep 29 00:48:36 2003 +0000
     1.2 +++ b/.rootkeys	Mon Sep 29 08:50:37 2003 +0000
     1.3 @@ -17,6 +17,7 @@ 3ec41f7ca6IBXDSe0HVcMPp3PPloOQ tools/con
     1.4  3f0d61da3O5gkcntbIOdUmN2-RcZbQ tools/control/doc/INSTALL-cmdline
     1.5  3eca6a96a31IwaKtkEa4jmzwTWlm8Q tools/control/doc/INSTALL-web
     1.6  3f0d61daCTHGCpQK0Brz3PAp80d_2Q tools/control/doc/USAGE-cmdline
     1.7 +3f77f25c4zdCalc5d0YnMGEnc9By-Q tools/control/etc.exports-example
     1.8  3f776bd12y6bW-wtcs6rD2qhdpT_Rw tools/control/examples/grub.conf-example
     1.9  3f776bd1RBu7Gnce6Bq9328QFUZBsw tools/control/examples/xen-mynewdom
    1.10  3eb781fcabCKRogwxJA3-jJKstw9Vg tools/control/examples/xenctl.xml
     2.1 --- a/TODO	Mon Sep 29 00:48:36 2003 +0000
     2.2 +++ b/TODO	Mon Sep 29 08:50:37 2003 +0000
     2.3 @@ -6,7 +6,8 @@ Known limitations and work in progress
     2.4  The "xenctl" tool used for controlling domains is still rather clunky
     2.5  and not very user-friendly. In particular, it should have an option to
     2.6  create and start a domain with all the necessary parameters set from a
     2.7 -named xml file.
     2.8 +named xml file. Update: the 'xenctl script' functionality combined
     2.9 +with the '-i' option to 'domain new' sort of does this (sketched below).
    2.10  
    2.11  The java xenctl tool is really just a frontend for a bunch of C tools
    2.12  named xi_* that do the actual work of talking to Xen and setting stuff
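
The 'xenctl script' / 'domain new -i' combination mentioned above can be
sketched roughly as follows. This is purely illustrative: the exact option
syntax is an assumption based on the names used in the TODO and on the
example files under tools/control/examples/, not on the tool's actual
usage message.

    # Create and start a domain, picking up its settings from the
    # xenctl.xml defaults file (assumed behaviour of the '-i' option):
    xenctl domain new -i

    # Or replay a canned sequence of commands, in the style of the
    # examples/xen-mynewdom script (argument syntax assumed):
    xenctl script xen-mynewdom
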
    2.13 @@ -17,26 +18,32 @@ than xenctl but its arguably clearer wha
    2.14  There's also a nice web based interface for controlling domains that
    2.15  uses apache/tomcat. Unfortunately, this has fallen out of sync with
    2.16  respect to the underlying tools, so is currently not built by default
    2.17 -and needs fixing. 
    2.18 +and needs fixing. It shouldn't be hard to bring it up to date.
    2.19  
    2.20 -The current Virtual Firewall Router (VFR) implementation in the
    2.21 -snapshot tree is very rudimentary, and in particular, lacks the IP
    2.22 -port-space sharing across domains that we've proposed that promises to
    2.23 -provide a better alternative to NAT.  There's a complete new
    2.24 -implementation under development which also supports much better
    2.25 -logging and auditing support.  The current network scheduler is just
    2.26 -simple round-robin between domains, without any rate limiting or rate
    2.27 -guarantees. Dropping in a new scheduler should be straightforward, and
    2.28 -is planned as part of the VFRv2 work package.
    2.29 +The current Xen Virtual Firewall Router (VFR) implementation in the
    2.30 +snapshot tree is very rudimentary, and in particular, lacks the RSIP
    2.31 +IP port-space sharing across domains that provides a better
    2.32 +alternative to NAT.  There's a complete new implementation under
    2.33 +development which also has much better logging and auditing
    2.34 +support. For now, if you want NAT, see the xen_nat_enable scripts and
    2.35 +get domain0 to do it for you (a minimal sketch follows this hunk).
    2.36 +
    2.37 +The current network scheduler is just simple round-robin between
    2.38 +domains, without any rate limiting or rate guarantees. Dropping in a
    2.39 +new scheduler is straightforward, and is planned as part of the
    2.40 +VFRv2 work package.
    2.41  
    2.42  Another area that needs further work is the interface between Xen and
    2.43  domain0 user space where the various XenoServer control daemons run.
    2.44  The current interface is somewhat ad-hoc, making use of various
    2.45  /proc/xeno entries that take a random assortment of arguments. We
    2.46  intend to reimplement this to provide a consistent means of feeding
    2.47 -back accounting and logging information to the control daemon. Also,
    2.48 -we should provide all domains with a read/write virtual console
    2.49 -interface -- currently for domains >1 it is output only.
    2.50 +back accounting and logging information to the control daemon, and
    2.51 +enabling control instructions to be sent the other way (e.g. domain 3:
    2.52 +reduce your memory footprint to 10000 pages. You have 1s to comply.)
    2.53 +We should also use the same interface to provide domains with a
    2.54 +read/write virtual console interface. The current implementation is
    2.55 +output only, though domain0 can use the VGA console read/write.
    2.56  
    2.57  There's also a number of memory management hacks that didn't make this
    2.58  release: We have plans for a "universal buffer cache" that enables
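
For the NAT route suggested in this hunk, having domain0 do the NAT
typically amounts to standard Linux masquerading. A minimal sketch,
assuming a Linux domain0 whose outbound interface is eth0; the interface
name, and whether this matches the actual xen_nat_enable scripts (not
shown in this changeset), are assumptions:

    # Allow domain0 to forward packets on behalf of the other domains.
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # Masquerade the other domains' traffic as it leaves via eth0.
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
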
    2.59 @@ -58,7 +65,8 @@ We have the equivalent of balloon driver
    2.60  domain's memory usage, enabling a domain to give back unused pages to
    2.61  Xen. This needs properly documenting, and perhaps a way of domain0
    2.62  signalling to a domain that it requires it to reduce its memory
    2.63 -footprint, rather than just the domain volunteering.
    2.64 +footprint, rather than just the domain volunteering (see section on
    2.65 +the improved control interface).
    2.66  
    2.67  The current disk scheduler is rather simplistic (batch round robin),
    2.68  and could be replaced by e.g. Cello if we have QoS isolation
    2.69 @@ -82,3 +90,22 @@ page directory to a write-able page, we 
    2.70  still has the page in its TLB to ensure memory system integrity.  One
    2.71  other issue for supporting MP guests is that we'll need some sort of
    2.72  CPU gang scheduler, which will require some research.
    2.73 +
    2.74 +Currently, the privileged domain0 can request access to the underlying
    2.75 +hardware. This is how we enable the VGA console and Xserver to run in
    2.76 +domain0. We are planning on extending this functionality to enable
    2.77 +other device drivers for 'low performance' devices to be run in
    2.78 +domain0, and then virtualized to other domains by domain0. This will
    2.79 +enable use of random PCMCIA and USB devices that we're unlikely to
    2.80 +ever get around to writing a Xen driver for.
    2.81 +
    2.82 +We'd also like to experiment with moving the network and block device
    2.83 +drivers out of Xen, and each into their own special domains that are
    2.84 +given access to the specific set of h/w resources they need to
    2.85 +operate.  This will provide some isolation against faulty device
    2.86 +drivers, potentially allowing them to be restarted on failure. There
    2.87 +may be more context switches incurred, but due to Xen's pipelined
    2.88 +asynchronous i/o interface we expect this overhead to be amortised.
    2.89 +This architecture would also allow device drivers to be easily
    2.90 +upgraded independent of Xen, which is necessary for our vision of Xen
    2.91 +as a next-gen BIOS replacement.
     3.1 --- a/tools/control/Makefile	Mon Sep 29 00:48:36 2003 +0000
     3.2 +++ b/tools/control/Makefile	Mon Sep 29 08:50:37 2003 +0000
     3.3 @@ -8,7 +8,7 @@ clean: clean-cmdline clean-web
     3.4  
     3.5  examples: FORCE
     3.6  	mkdir -p ../../../install/etc
     3.7 -	cp examples/xen* examples/grub.conf* ../../../install/etc/
     3.8 +	cp examples/xen* examples/*example ../../../install/etc/
     3.9  
    3.10  cmdline: FORCE
    3.11  	ant -buildfile build-cmdline.xml dist
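
The 'examples' target changed above simply stages the example configuration
files into the install tree and can be run on its own; the target name and
destination directory are as in the hunk, and the working directory is
assumed to be tools/control/:

    # Copy the example configs (now including etc.exports-example)
    # into ../../../install/etc/:
    make examples
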
     4.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.2 +++ b/tools/control/etc.exports-example	Mon Sep 29 08:50:37 2003 +0000
     4.3 @@ -0,0 +1,20 @@
     4.4 +/local/roots/root1	169.254.1.1(rw,no_root_squash)
     4.5 +/local/roots/root2      169.254.1.2(rw,no_root_squash)
     4.6 +/local/roots/root3      169.254.1.3(rw,no_root_squash)
     4.7 +/local/roots/root4      169.254.1.4(rw,no_root_squash)
     4.8 +/local/roots/root5      169.254.1.5(rw,no_root_squash)
     4.9 +/local/roots/root6      169.254.1.6(rw,no_root_squash)
    4.10 +/local/roots/root7      169.254.1.7(rw,no_root_squash)
    4.11 +/local/roots/root8      169.254.1.8(rw,no_root_squash)
    4.12 +
    4.13 +#/usr			169.254.1/24(ro,no_root_squash)
    4.14 +
    4.15 +/local/usrs/usr1	169.254.1.1(rw,no_root_squash)
    4.16 +/local/usrs/usr2        169.254.1.2(rw,no_root_squash)
    4.17 +/local/usrs/usr3        169.254.1.3(rw,no_root_squash)
    4.18 +/local/usrs/usr4        169.254.1.4(rw,no_root_squash)
    4.19 +/local/usrs/usr5        169.254.1.5(rw,no_root_squash)
    4.20 +/local/usrs/usr6        169.254.1.6(rw,no_root_squash)
    4.21 +/local/usrs/usr7        169.254.1.7(rw,no_root_squash)
    4.22 +/local/usrs/usr8        169.254.1.8(rw,no_root_squash)
    4.23 +
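
The new etc.exports-example above gives each of eight domains its own NFS
root and /usr filesystem, keyed on the link-local addresses 169.254.1.1
through 169.254.1.8. A minimal sketch of putting it to use; the NFS server
address 169.254.1.254 and the client-side mount points are assumptions for
illustration, not taken from the tree:

    # On the NFS server (domain0), after installing the file as /etc/exports:
    exportfs -ra        # (re)export everything listed in /etc/exports

    # On the guest at 169.254.1.1, mounting its /usr share:
    mount -t nfs 169.254.1.254:/local/usrs/usr1 /usr

    # The per-domain root filesystems would typically be mounted as NFS root
    # at boot via kernel parameters, e.g. (illustrative only):
    #   root=/dev/nfs nfsroot=169.254.1.254:/local/roots/root1 ip=169.254.1.1
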