Known limitations and work in progress
======================================

The "xenctl" tool used for controlling domains is still rather clunky
and not very user-friendly. In particular, it should have an option to
create and start a domain with all the necessary parameters set from a
named XML file.

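As a rough sketch of what such a file might contain -- the element
names here are our invention, not an existing xenctl schema -- a
minimal domain description could look like:

    <domain name="testdom">
      <memory_kb>65536</memory_kb>
      <kernel>/boot/xenolinux.gz</kernel>
      <args>root=/dev/sda1 ro</args>
      <vif id="0" mac="aa:00:00:00:00:01"/>
      <vbd device="sda1" mode="rw"/>
    </domain>
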
The Java xenctl tool is really just a frontend for a bunch of C tools
named xi_* that do the actual work of talking to Xen and setting stuff
up. Some local users prefer to drive the xi_ tools directly, typically
from simple shell scripts. These tools are even less user-friendly
than xenctl, but it's arguably clearer what's going on.

There's also a nice web-based interface for controlling domains that
uses apache/tomcat. Unfortunately, this has fallen out of sync with
the underlying tools, so it is currently not built by default and
needs fixing.

The current Virtual Firewall Router (VFR) implementation in the
snapshot tree is very rudimentary, and in particular lacks the IP
port-space sharing across domains that we've proposed, which promises
to be a better alternative to NAT. There's a complete new
implementation under development which also provides much better
logging and auditing. The current network scheduler is just simple
round-robin between domains, without any rate limiting or rate
guarantees. Dropping in a new scheduler should be straightforward, and
is planned as part of the VFRv2 work package.

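For illustration, simple round-robin packet scheduling amounts to no
more than the following sketch in C (the structures and names are
hypothetical, not Xen's actual internals):

    #include <stddef.h>

    /* Hypothetical per-domain transmit queue, linked in a circular list. */
    struct packet { struct packet *next; /* payload omitted */ };
    struct domain_queue {
        struct domain_queue *next;   /* next domain in the circular list */
        struct packet       *head;   /* this domain's pending packets */
    };

    static struct domain_queue *last_serviced;   /* assumed non-NULL */

    /* Scan forward from the last serviced domain and transmit one packet
     * from the first domain that has any queued. Note there is no rate
     * limiting and no guarantee, exactly as described above. */
    static struct packet *next_packet(void)
    {
        struct domain_queue *dq = last_serviced;
        do {
            dq = dq->next;
            if (dq->head != NULL) {
                struct packet *p = dq->head;
                dq->head = p->next;
                last_serviced = dq;
                return p;
            }
        } while (dq != last_serviced);
        return NULL;   /* all queues empty */
    }
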
Another area that needs further work is the interface between Xen and
the domain0 user space where the various XenoServer control daemons
run. The current interface is somewhat ad hoc, making use of various
/proc/xeno entries that take a random assortment of arguments. We
intend to reimplement this to provide a consistent means of feeding
back accounting and logging information to the control daemon. We
should also provide all domains with a read/write virtual console
interface -- currently for domains >1 it is output only.

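To give a flavour of the current ad hoc style, a control daemon today
ends up doing something like the following (the entry name and line
format below are illustrative assumptions, not a documented
interface):

    #include <stdio.h>

    /* Illustrative only: read per-domain accounting lines from a
     * hypothetical /proc/xeno entry and hand them to the daemon. */
    static void poll_domain_info(void)
    {
        char line[256];
        FILE *f = fopen("/proc/xeno/domains", "r");   /* assumed entry */
        if (f == NULL)
            return;
        while (fgets(line, sizeof(line), f) != NULL)
            fputs(line, stdout);   /* a real daemon would parse and log */
        fclose(f);
    }
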
There are also a number of memory-management hacks that didn't make
this release: we have plans for a "universal buffer cache" that
enables otherwise unused system memory to be used by domains in a
read-only fashion. We also have plans for inter-domain shared memory
to enable high-performance bulk transport for cases where the usual
internal networking performance isn't good enough (e.g. communication
with an internal file server in another domain).

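As a sketch of the kind of bulk transport envisaged -- the layout and
names are our illustration, not a defined Xen interface -- a page
shared between two domains could carry a simple producer/consumer
ring:

    /* Hypothetical shared ring: one domain produces, the other consumes.
     * Indices only ever increase; slot index is taken modulo RING_SIZE. */
    #define RING_SIZE 64

    struct xfer_ring {
        volatile unsigned int prod;   /* written by the sender only */
        volatile unsigned int cons;   /* written by the receiver only */
        struct { unsigned long frame; unsigned int len; } slot[RING_SIZE];
    };

    /* Sender side: returns 0 if the ring is currently full. */
    static int ring_put(struct xfer_ring *r, unsigned long frame,
                        unsigned int len)
    {
        if (r->prod - r->cons == RING_SIZE)
            return 0;
        r->slot[r->prod % RING_SIZE].frame = frame;
        r->slot[r->prod % RING_SIZE].len   = len;
        r->prod++;   /* a real implementation needs a write barrier first */
        return 1;
    }
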
We also have plans to implement domain suspend/resume-to-file. This is
basically an extension to the current domain-building process that
enables domain0 to read out all of the domain's state and store it in
a file. There are complications here due to Xen's para-virtualised
design: since the physical machine memory pages available to the guest
OS are likely to be different when the OS is resumed, we need to
rewrite the page tables appropriately.

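The fix-up can be sketched as a translation pass over every saved
page-table entry, mapping the old machine frame number to the one
allocated at restore time (the 32-bit x86 PTE layout and the names
below are assumptions for illustration):

    #define PAGE_SHIFT 12
    #define PTE_FLAGS  0xfffUL   /* low bits of an x86 PTE hold flags */

    /* Hypothetical resume-time fix-up: rewrite one saved PTE so that it
     * points at the machine frame allocated for the resumed domain. */
    static unsigned long fix_pte(unsigned long old_pte,
                                 const unsigned long *old_to_new_mfn)
    {
        unsigned long old_mfn = old_pte >> PAGE_SHIFT;
        return (old_to_new_mfn[old_mfn] << PAGE_SHIFT)
               | (old_pte & PTE_FLAGS);
    }
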
We have the equivalent of balloon-driver functionality to control a
domain's memory usage, enabling a domain to give back unused pages to
Xen. This needs proper documentation, and perhaps a way for domain0 to
signal to a domain that it must reduce its memory footprint, rather
than relying on the domain volunteering pages.

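In outline, the guest side of this mechanism allocates pages it can
spare and hands their frames back to Xen; the helper names below are
placeholders, since documenting the real interface is precisely the
outstanding work:

    /* Placeholders for the guest's page allocator and the hypercall
     * wrapper that returns a page to Xen. */
    void *alloc_page_from_guest_os(void);
    void  give_page_to_xen(void *page);

    /* Hypothetical guest-side "balloon inflate": release up to n pages.
     * Returns how many were actually given back. */
    static unsigned long balloon_inflate(unsigned long n)
    {
        unsigned long given;
        for (given = 0; given < n; given++) {
            void *page = alloc_page_from_guest_os();
            if (page == NULL)
                break;   /* guest has no more spare pages */
            give_page_to_xen(page);
        }
        return given;
    }
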
The current disk scheduler is rather simplistic (batch round-robin),
and could be replaced by e.g. Cello if we have QoS isolation problems.
For most things it seems to work OK, but there's currently no service
differentiation or weighting.

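Concretely, "batch round-robin" here means visiting each domain's
request queue in turn and issuing a fixed batch of requests before
moving on; every domain gets the same batch, hence no weighting. A
sketch (structures and helper hypothetical):

    #include <stddef.h>

    #define BATCH 8   /* requests issued per domain per visit */

    struct disk_req  { struct disk_req *next; };
    struct dom_diskq { struct dom_diskq *next; struct disk_req *head; };

    void issue_to_driver(struct disk_req *r);   /* placeholder */

    /* One pass of batch round-robin over a circular list of domains. */
    static void service_round(struct dom_diskq *first)
    {
        struct dom_diskq *q = first;
        do {
            int n;
            for (n = 0; n < BATCH && q->head != NULL; n++) {
                struct disk_req *r = q->head;
                q->head = r->next;
                issue_to_driver(r);
            }
            q = q->next;
        } while (q != first);
    }
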
Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
the scheduling is far from smart -- domains are statically assigned to
a CPU when they are created, in round-robin fashion. The scheduler
needs to be modified so that, before going idle, a logical CPU looks
for work on other run queues (particularly those of the other logical
CPU on the same physical CPU).

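The intended change can be sketched as follows: before halting, an
idle logical CPU scans the other run queues, trying its hyperthread
sibling first, since work migrated from there stays on the same
physical CPU (the names and the two-threads-per-package assumption are
ours):

    struct task;                                /* opaque here */
    extern int num_cpus;
    struct task *try_steal_from(int cpu);       /* placeholder */

    /* Hypothetical idle-time work stealing for an SMT machine. */
    struct task *steal_work(int cpu)
    {
        int other, sibling = cpu ^ 1;   /* assumes 2 logical CPUs/package */
        struct task *t = try_steal_from(sibling);
        if (t != NULL)
            return t;
        for (other = 0; other < num_cpus; other++) {
            if (other == cpu || other == sibling)
                continue;
            if ((t = try_steal_from(other)) != NULL)
                return t;
        }
        return NULL;   /* genuinely idle: safe to halt */
    }
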
Xen currently only supports uniprocessor guest OSes. We have designed
the Xen interface with MP guests in mind, and plan to build an MP
Linux guest in due course. Basically, an MP guest would consist of
multiple scheduling domains (one per CPU) sharing a single memory
protection domain. The only extra complexity this adds to the Xen VM
system is that when a page transitions from holding a page table or
page directory to being a writable page, we must ensure that no other
CPU still has the page in its TLB, to preserve memory-system
integrity. One other issue for supporting MP guests is that we'll need
some sort of CPU gang scheduler, which will require some research.

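The required check can be sketched as a TLB shootdown on the page-type
transition; the structures and flush primitive below are placeholders
for illustration:

    /* Hypothetical page metadata: which CPUs may have this frame in
     * their TLBs, and what the frame currently holds. */
    struct pfn_info { unsigned long cpu_mask; int type; };
    #define PGT_WRITABLE 1

    int  current_cpu(void);                    /* placeholder */
    void flush_tlb_mask(unsigned long mask);   /* placeholder IPI shootdown */

    /* Before a frame that held a page table/directory becomes writable,
     * flush it from every other CPU's TLB that may still map it. */
    void make_page_writable(struct pfn_info *page)
    {
        unsigned long others = page->cpu_mask & ~(1UL << current_cpu());
        if (others != 0)
            flush_tlb_mask(others);
        page->type = PGT_WRITABLE;   /* now safe to grant write access */
    }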