annotate TODO @ 2686:c6c0f98bf7d3

bitkeeper revision 1.1159.120.2 (4177c0c45rkuaBFhtYj9E1LEJ8ti4w)

Unify 2.4 and 2.6 balloon drivers, xm can now control balloon in 2.4 domains.
More random docs work.
author mwilli2@equilibrium.research
date Thu Oct 21 13:59:32 2004 +0000 (2004-10-21)
parents 3ec9f0898ed8
children 7266d3bd3b1f ad13896e776c


Known limitations and work in progress
======================================

The current Xen Virtual Firewall Router (VFR) implementation in the
snapshot tree is very rudimentary; in particular, it lacks the RSIP
IP port-space sharing across domains that provides a better
alternative to NAT. A complete new implementation is under
development, with much better logging and auditing. For now, if you
want NAT, see the xen_nat_enable scripts and get domain0 to do it
for you.
iap10@773 13
There are also a number of memory-management enhancements that didn't
make this release: we have plans for a "universal buffer cache" that
enables otherwise-unused system memory to be used by domains in a
read-only fashion. We also have plans for inter-domain shared memory
to enable high-performance bulk transport for cases where the usual
internal networking performance isn't good enough (e.g. communication
with an internal file server in another domain).
iap10@736 21
We have the equivalent of balloon-driver functionality to control a
domain's memory usage, enabling a domain to give unused pages back to
Xen. This needs proper documentation, and perhaps a way for domain0
to signal to a domain that it must reduce its memory footprint,
rather than relying on the domain volunteering pages (see the section
on the improved control interface).
iap10@736 28
The current disk scheduler is rather simplistic (batch round robin),
and could be replaced by e.g. Cello if we have QoS isolation
problems. For most things it seems to work OK, but there's currently
no service differentiation or weighting.
iap10@736 33
Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
the scheduling is far from smart: domains are statically assigned to
a CPU when they are created, in round-robin fashion. The scheduler
needs to be modified so that, before going idle, a logical CPU looks
for work on other run queues (particularly those on the same physical
CPU).
iap10@736 40
Xen currently supports only uniprocessor guest OSes. We have designed
the Xen interface with MP guests in mind, and plan to build an MP
Linux guest in due course. Basically, an MP guest would consist of
multiple scheduling domains (one per CPU) sharing a single
memory-protection domain. The only extra complexity for the Xen VM
system is that when a page transitions from holding a page table or
page directory to being an ordinary writable page, we must ensure
that no other CPU still has the page in its TLB, in order to preserve
memory-system integrity. The other issue for supporting MP guests is
that we'll need some sort of CPU gang scheduler, which will require
some research.