direct-io.hg

view TODO @ 2299:8be25c10fa1e

bitkeeper revision 1.1159.42.4 (4124c9a2Di3cas2RmMeElljc94T5_A)

Merge scramble.cl.cam.ac.uk:/auto/groups/xeno/BK/xeno.bk
into scramble.cl.cam.ac.uk:/local/scratch/kaf24/xeno
author kaf24@scramble.cl.cam.ac.uk
date Thu Aug 19 15:39:14 2004 +0000 (2004-08-19)
parents 3ec9f0898ed8
children 7266d3bd3b1f ad13896e776c
Known limitations and work in progress
======================================
The current Xen Virtual Firewall Router (VFR) implementation in the
snapshot tree is very rudimentary; in particular, it lacks the RSIP
IP port-space sharing across domains that provides a better
alternative to NAT. A complete new implementation is under
development, which also adds much better logging and auditing.
For now, if you want NAT, see the xen_nat_enable scripts and
get domain0 to do it for you.
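The effect of having domain0 perform NAT can be approximated with standard
Linux masquerading. The following is a hypothetical sketch under the
assumption that eth0 is domain0's external interface and the guest domains
sit on a private 10.0.0.0/24 subnet; it is not the contents of the actual
xen_nat_enable script:

```shell
# Hypothetical sketch, not the xen_nat_enable script itself.
# Assumes: eth0 = domain0's external interface, guests on 10.0.0.0/24.

# Let domain0 forward packets between the guests and the outside world.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade guest traffic leaving via eth0 behind domain0's address.
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 -j MASQUERADE
```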
There are also a number of memory management enhancements that didn't
make this release: We have plans for a "universal buffer cache" that
enables otherwise unused system memory to be used by domains in a
read-only fashion. We also have plans for inter-domain shared memory
to enable high-performance bulk transport for cases where the usual
internal networking performance isn't good enough (e.g. communication
with an internal file server in another domain).
We have the equivalent of balloon-driver functionality to control a
domain's memory usage, enabling a domain to give unused pages back to
Xen. This needs proper documentation, and perhaps a way for domain0
to signal to a domain that it must reduce its memory footprint,
rather than relying on the domain volunteering pages (see the section
on the improved control interface).
The current disk scheduler is rather simplistic (batch round robin),
and could be replaced by e.g. Cello if we have QoS isolation
problems. For most things it seems to work OK, but there's currently
no service differentiation or weighting.
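Batch round robin of the kind described can be sketched in a few lines of C.
This is an illustrative toy under assumed names (NR_DOMS, BATCH,
next_request are all hypothetical), not the actual Xen scheduler: each
domain's queue is serviced for up to a fixed batch of requests before the
scheduler rotates to the next domain, with no weighting at all.

```c
#include <stddef.h>

#define NR_DOMS 4
#define BATCH   8   /* requests issued per domain before rotating */

/* Hypothetical per-domain pending-request counts; a real scheduler
 * would hold queues of block-I/O requests rather than counters. */
static int pending[NR_DOMS];

static int cur;      /* domain currently being serviced */
static int issued;   /* requests issued in the current batch */

/* Return the domain whose request is issued next, or -1 if all queues
 * are empty.  Note the complete absence of service differentiation:
 * every domain gets the same batch size regardless of load. */
static int next_request(void)
{
    for (int tried = 0; tried <= NR_DOMS; tried++) {
        if (issued < BATCH && pending[cur] > 0) {
            issued++;
            pending[cur]--;
            return cur;
        }
        /* batch exhausted or queue empty: rotate to the next domain */
        cur = (cur + 1) % NR_DOMS;
        issued = 0;
    }
    return -1;  /* idle */
}
```

A weighted scheduler such as Cello would replace the fixed BATCH with
per-domain service classes; here every domain is treated identically.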
Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
the scheduling is far from smart -- domains are statically assigned
to a CPU when they are created (in round-robin fashion). The
scheduler needs to be modified so that, before going idle, a logical
CPU looks for work on other run queues (particularly those on the
same physical CPU).
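The idle-time work search described above can be sketched as follows. This
is a hypothetical illustration, not Xen code: it assumes logical CPUs 2k
and 2k+1 are SMT siblings on the same physical package, and prefers
stealing from the sibling so the migrated work stays cache- and TLB-warm.

```c
#include <stddef.h>

#define NR_CPUS 4

/* Hypothetical run-queue depths, one per logical CPU. */
static int runq_len[NR_CPUS];

/* Assumption: CPUs 2k and 2k+1 are hyperthread siblings. */
static int sibling_of(int cpu) { return cpu ^ 1; }

/* Called when 'cpu' is about to go idle: find a run queue to steal
 * from, preferring the SMT sibling on the same physical CPU.
 * Returns the victim CPU, or -1 if there is nothing to steal and the
 * logical CPU should really go idle.  A queue needs more than one
 * entry to be worth stealing from. */
static int find_victim(int cpu)
{
    int sib = sibling_of(cpu);
    if (runq_len[sib] > 1)
        return sib;                 /* work on the same physical CPU */
    for (int c = 0; c < NR_CPUS; c++)
        if (c != cpu && runq_len[c] > 1)
            return c;               /* fall back to any busy CPU */
    return -1;
}
```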
Xen currently only supports uniprocessor guest OSes. We have designed
the Xen interface with MP guests in mind, and plan to build an MP
Linux guest in due course. Basically, an MP guest would consist of
multiple scheduling domains (one per CPU) sharing a single memory
protection domain. The only extra complexity for the Xen VM system is
that when a page transitions from holding a page table or page
directory to being a writable page, we must ensure that no other CPU
still has the page in its TLB, to preserve memory-system integrity.
One other issue for supporting MP guests is that we'll need some sort
of CPU gang scheduler, which will require some research.
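The page-table-to-writable transition can be illustrated with a toy
bookkeeping scheme. Everything here is hypothetical (the pageinfo struct,
note_pagetable_use, make_writable, and flush_tlb_cpu are invented for the
sketch, and a real hypervisor would send an IPI to flush a remote TLB);
the point is only the invariant: every CPU that may have cached
translations through the page must flush before the page becomes writable.

```c
#include <stdint.h>

#define NR_CPUS 4
typedef uint32_t cpumask_t;   /* one bit per logical CPU */

/* Hypothetical per-page tracking of which CPUs may still hold TLB
 * entries created while this page served as a page table/directory. */
struct pageinfo {
    int       is_pagetable;
    cpumask_t stale_tlbs;
};

/* Record that 'cpu' has loaded translations from this page table. */
static void note_pagetable_use(struct pageinfo *pg, int cpu)
{
    pg->is_pagetable = 1;
    pg->stale_tlbs |= (cpumask_t)1 << cpu;
}

/* Stand-in for a cross-CPU TLB flush (a real hypervisor would IPI). */
static void flush_tlb_cpu(int cpu) { (void)cpu; }

/* Before the page may be mapped writable, flush every CPU that might
 * still translate through it; otherwise a guest could modify a page
 * table another CPU is actively using. */
static void make_writable(struct pageinfo *pg)
{
    if (!pg->is_pagetable)
        return;
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (pg->stale_tlbs & ((cpumask_t)1 << cpu))
            flush_tlb_cpu(cpu);
    pg->stale_tlbs = 0;
    pg->is_pagetable = 0;
}
```

On a uniprocessor guest the stale set never contains another CPU, which is
why this complexity only appears once MP guests are supported.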