This is stuff we probably want to implement in the near future. I
think I have them in a sensible priority order -- the first few would
be nice to fix before a code release. The later ones can be
longer-term goals.

 -- Keir (16/3/03)

1. FIX HANDLING OF NETWORK RINGS
--------------------------------
Handling of the transmit rings is currently very broken (for example,
sending an inter-domain packet will wedge the hypervisor). This is
because we may handle packets out of order (eg. inter-domain packets
are handled eagerly, while packets for real interfaces are queued),
but our current ring design really assumes in-order handling.

A neat fix will be to allow responses to be queued in a different
order to requests, just as we already do with block-device
rings. We'll need to add an opaque identifier to ring entries,
allowing matching of requests and responses, but that's about it.
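
As a hedged sketch of what such ring entries might look like (the
struct and field names below are illustrative, not existing Xen
definitions):

    /* Each request carries an opaque id; the response echoes it back,
     * so responses can be consumed in any order. */
    typedef struct net_tx_request {
        unsigned long  id;     /* opaque: chosen by the guest           */
        unsigned long  addr;   /* machine address of the packet buffer  */
        unsigned short len;    /* packet length in bytes                */
    } net_tx_request_t;

    typedef struct net_tx_response {
        unsigned long  id;     /* copied from the matching request      */
        short          status; /* success or error code                 */
    } net_tx_response_t;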

2. ACCURATE TIMERS AND WALL-CLOCK TIME
--------------------------------------
Currently our long-term timebase free runs on CPU0, with no external
calibration. We should run ntpd on domain 0 and allow this to warp
Xen's timebase. Once this is done, we can have a timebase per CPU and
not worry about relative drift (since they'll all get sync'ed
periodically by ntp).
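
One possible shape for this, purely as an assumption about the
eventual interface (the structure and its name are made up here), is
a domain-0 call that hands Xen an NTP-disciplined reading to warp its
timebase against:

    /* Hypothetical: domain 0 passes an ntpd-adjusted wall-clock
     * reading, plus the Xen system time at which it was sampled,
     * and Xen rebases each CPU's timebase from it. */
    typedef struct dom0_settime {
        unsigned long      secs;        /* wall-clock seconds           */
        unsigned long      usecs;       /* ...and microseconds          */
        unsigned long long system_time; /* Xen time (ns) of the sample  */
    } dom0_settime_t;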

3. ASSIGNING DOMAINS TO PROCESSORS
----------------------------------
More intelligent assignment of domains to processors. In
particular, we don't play well with hyperthreading: we will assign
domains to virtual processors on the same package, rather than
spreading them across processor packages.

What we need to do is port code from Linux which stores information on
relationships between processors in the system (eg. which ones are
siblings in the same package). We then use this to balance domains
across packages, and across virtual processors within a package.
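
A rough sketch of the kind of placement policy this enables (the
arrays and helper below are assumptions, not ported Linux code):

    #include <limits.h>

    #define NR_CPUS 16

    int cpu_to_package[NR_CPUS];      /* sibling info gathered at boot  */
    int domains_per_package[NR_CPUS];

    /* Prefer a virtual processor on the least-loaded physical package,
     * so domains spread across packages before sharing one. */
    static int pick_cpu_for_new_domain(int nr_cpus)
    {
        int cpu, best_cpu = 0, best_load = INT_MAX;

        for ( cpu = 0; cpu < nr_cpus; cpu++ )
        {
            int load = domains_per_package[cpu_to_package[cpu]];
            if ( load < best_load )
            {
                best_load = load;
                best_cpu  = cpu;
            }
        }

        domains_per_package[cpu_to_package[best_cpu]]++;
        return best_cpu;
    }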

4. PROPER DESTRUCTION OF DOMAINS
--------------------------------
Currently we do not free resources when destroying a domain. This is
because they may be tied up in subsystems, and there is no way of
pulling them back in a safe manner.

The fix is probably to reference count resources and automatically
free them when the count reaches zero. We may get away with one count
per domain (for all its resources). When this reaches zero we know it
is safe to free everything: block-device rings, network rings, and all
the rest.
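
A minimal sketch of the single-count-per-domain idea (the names and
the free_domain_resources() helper are hypothetical):

    struct domain {
        int refcnt;            /* one count covering all resources      */
        /* ... rings, page tables, and the rest ... */
    };

    void free_domain_resources(struct domain *d);  /* hypothetical */

    static inline void get_domain(struct domain *d)
    {
        d->refcnt++;           /* would need to be atomic in practice   */
    }

    static inline void put_domain(struct domain *d)
    {
        if ( --d->refcnt == 0 )
            free_domain_resources(d);  /* safe: nothing references it   */
    }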

5. NETWORK CHECKSUM OFFLOAD
---------------------------
All the NICs that we support can checksum packets on behalf of guest
OSes. We need to add appropriate flags to and from each domain to
indicate, on transmit, which packets need the checksum added and, on
receive, which packets have been checked out as okay. We can steal
Linux's interface, which is entirely sane given NIC limitations.
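
In the spirit of Linux's interface, the flags might look something
like the following (these names and the ring-entry layout are
assumptions, not the actual definitions):

    #define NETTX_CSUM_NEEDED  0x1   /* ask NIC to fill in the checksum */
    #define NETRX_CSUM_GOOD    0x2   /* checksum already verified       */

    typedef struct net_rx_response {
        unsigned long  id;
        unsigned short flags;        /* e.g. NETRX_CSUM_GOOD            */
        short          status;
    } net_rx_response_t;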

6. DOMAIN 0 MANAGEMENT DAEMON
-----------------------------
A better control daemon is required for domain 0, which keeps proper
track of machine resources and can make sensible policy choices. This
may require support in Xen; for example, notifications (eg. DOMn is
killed), and requests (eg. can DOMn allocate x frames of memory?).
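
Purely as an illustration of the sort of Xen support this implies
(the message names and layout are invented here):

    typedef enum {
        CTRL_NOTIFY_DOMAIN_DIED,   /* notification: DOMn was killed      */
        CTRL_REQ_ALLOC_FRAMES      /* request: may DOMn allocate frames? */
    } ctrl_msg_type_t;

    typedef struct ctrl_msg {
        ctrl_msg_type_t type;
        unsigned int    domain;    /* which domain the message concerns  */
        unsigned long   nr_frames; /* only used for ALLOC_FRAMES         */
    } ctrl_msg_t;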

7. MODULE SUPPORT FOR XEN
-------------------------
Network and blkdev drivers are bloating Xen. At some point we want to
build drivers as modules, stick them in a cheesy ramfs, then relocate
them one by one at boot time. If a driver successfully probes hardware
we keep it, otherwise we blow it away. An alternative is to have a
central PCI ID to driver name repository. We then use that to decide
which drivers to load.

Most of the hard stuff (relocating and the like) is done for us by
Linux's module system.
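
The "central repository" alternative could be as simple as a lookup
table consulted at boot (the table name and entries below are just
examples):

    struct pci_driver_map {
        unsigned short vendor;
        unsigned short device;
        const char    *driver;           /* module name to load         */
    };

    static const struct pci_driver_map pci_driver_table[] = {
        { 0x8086, 0x1229, "eepro100" },  /* Intel EtherExpress Pro/100  */
        { 0x10b7, 0x9200, "3c59x"    },  /* 3Com 3c905C                 */
        /* ... */
    };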

8. NEW DESIGN FEATURES
----------------------
This includes the last-chance page cache, and the unified buffer cache.


Graveyard
*********

The hypervisor page cache
-------------------------
This will allow guest OSes to make use of spare pages in the system, but
allow them to be immediately used for any new domains or memory requests.
The idea is that, when a page is laundered and falls off Linux's clean_LRU
list, rather than freeing it, it becomes a candidate for passing down into
the hypervisor. In return, xeno-linux may ask for one of its previously-
cached pages back:
(page, new_id) = cache_query(page, old_id);
If the requested page couldn't be kept, a blank page is returned.
When would Linux make the query? Whenever it wants a page back without
the delay of going to disc. Also, whenever a page would otherwise be
flushed to disc.

To try and add to the cache: (blank_page, new_id) = cache_query(page, NULL);
[NULL means "give me a blank page"].
To try and retrieve from the cache: (page, new_id) = cache_query(x_page, id)
[we may request that x_page just be discarded, and therefore not impinge
on this domain's cache quota].
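
Rendered as a C prototype, the interface above might look like this
(the types and the out-parameter convention are assumptions):

    typedef unsigned long page_t;      /* stand-in for a page reference */
    typedef unsigned long cache_id_t;  /* 0/NULL means "no cached page" */

    /* Offer 'page' to the cache and/or ask for the page previously
     * cached under 'old_id'.  *new_id names the page the cache now
     * holds on our behalf; a blank page is returned if the request
     * couldn't be satisfied. */
    page_t cache_query(page_t page, cache_id_t old_id, cache_id_t *new_id);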