*****************************************************
Xeno Hypervisor (18/7/02)

1) Tree layout
Looks rather like a simplified Linux :-)
Headers are in include/xeno and include/asm-<arch>.
At build time we create symlinks:
 include/linux -> include/xeno
 include/asm   -> include/asm-<arch>
In this way, Linux device drivers should need less tweaking of
their #include lines.

For source files, the mapping between hypervisor and Linux is:
 Linux                 Hypervisor
 -----                 ----------
 kernel/init/mm/lib -> common
 net/*              -> net/*
 drivers/*          -> drivers/*
 arch/*             -> arch/*

Note that the use of #include <asm/...> and #include <linux/...> can
lead to confusion, as such files will often exist on the system include
path, even if a version doesn't exist within the hypervisor tree.
Unfortunately '-nostdinc' cannot be specified to the compiler, as that
prevents us using stdarg.h in the compiler's own header directory.

We try to modify things in drivers/* as little as possible, so we can
easily take updates from Linux. arch/* is basically straight from
Linux, with fingers in Linux-specific pies hacked off. common/* has
a lot of Linux code in it, but certain subsystems (task maintenance,
low-level memory handling) have been replaced. net/* contains enough
Linux-like gloop to get network drivers to work with little/no
modification.

2) Building
'make': Builds ELF executable called 'image' in base directory
'make install': gzip-compresses 'image' and copies it to TFTP server
'make clean': removes *all* build and target files


*****************************************************
Random thoughts and stuff from here down...

Todo list
---------
* Hypervisor need only directly map its own memory pool
  (maybe 128MB, tops). That would need 0x08000000....
  This would allow 512MB Linux with plenty of room for vmalloc'ed areas.
* Network device -- port drivers to hypervisor, implement virtual
  driver for xeno-linux. Looks like Ethernet.
  -- Hypervisor needs to do (at a minimum):
     - packet filtering on tx (unicast IP only)
     - packet demux on rx (unicast IP only)
     - provide DHCP [maybe do something simpler?]
       and ARP [at least for hypervisor IP address]
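
As a very rough illustration of the rx demux step, the table and frame-offset
assumptions below are hypothetical, not existing hypervisor code; the point is
just that an incoming unicast IPv4 frame is mapped to the domain owning its
destination address, and anything else is dropped:

  #include <stdint.h>
  #include <string.h>   /* memcpy */

  #define MAX_VIFS 16   /* hypothetical limit on virtual interfaces */

  /* Hypothetical demux table: which domain owns which unicast IP. */
  struct vif_entry {
      uint32_t ip_addr;    /* IP address owned by the domain (network order) */
      int      domain_id;  /* domain to deliver matching packets to */
  };
  static struct vif_entry vif_table[MAX_VIFS];

  /* Return the owning domain for an Ethernet+IPv4 frame, or -1 to drop
   * it (unicast IP only -- no broadcast/multicast handling here). */
  static int demux_rx(const unsigned char *pkt, unsigned int len)
  {
      uint32_t dst_ip;
      int i;

      if ( len < 34 )  /* 14-byte Ethernet header + 20-byte IPv4 header */
          return -1;

      memcpy(&dst_ip, pkt + 30, 4);  /* IPv4 destination at offset 30 */

      for ( i = 0; i < MAX_VIFS; i++ )
          if ( vif_table[i].ip_addr == dst_ip )
              return vif_table[i].domain_id;

      return -1;
  }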


Segment descriptor tables
-------------------------
We want to allow guest OSes to specify GDT and LDT tables using their
own pages of memory (just like with page tables). So allow the following:
* new_table_entry(ptr, val)
  [Allows insertion of a code, data, or LDT descriptor into a given
  location. Can simply be checked then poked, with no need to look at
  page type -- see the sketch after this list.]
* new_GDT() -- relevant virtual pages are resolved to frames. Either
  (i) page not present; or (ii) page is only mapped read-only and checks
  out okay (then marked as special page). Old table is resolved first,
  and the pages are unmarked (no longer special type).
* new_LDT() -- same as for new_GDT(), with same special page type.
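
A minimal sketch of the "checked then poked" idea for new_table_entry follows.
The bit tests and the helper below are illustrative assumptions only (a real
check would also have to admit LDT descriptors and treat gates/TSS descriptors
specially); the point is that the descriptor value itself can be validated
without caring about the type of the page it lands in:

  #include <stdint.h>

  /* Hypothetical check: the descriptor must be absent, or a normal
   * code/data descriptor (S bit set) whose DPL is non-zero, so a guest
   * cannot grant itself ring-0 privilege. */
  static int descriptor_is_safe(uint64_t d)
  {
      uint32_t hi = (uint32_t)(d >> 32);

      if ( !(hi & (1u << 15)) )     /* P bit clear: not present, harmless */
          return 1;
      if ( !(hi & (1u << 12)) )     /* S bit clear: system segment        */
          return 0;                 /* (LDT/TSS/gates need closer checks) */
      if ( ((hi >> 13) & 3) == 0 )  /* DPL == 0: would run at ring 0      */
          return 0;
      return 1;
  }

  /* Check then poke: validate the value, then write it into the slot. */
  static int new_table_entry(uint64_t *ptr, uint64_t val)
  {
      if ( !descriptor_is_safe(val) )
          return -1;
      *ptr = val;
      return 0;
  }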

Page table updates must be hooked, so we look for updates to virtual page
addresses in the GDT/LDT range. If the new mapping is not present, the old
physpage has its type_count decremented. If the new mapping is present,
ensure it is read-only, check the page, and set the special type.
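
Sketched in C with made-up names (frame_table and check_descriptor_frame are
assumptions standing in for whatever bookkeeping the hypervisor ends up with),
the hook on page-table updates in the GDT/LDT virtual range might look like:

  #define PAGE_PRESENT 0x001
  #define PAGE_RW      0x002

  /* Hypothetical per-frame bookkeeping. */
  struct frame_info {
      unsigned int type_count;    /* live references as a GDT/LDT page */
      unsigned int is_desc_page;  /* carries the special page type?    */
  };
  extern struct frame_info frame_table[];

  /* Hypothetical: verify every descriptor held in the frame. */
  extern int check_descriptor_frame(unsigned long pfn);

  /* Called for a PTE update whose virtual address lies in the guest's
   * registered GDT/LDT range.  Returns 0 to allow, -1 to refuse. */
  static int hook_desc_pte_update(unsigned long old_pte, unsigned long new_pte)
  {
      if ( !(new_pte & PAGE_PRESENT) )
      {
          /* Mapping removed: old physpage no longer holds live descriptors. */
          if ( old_pte & PAGE_PRESENT )
              frame_table[old_pte >> 12].type_count--;
          return 0;
      }

      if ( new_pte & PAGE_RW )                      /* must be read-only  */
          return -1;
      if ( !check_descriptor_frame(new_pte >> 12) ) /* contents must pass */
          return -1;

      frame_table[new_pte >> 12].is_desc_page = 1;  /* set special type   */
      frame_table[new_pte >> 12].type_count++;
      return 0;
  }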

Merge set_{LDT,GDT} into update_baseptrs, by passing four args:
 update_baseptrs(mask, ptab, gdttab, ldttab);
Update of ptab requires update of gdttab (or set to internal default).
Update of gdttab requires update of ldttab (or set to internal default).
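
As a usage sketch of that combined call (the mask bit names here are invented
for illustration; only the four-argument shape comes from the note above):

  /* Proposed combined interface; mask bits are hypothetical. */
  #define UPD_PTAB 0x1
  #define UPD_GDT  0x2
  #define UPD_LDT  0x4
  extern int update_baseptrs(unsigned int mask, unsigned long ptab,
                             unsigned long gdttab, unsigned long ldttab);

  static void switch_tables(unsigned long new_ptab, unsigned long new_gdt,
                            unsigned long new_ldt)
  {
      /* A new ptab forces the GDT to be respecified, which in turn forces
       * the LDT; clearing a mask bit instead selects the internal default. */
      update_baseptrs(UPD_PTAB | UPD_GDT | UPD_LDT,
                      new_ptab, new_gdt, new_ldt);

      /* Changing only the LDT leaves ptab and GDT untouched. */
      update_baseptrs(UPD_LDT, 0, 0, new_ldt);
  }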


The hypervisor page cache
-------------------------
This will allow guest OSes to make use of spare pages in the system, but
allow them to be immediately used for any new domains or memory requests.
The idea is that, when a page is laundered and falls off Linux's clean_LRU
list, rather than freeing it, it becomes a candidate for passing down into
the hypervisor. In return, xeno-linux may ask for one of its previously-
cached pages back:
 (page, new_id) = cache_query(page, old_id);
If the requested page couldn't be kept, a blank page is returned.
When would Linux make the query? Whenever it wants a page back without
the delay of going to disc. Also, whenever a page would otherwise be
flushed to disc.

To try and add to the cache: (blank_page, new_id) = cache_query(page, NULL);
[NULL means "give me a blank page"].
To try and retrieve from the cache: (page, new_id) = cache_query(x_page, id)
[we may request that x_page just be discarded, and therefore not impinge
on this domain's cache quota].
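
In C, the tuple-returning calls above might take a shape like the following
(the struct and wrapper are hypothetical; only the (page, id) in / (page, id)
out behaviour comes from the notes above):

  #include <stddef.h>

  struct cache_reply {
      void         *page;    /* page handed back (may be a blank page) */
      unsigned long new_id;  /* id to quote when asking for it again   */
  };
  extern struct cache_reply cache_query(void *page, unsigned long id);

  static void example(void *laundered_page, void *spare_page)
  {
      /* Offer a clean page to the cache: id 0 plays the role of NULL,
       * i.e. "give me a blank page in exchange". */
      struct cache_reply r = cache_query(laundered_page, 0);

      /* Later, try to reclaim the cached contents; if the hypervisor had
       * to reuse the frame, the page that comes back is simply blank. */
      struct cache_reply back = cache_query(spare_page, r.new_id);

      (void)back;  /* sketch only */
  }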


Booting secondary processors
----------------------------

start_of_day (i386/setup.c)
  smp_boot_cpus (i386/smpboot.c)
    * initialises boot CPU data
    * parses APIC tables
    * for each cpu:
        do_boot_cpu (i386/smpboot.c)
          * forks a new idle process
          * points initial stack inside new task struct
          * points initial EIP at a trampoline in very low memory
          * frobs remote APIC....

On other processor:
  * trampoline sets GDT and IDT
  * jumps at main boot address with magic register value
  * after setting proper page and descriptor tables, jumps at...
  initialize_secondary (i386/smpboot.c)
    * simply reads ESP/EIP out of the (new) idle task (sketch below)
    * this causes a jump to...
  start_secondary (i386/smpboot.c)
    * resets all processor state
    * barrier, then writes bitmasks to signal back to boot cpu
    * then barrels into...
  cpu_idle (i386/process.c)
    [THIS IS PROBABLY REASONABLE -- BOOT CPU SHOULD KICK
    SECONDARIES TO GET WORK DONE]
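
For reference, the "reads ESP/EIP out of the idle task" step really is tiny;
roughly (quoted from memory of Linux 2.4's i386/smpboot.c, so treat it as a
sketch rather than a verbatim copy):

  void initialize_secondary(void)
  {
      /* Load the stack pointer and jump to the EIP that do_boot_cpu()
       * left in the freshly forked idle task. */
      asm volatile(
          "movl %0,%%esp\n\t"
          "jmp *%1"
          :
          : "r" (current->thread.esp), "r" (current->thread.eip));
  }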


SMP capabilities
----------------

The current intention is to allow the hypervisor to schedule on all processors
in SMP boxen, but to tie each domain to a single processor. This simplifies
many SMP intricacies both in terms of correctness and efficiency (e.g.
TLB flushing, network packet delivery, ...).

Clients can still make use of SMP by installing multiple domains on a single
machine, and treating it as a fast cluster (at the very least, the
hypervisor will have fast routing of locally-destined packets).