1 \documentclass[11pt,twoside,final,openright]{report}
2 \usepackage{a4,graphicx,html,setspace,times}
3 \usepackage{comment,parskip}
4 \setstretch{1.15}
6 % LIBRARY FUNCTIONS
8 \newcommand{\hypercall}[1]{\vspace{2mm}{\sf #1}}
10 \begin{document}
12 % TITLE PAGE
13 \pagestyle{empty}
14 \begin{center}
15 \vspace*{\fill}
16 \includegraphics{figs/xenlogo.eps}
17 \vfill
18 \vfill
19 \vfill
20 \begin{tabular}{l}
21 {\Huge \bf Interface manual} \\[4mm]
22 {\huge Xen v3.0 for x86} \\[80mm]
24 {\Large Xen is Copyright (c) 2002-2005, The Xen Team} \\[3mm]
25 {\Large University of Cambridge, UK} \\[20mm]
26 \end{tabular}
27 \end{center}
29 {\bf DISCLAIMER: This documentation is always under active development
30 and as such there may be mistakes and omissions --- watch out for
31 these and please report any you find to the developer's mailing list.
32 The latest version is always available on-line. Contributions of
33 material, suggestions and corrections are welcome. }
35 \vfill
36 \cleardoublepage
38 % TABLE OF CONTENTS
39 \pagestyle{plain}
40 \pagenumbering{roman}
41 { \parskip 0pt plus 1pt
42 \tableofcontents }
43 \cleardoublepage
45 % PREPARE FOR MAIN TEXT
46 \pagenumbering{arabic}
47 \raggedbottom
48 \widowpenalty=10000
49 \clubpenalty=10000
50 \parindent=0pt
51 \parskip=5pt
52 \renewcommand{\topfraction}{.8}
53 \renewcommand{\bottomfraction}{.8}
54 \renewcommand{\textfraction}{.2}
55 \renewcommand{\floatpagefraction}{.8}
56 \setstretch{1.1}
58 \chapter{Introduction}
60 Xen allows the hardware resources of a machine to be virtualized and
61 dynamically partitioned, allowing multiple different {\em guest}
62 operating system images to be run simultaneously. Virtualizing the
63 machine in this manner provides considerable flexibility, for example
64 allowing different users to choose their preferred operating system
65 (e.g., Linux, NetBSD, or a custom operating system). Furthermore, Xen
66 provides secure partitioning between virtual machines (known as
67 {\em domains} in Xen terminology), and enables better resource
68 accounting and QoS isolation than can be achieved with a conventional
69 operating system.
71 Xen essentially takes a `whole machine' virtualization approach as
72 pioneered by IBM VM/370. However, unlike VM/370 or more recent
73 efforts such as VMware and Virtual PC, Xen does not attempt to
74 completely virtualize the underlying hardware. Instead parts of the
75 hosted guest operating systems are modified to work with the VMM; the
76 operating system is effectively ported to a new target architecture,
77 typically requiring changes in just the machine-dependent code. The
78 user-level API is unchanged, and so existing binaries and operating
79 system distributions work without modification.
81 In addition to exporting virtualized instances of CPU, memory, network
82 and block devices, Xen exposes a control interface to manage how these
83 resources are shared between the running domains. Access to the
84 control interface is restricted: it may only be used by one
85 specially-privileged VM, known as {\em domain 0}. This domain is a
86 required part of any Xen-based server and runs the application software
87 that manages the control-plane aspects of the platform. Running the
88 control software in {\it domain 0}, distinct from the hypervisor
89 itself, allows the Xen framework to separate the notions of
90 mechanism and policy within the system.
93 \chapter{Virtual Architecture}
95 In a Xen/x86 system, only the hypervisor runs with full processor
96 privileges ({\it ring 0} in the x86 four-ring model). It has full
97 access to the physical memory available in the system and is
98 responsible for allocating portions of it to running domains.
100 On a 32-bit x86 system, guest operating systems may use {\it rings 1},
101 {\it 2} and {\it 3} as they see fit. Segmentation is used to prevent
102 the guest OS from accessing the portion of the address space that is
103 reserved for Xen. We expect most guest operating systems will use
104 ring 1 for their own operation and place applications in ring 3.
106 On 64-bit systems it is not possible to protect the hypervisor from
107 untrusted guest code running in rings 1 and 2. Guests are therefore
108 restricted to run in ring 3 only. The guest kernel is protected from its
109 applications by context switching between the kernel and currently
110 running application.
112 In this chapter we consider the basic virtual architecture provided by
113 Xen: CPU state, exception and interrupt handling, and time.
114 Other aspects such as memory and device access are discussed in later
115 chapters.
118 \section{CPU state}
120 All privileged state must be handled by Xen. The guest OS has no
121 direct access to CR3 and is not permitted to update privileged bits in
122 EFLAGS. Guest OSes use \emph{hypercalls} to invoke operations in Xen;
123 these are analogous to system calls but occur from ring 1 to ring 0.
125 A list of all hypercalls is given in Appendix~\ref{a:hypercalls}.
128 \section{Exceptions}
130 A virtual IDT is provided --- a domain can submit a table of trap
131 handlers to Xen via the {\bf set\_trap\_table} hypercall. The
132 exception stack frame presented to a virtual trap handler is identical
133 to its native equivalent.
136 \section{Interrupts and events}
138 Interrupts are virtualized by mapping them to \emph{event channels},
139 which are delivered asynchronously to the target domain using a callback
140 supplied via the {\bf set\_callbacks} hypercall. A guest OS can map
141 these events onto its standard interrupt dispatch mechanisms. Xen is
142 responsible for determining the target domain that will handle each
143 physical interrupt source. For more details on the binding of event
144 sources to event channels, see Chapter~\ref{c:devices}.
147 \section{Time}
149 Guest operating systems need to be aware of the passage of both real
150 (or wallclock) time and their own `virtual time' (the time for which
151 they have been executing). Furthermore, Xen has a notion of time which
152 is used for scheduling. The following notions of time are provided:
154 \begin{description}
155 \item[Cycle counter time.]
157 This provides a fine-grained time reference. The cycle counter time
158 is used to accurately extrapolate the other time references. On SMP
159 machines it is currently assumed that the cycle counter time is
160 synchronized between CPUs. The current x86-based implementation
161 achieves this within inter-CPU communication latencies.
163 \item[System time.]
165 This is a 64-bit counter which holds the number of nanoseconds that
166 have elapsed since system boot.
168 \item[Wall clock time.]
170 This is the time of day in a Unix-style {\bf struct timeval}
171 (seconds and microseconds since 1 January 1970, adjusted by leap
172 seconds). An NTP client hosted by {\it domain 0} can keep this
173 value accurate.
175 \item[Domain virtual time.]
177 This progresses at the same pace as system time, but only while a
178 domain is executing --- it stops while a domain is de-scheduled.
179 Therefore the share of the CPU that a domain receives is indicated
180 by the rate at which its virtual time increases.
182 \end{description}
185 Xen exports timestamps for system time and wall-clock time to guest
186 operating systems through a shared page of memory. Xen also provides
187 the cycle counter time at the instant the timestamps were calculated,
188 and the CPU frequency in Hertz. This allows the guest to extrapolate
189 system and wall-clock times accurately based on the current cycle
190 counter time.
192 Since all time stamps need to be updated and read \emph{atomically}
193 a version number is also stored in the shared info page, which is
194 incremented before and after updating the timestamps. Thus a guest can
195 be sure that it read a consistent state by checking the two version
196 numbers are equal and even.
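As an illustration of this protocol, a guest might read the wallclock
timestamps along the lines of the minimal sketch below. It assumes a pointer
to the shared info page has already been mapped and that {\tt rmb()} stands in
for whatever read memory barrier the guest kernel provides; the field names
follow the {\bf shared\_info\_t} layout documented later.
\scriptsize
\begin{verbatim}
/* Sketch: read wallclock time consistently using the version counter.
 * 'shared' points to the mapped shared_info_t; rmb() is the guest's
 * read memory barrier (an assumption, not a Xen interface). */
void read_wallclock(shared_info_t *shared, uint32_t *sec, uint32_t *nsec)
{
    uint32_t version;
    do {
        version = shared->wc_version;    /* odd => update in progress */
        rmb();
        *sec  = shared->wc_sec;
        *nsec = shared->wc_nsec;
        rmb();
    } while ((version & 1) || (version != shared->wc_version));
}
\end{verbatim}
\normalsize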
198 Xen includes a periodic ticker which sends a timer event to the
199 currently executing domain every 10ms. The Xen scheduler also sends a
200 timer event whenever a domain is scheduled; this allows the guest OS
201 to adjust for the time that has passed while it has been inactive. In
202 addition, Xen allows each domain to request that they receive a timer
203 event sent at a specified system time by using the {\bf
204 set\_timer\_op} hypercall. Guest OSes may use this timer to
205 implement timeout values when they block.
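For example, a guest that wants to block for at most 10ms might compute a
wakeup deadline in system time and register it before blocking, roughly as
sketched below. {\tt xen\_current\_system\_time()} is a hypothetical helper
that extrapolates the current system time from the shared info page, and
{\bf HYPERVISOR\_set\_timer\_op} is assumed to be the guest-side stub for the
hypercall named above.
\scriptsize
\begin{verbatim}
/* Sketch: request a timer event 10ms in the future before blocking.
 * xen_current_system_time() is a hypothetical helper returning nanoseconds
 * since boot, extrapolated from the shared info page. */
void block_with_10ms_timeout(void)
{
    uint64_t deadline = xen_current_system_time() + 10000000ULL; /* +10 ms */
    HYPERVISOR_set_timer_op(deadline);
    /* ... then block, e.g. via a sched_op hypercall ... */
}
\end{verbatim}
\normalsize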
208 \section{Xen CPU Scheduling}
210 Xen offers a uniform API for CPU schedulers. It is possible to choose
211 from a number of schedulers at boot and it should be easy to add more.
212 The SEDF, BVT, and Credit schedulers are part of the normal Xen
213 distribution. BVT and SEDF are deprecated and will be removed once the
214 Credit scheduler has stabilized and become the default, so their use should be avoided.
215 The Credit scheduler provides proportional fair shares of the
216 host's CPUs to the running domains. It does this while transparently
217 load balancing runnable VCPUs across the whole system.
219 \paragraph*{Note: SMP host support}
220 Xen has always supported SMP host systems. When using the credit scheduler,
221 a domain's VCPUs will be dynamically moved across physical CPUs to maximise
222 domain and system throughput. VCPUs can also be manually restricted to be
223 mapped only on a subset of the host's physical CPUs, using the pinning
224 mechanism.
227 %% More information on the characteristics and use of these schedulers
228 %% is available in {\bf Sched-HOWTO.txt}.
231 \section{Privileged operations}
233 Xen exports an extended interface to privileged domains (viz.\ {\it
234 Domain 0}). This allows such domains to build and boot other domains
235 on the server, and provides control interfaces for managing
236 scheduling, memory, networking, and block devices.
238 \chapter{Memory}
239 \label{c:memory}
241 Xen is responsible for managing the allocation of physical memory to
242 domains, and for ensuring safe use of the paging and segmentation
243 hardware.
246 \section{Memory Allocation}
248 As well as allocating a portion of physical memory for its own private
249 use, Xen also reserves a small fixed portion of every virtual address
250 space. This is located in the top 64MB on 32-bit systems, the top
251 168MB on PAE systems, and a larger portion in the middle of the
252 address space on 64-bit systems. Unreserved physical memory is
253 available for allocation to domains at a page granularity. Xen tracks
254 the ownership and use of each page, which allows it to enforce secure
255 partitioning between domains.
257 Each domain has a maximum and current physical memory allocation. A
258 guest OS may run a `balloon driver' to dynamically adjust its current
259 memory allocation up to its limit.
262 \section{Pseudo-Physical Memory}
264 Since physical memory is allocated and freed on a page granularity,
265 there is no guarantee that a domain will receive a contiguous stretch
266 of physical memory. However, most operating systems do not have good
267 support for operating in a fragmented physical address space. To aid
268 porting such operating systems to run on top of Xen, we make a
269 distinction between \emph{machine memory} and \emph{pseudo-physical
270 memory}.
272 Put simply, machine memory refers to the entire amount of memory
273 installed in the machine, including that reserved by Xen, in use by
274 various domains, or currently unallocated. We consider machine memory
275 to comprise a set of 4kB \emph{machine page frames} numbered
276 consecutively starting from 0. Machine frame numbers mean the same
277 within Xen or any domain.
279 Pseudo-physical memory, on the other hand, is a per-domain
280 abstraction. It allows a guest operating system to consider its memory
281 allocation to consist of a contiguous range of physical page frames
282 starting at physical frame 0, despite the fact that the underlying
283 machine page frames may be sparsely allocated and in any order.
285 To achieve this, Xen maintains a globally readable {\it
286 machine-to-physical} table which records the mapping from machine
287 page frames to pseudo-physical ones. In addition, each domain is
288 supplied with a {\it physical-to-machine} table which performs the
289 inverse mapping. Clearly the machine-to-physical table has size
290 proportional to the amount of RAM installed in the machine, while each
291 physical-to-machine table has size proportional to the memory
292 allocation of the given domain.
294 Architecture dependent code in guest operating systems can then use
295 the two tables to provide the abstraction of pseudo-physical memory.
296 In general, only certain specialized parts of the operating system
297 (such as page table management) need to understand the difference
298 between machine and pseudo-physical addresses.
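As a minimal sketch of what such code looks like, assume the
machine-to-physical table has been mapped at a known virtual address and the
domain's physical-to-machine table is a simple array; both variable names
below are illustrative rather than a fixed Xen interface.
\scriptsize
\begin{verbatim}
/* Sketch: pseudo-physical <-> machine frame translation.
 * 'machine_to_phys' stands for the globally readable M2P table mapped by
 * Xen; 'phys_to_machine' for this domain's private P2M table. */
extern unsigned long *machine_to_phys;  /* indexed by machine frame number */
extern unsigned long *phys_to_machine;  /* indexed by pseudo-physical frame */

static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
    return phys_to_machine[pfn];
}

static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
    return machine_to_phys[mfn];
}
\end{verbatim}
\normalsize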
301 \section{Page Table Updates}
303 In the default mode of operation, Xen enforces read-only access to
304 page tables and requires guest operating systems to explicitly request
305 any modifications. Xen validates all such requests and only applies
306 updates that it deems safe. This is necessary to prevent domains from
307 adding arbitrary mappings to their page tables.
309 To aid validation, Xen associates a type and reference count with each
310 memory page. A page has one of the following mutually-exclusive types
311 at any point in time: page directory ({\sf PD}), page table ({\sf
312 PT}), local descriptor table ({\sf LDT}), global descriptor table
313 ({\sf GDT}), or writable ({\sf RW}). Note that a guest OS may always
314 create readable mappings of its own memory regardless of its current
315 type.
317 %%% XXX: possibly explain more about ref count 'lifecyle' here?
318 This mechanism is used to maintain the invariants required for safety;
319 for example, a domain cannot have a writable mapping to any part of a
320 page table as this would require the page concerned to simultaneously
321 be of types {\sf PT} and {\sf RW}.
323 \hypercall{mmu\_update(mmu\_update\_t *req, int count, int *success\_count, domid\_t domid)}
325 This hypercall is used to make updates to either the domain's
326 pagetables or to the machine to physical mapping table. It supports
327 submitting a queue of updates, allowing batching for maximal
328 performance. Explicitly queuing updates using this interface will
329 cause any outstanding writable pagetable state to be flushed from the
330 system.
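As an illustration, a guest might batch two page-table-entry writes into a
single hypercall as in the sketch below. Each request's {\tt ptr} field holds
the machine address of the entry to modify, with the update type encoded in
its low bits, and {\tt val} holds the new entry; {\bf
HYPERVISOR\_mmu\_update} is assumed to be the guest-side stub for the
hypercall.
\scriptsize
\begin{verbatim}
/* Sketch: batch two PTE writes into one mmu_update hypercall. */
void update_two_ptes(uint64_t pte_maddr1, uint64_t new_pte1,
                     uint64_t pte_maddr2, uint64_t new_pte2)
{
    mmu_update_t req[2];
    int success_count = 0;

    req[0].ptr = pte_maddr1 | MMU_NORMAL_PT_UPDATE;  /* normal PT update */
    req[0].val = new_pte1;
    req[1].ptr = pte_maddr2 | MMU_NORMAL_PT_UPDATE;
    req[1].val = new_pte2;

    /* DOMID_SELF: apply the updates to the calling domain's own tables. */
    HYPERVISOR_mmu_update(req, 2, &success_count, DOMID_SELF);
}
\end{verbatim}
\normalsize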
332 \section{Writable Page Tables}
334 Xen also provides an alternative mode of operation in which guests
335 have the illusion that their page tables are directly writable. Of
336 course this is not really the case, since Xen must still validate
337 modifications to ensure secure partitioning. To this end, Xen traps
338 any write attempt to a memory page of type {\sf PT} (i.e., that is
339 currently part of a page table). If such an access occurs, Xen
340 temporarily allows write access to that page while at the same time
341 \emph{disconnecting} it from the page table that is currently in use.
342 This allows the guest to safely make updates to the page because the
343 newly-updated entries cannot be used by the MMU until Xen revalidates
344 and reconnects the page. Reconnection occurs automatically in a
345 number of situations: for example, when the guest modifies a different
346 page-table page, when the domain is preempted, or whenever the guest
347 uses Xen's explicit page-table update interfaces.
349 Writable pagetable functionality is enabled when the guest requests
350 it, using a {\bf vm\_assist} hypercall. Writable pagetables do {\em
351 not} provide full virtualisation of the MMU, so the memory management
352 code of the guest still needs to be aware that it is running on Xen.
353 Since the guest's page tables are used directly, it must translate
354 pseudo-physical addresses to real machine addresses when building page
355 table entries. The guest may not attempt to map its own pagetables
356 writably, since this would violate the memory type invariants; page
357 tables will automatically be made writable by the hypervisor, as
358 necessary.
360 \section{Shadow Page Tables}
362 Finally, Xen also supports a form of \emph{shadow page tables} in
363 which the guest OS uses an independent copy of page tables which are
364 unknown to the hardware (i.e.\ which are never pointed to by {\tt
365 cr3}). Instead Xen propagates changes made to the guest's tables to
366 the real ones, and vice versa. This is useful for logging page writes
367 (e.g.\ for live migration or checkpointing). A full version of the shadow
368 page tables also allows guest OS porting with less effort.
371 \section{Segment Descriptor Tables}
373 At start of day a guest is supplied with a default GDT, which does not reside
374 within its own memory allocation. If the guest wishes to use other
375 than the default `flat' ring-1 and ring-3 segments that this GDT
376 provides, it must register a custom GDT and/or LDT with Xen, allocated
377 from its own memory.
379 The following hypercall is used to specify a new GDT:
381 \begin{quote}
382 int {\bf set\_gdt}(unsigned long *{\em frame\_list}, int {\em
383 entries})
385 \emph{frame\_list}: An array of up to 14 machine page frames within
386 which the GDT resides. Any frame registered as a GDT frame may only
387 be mapped read-only within the guest's address space (e.g., no
388 writable mappings, no use as a page-table page, and so on). Only 14
389 pages may be specified because pages 15 and 16 are reserved for
390 the hypervisor's GDT entries.
392 \emph{entries}: The number of descriptor-entry slots in the GDT.
393 \end{quote}
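A guest registering a single-frame custom GDT might therefore proceed roughly
as follows. The GDT page is assumed to have been allocated from the guest's
own memory and populated beforehand, and {\tt virt\_to\_mfn()} is a
hypothetical helper returning the machine frame number backing a guest
virtual address.
\scriptsize
\begin{verbatim}
/* Sketch: register a one-frame custom GDT with Xen. */
int register_custom_gdt(void *gdt_page, int nr_entries)
{
    unsigned long frames[1];

    frames[0] = virt_to_mfn(gdt_page);   /* machine frame holding the GDT */
    /* After this call Xen maps the frame read-only in the guest. */
    return HYPERVISOR_set_gdt(frames, nr_entries);
}
\end{verbatim}
\normalsize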
395 The LDT is updated via the generic MMU update mechanism (i.e., via the
396 {\bf mmu\_update} hypercall).
398 \section{Start of Day}
400 The start-of-day environment for guest operating systems is rather
401 different to that provided by the underlying hardware. In particular,
402 the processor is already executing in protected mode with paging
403 enabled.
405 {\it Domain 0} is created and booted by Xen itself. For all subsequent
406 domains, the analogue of the boot-loader is the {\it domain builder},
407 user-space software running in {\it domain 0}. The domain builder is
408 responsible for building the initial page tables for a domain and
409 loading its kernel image at the appropriate virtual address.
411 \section{VM assists}
413 Xen provides a number of ``assists'' for guest memory management.
414 These are available on an ``opt-in'' basis to provide commonly-used
415 extra functionality to a guest.
417 \hypercall{vm\_assist(unsigned int cmd, unsigned int type)}
419 The {\bf cmd} parameter describes the action to be taken, whilst the
420 {\bf type} parameter describes the kind of assist that is being
421 referred to. Available commands are as follows:
423 \begin{description}
424 \item[VMASST\_CMD\_enable] Enable a particular assist type
425 \item[VMASST\_CMD\_disable] Disable a particular assist type
426 \end{description}
428 And the available types are:
430 \begin{description}
431 \item[VMASST\_TYPE\_4gb\_segments] Provide emulated support for
432 instructions that rely on 4GB segments (such as the techniques used
433 by some TLS solutions).
434 \item[VMASST\_TYPE\_4gb\_segments\_notify] Provide a callback to the
435 guest if the above segment fixups are used: allows the guest to
436 display a warning message during boot.
437 \item[VMASST\_TYPE\_writable\_pagetables] Enable writable pagetable
438 mode, described above.
439 \end{description}
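For example, a guest that wants the writable pagetable illusion described
above would typically opt in early during boot; {\bf HYPERVISOR\_vm\_assist}
is assumed to be the guest-side stub for the hypercall.
\scriptsize
\begin{verbatim}
/* Sketch: opt in to writable pagetables early during guest boot. */
void enable_writable_pagetables(void)
{
    HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
}
\end{verbatim}
\normalsize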
442 \chapter{Xen Info Pages}
444 The {\bf Shared info page} is used to share various CPU-related state
445 between the guest OS and the hypervisor. This information includes VCPU
446 status, time information and event channel (virtual interrupt) state.
447 The {\bf Start info page} is used to pass build-time information to
448 the guest when it boots and when it is resumed from a suspended state.
449 This chapter documents the fields included in the {\bf
450 shared\_info\_t} and {\bf start\_info\_t} structures for use by the
451 guest OS.
453 \section{Shared info page}
455 The {\bf shared\_info\_t} is accessed at run time by both Xen and the
456 guest OS. It is used to pass information relating to the
457 virtual CPU and virtual machine state between the OS and the
458 hypervisor.
460 The structure is declared in {\bf xen/include/public/xen.h}:
462 \scriptsize
463 \begin{verbatim}
464 typedef struct shared_info {
465 vcpu_info_t vcpu_info[MAX_VIRT_CPUS];
467 /*
468 * A domain can create "event channels" on which it can send and receive
469 * asynchronous event notifications. There are three classes of event that
470 * are delivered by this mechanism:
471 * 1. Bi-directional inter- and intra-domain connections. Domains must
472 * arrange out-of-band to set up a connection (usually by allocating
473 * an unbound 'listener' port and advertising that via a storage service
474 * such as xenstore).
475 * 2. Physical interrupts. A domain with suitable hardware-access
476 * privileges can bind an event-channel port to a physical interrupt
477 * source.
478 * 3. Virtual interrupts ('events'). A domain can bind an event-channel
479 * port to a virtual interrupt source, such as the virtual-timer
480 * device or the emergency console.
481 *
482 * Event channels are addressed by a "port index". Each channel is
483 * associated with two bits of information:
484 * 1. PENDING -- notifies the domain that there is a pending notification
485 * to be processed. This bit is cleared by the guest.
486 * 2. MASK -- if this bit is clear then a 0->1 transition of PENDING
487 * will cause an asynchronous upcall to be scheduled. This bit is only
488 * updated by the guest. It is read-only within Xen. If a channel
489 * becomes pending while the channel is masked then the 'edge' is lost
490 * (i.e., when the channel is unmasked, the guest must manually handle
491 * pending notifications as no upcall will be scheduled by Xen).
492 *
493 * To expedite scanning of pending notifications, any 0->1 pending
494 * transition on an unmasked channel causes a corresponding bit in a
495 * per-vcpu selector word to be set. Each bit in the selector covers a
496 * 'C long' in the PENDING bitfield array.
497 */
498 unsigned long evtchn_pending[sizeof(unsigned long) * 8];
499 unsigned long evtchn_mask[sizeof(unsigned long) * 8];
501 /*
502 * Wallclock time: updated only by control software. Guests should base
503 * their gettimeofday() syscall on this wallclock-base value.
504 */
505 uint32_t wc_version; /* Version counter: see vcpu_time_info_t. */
506 uint32_t wc_sec; /* Secs 00:00:00 UTC, Jan 1, 1970. */
507 uint32_t wc_nsec; /* Nsecs 00:00:00 UTC, Jan 1, 1970. */
509 arch_shared_info_t arch;
511 } shared_info_t;
512 \end{verbatim}
513 \normalsize
515 \begin{description}
516 \item[vcpu\_info] An array of {\bf vcpu\_info\_t} structures, each of
517 which holds either runtime information about a virtual CPU, or is
518 ``empty'' if the corresponding VCPU does not exist.
519 \item[evtchn\_pending] Guest-global array, with one bit per event
520 channel. Bits are set if an event is currently pending on that
521 channel.
522 \item[evtchn\_mask] Guest-global array for masking notifications on
523 event channels.
524 \item[wc\_version] Version counter for current wallclock time.
525 \item[wc\_sec] Whole seconds component of current wallclock time.
526 \item[wc\_nsec] Nanoseconds component of current wallclock time.
527 \item[arch] Host architecture-dependent portion of the shared info
528 structure.
529 \end{description}
531 \subsection{vcpu\_info\_t}
533 \scriptsize
534 \begin{verbatim}
535 typedef struct vcpu_info {
536 /*
537 * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
538 * a pending notification for a particular VCPU. It is then cleared
539 * by the guest OS /before/ checking for pending work, thus avoiding
540 * a set-and-check race. Note that the mask is only accessed by Xen
541 * on the CPU that is currently hosting the VCPU. This means that the
542 * pending and mask flags can be updated by the guest without special
543 * synchronisation (i.e., no need for the x86 LOCK prefix).
544 * This may seem suboptimal because if the pending flag is set by
545 * a different CPU then an IPI may be scheduled even when the mask
546 * is set. However, note:
547 * 1. The task of 'interrupt holdoff' is covered by the per-event-
548 * channel mask bits. A 'noisy' event that is continually being
549 * triggered can be masked at source at this very precise
550 * granularity.
551 * 2. The main purpose of the per-VCPU mask is therefore to restrict
552 * reentrant execution: whether for concurrency control, or to
553 * prevent unbounded stack usage. Whatever the purpose, we expect
554 * that the mask will be asserted only for short periods at a time,
555 * and so the likelihood of a 'spurious' IPI is suitably small.
556 * The mask is read before making an event upcall to the guest: a
557 * non-zero mask therefore guarantees that the VCPU will not receive
558 * an upcall activation. The mask is cleared when the VCPU requests
559 * to block: this avoids wakeup-waiting races.
560 */
561 uint8_t evtchn_upcall_pending;
562 uint8_t evtchn_upcall_mask;
563 unsigned long evtchn_pending_sel;
564 arch_vcpu_info_t arch;
565 vcpu_time_info_t time;
566 } vcpu_info_t; /* 64 bytes (x86) */
567 \end{verbatim}
568 \normalsize
570 \begin{description}
571 \item[evtchn\_upcall\_pending] This is set non-zero by Xen to indicate
572 that there are pending events to be received.
573 \item[evtchn\_upcall\_mask] This is set non-zero to disable all
574 interrupts for this CPU for short periods of time. If individual
575 event channels need to be masked, the {\bf evtchn\_mask} in the {\bf
576 shared\_info\_t} is used instead.
577 \item[evtchn\_pending\_sel] When an event is delivered to this VCPU, a
578 bit is set in this selector to indicate which word of the {\bf
579 evtchn\_pending} array in the {\bf shared\_info\_t} contains the
580 event in question.
581 \item[arch] Architecture-specific VCPU info. On x86 this contains the
582 virtualized CR2 register (page fault linear address) for this VCPU.
583 \item[time] Time values for this VCPU.
584 \end{description}
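Putting the shared-info and per-VCPU fields together, an event upcall handler
in the guest might scan for pending events roughly as in the sketch below;
{\tt xchg()}, {\tt first\_set\_bit()}, {\tt clear\_bit()} and {\tt
do\_event()} are placeholders for whatever atomic, bit-scanning and dispatch
facilities the guest kernel provides.
\scriptsize
\begin{verbatim}
/* Sketch: process pending event channels from the guest's upcall handler.
 * 'shared' is the mapped shared_info_t; 'vcpu' is the vcpu_info_t for the
 * currently executing VCPU. */
void handle_event_upcall(shared_info_t *shared, vcpu_info_t *vcpu)
{
    unsigned long sel, pending;
    unsigned int word, bit, port;

    vcpu->evtchn_upcall_pending = 0;
    sel = xchg(&vcpu->evtchn_pending_sel, 0);  /* atomically claim selector */

    while (sel != 0) {
        word = first_set_bit(sel);
        sel &= ~(1UL << word);

        pending = shared->evtchn_pending[word] & ~shared->evtchn_mask[word];
        while (pending != 0) {
            bit = first_set_bit(pending);
            pending &= ~(1UL << bit);
            port = (word * sizeof(unsigned long) * 8) + bit;
            clear_bit(bit, &shared->evtchn_pending[word]);  /* ack event */
            do_event(port);                    /* guest-specific dispatch */
        }
    }
}
\end{verbatim}
\normalsize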
586 \subsection{vcpu\_time\_info}
588 \scriptsize
589 \begin{verbatim}
590 typedef struct vcpu_time_info {
591 /*
592 * Updates to the following values are preceded and followed by an
593 * increment of 'version'. The guest can therefore detect updates by
594 * looking for changes to 'version'. If the least-significant bit of
595 * the version number is set then an update is in progress and the guest
596 * must wait to read a consistent set of values.
597 * The correct way to interact with the version number is similar to
598 * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
599 */
600 uint32_t version;
601 uint32_t pad0;
602 uint64_t tsc_timestamp; /* TSC at last update of time vals. */
603 uint64_t system_time; /* Time, in nanosecs, since boot. */
604 /*
605 * Current system time:
606 * system_time + ((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul
607 * CPU frequency (Hz):
608 * ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
609 */
610 uint32_t tsc_to_system_mul;
611 int8_t tsc_shift;
612 int8_t pad1[3];
613 } vcpu_time_info_t; /* 32 bytes */
614 \end{verbatim}
615 \normalsize
617 \begin{description}
618 \item[version] Used to ensure the guest gets consistent time updates.
619 \item[tsc\_timestamp] Cycle counter timestamp of last time value;
620 could be used to extrapolate between updates, for instance.
621 \item[system\_time] Time since boot (nanoseconds).
622 \item[tsc\_to\_system\_mul] Cycle counter to nanoseconds multiplier
623 (used in extrapolating current time).
624 \item[tsc\_shift] Cycle counter to nanoseconds shift (used in
625 extrapolating current time).
626 \end{description}
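Interpreting {\tt tsc\_to\_system\_mul} as a 32.32 fixed-point multiplier
(which is consistent with the CPU-frequency formula quoted above), a guest can
extrapolate the current system time roughly as in the sketch below; {\tt
rdtsc()} and {\tt rmb()} are placeholders for the guest's own primitives.
\scriptsize
\begin{verbatim}
/* Sketch: extrapolate current system time (ns since boot) from a
 * vcpu_time_info_t snapshot. NB: a production implementation would use a
 * full 64x32-bit multiply to avoid overflow for very large TSC deltas. */
uint64_t extrapolate_system_time(vcpu_time_info_t *t)
{
    uint32_t version, mul;
    uint64_t delta, system_time;
    int8_t shift;

    do {
        version     = t->version;          /* odd => update in progress */
        rmb();
        delta       = rdtsc() - t->tsc_timestamp;
        system_time = t->system_time;
        mul         = t->tsc_to_system_mul;
        shift       = t->tsc_shift;
        rmb();
    } while ((version & 1) || (version != t->version));

    if (shift >= 0)
        delta <<= shift;
    else
        delta >>= -shift;

    return system_time + ((delta * mul) >> 32);
}
\end{verbatim}
\normalsize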
628 \subsection{arch\_shared\_info\_t}
630 On x86, the {\bf arch\_shared\_info\_t} is defined as follows (from
631 xen/include/public/arch-x86\_32.h):
633 \scriptsize
634 \begin{verbatim}
635 typedef struct arch_shared_info {
636 unsigned long max_pfn; /* max pfn that appears in table */
637 /* Frame containing list of mfns containing list of mfns containing p2m. */
638 unsigned long pfn_to_mfn_frame_list_list;
639 } arch_shared_info_t;
640 \end{verbatim}
641 \normalsize
643 \begin{description}
644 \item[max\_pfn] The maximum PFN listed in the physical-to-machine
645 mapping table (P2M table).
646 \item[pfn\_to\_mfn\_frame\_list\_list] Machine frame number of the frame
647 holding the list of frames that, in turn, contain the machine frame numbers of the P2M table frames.
648 \end{description}
650 \section{Start info page}
652 The start info structure is declared as the following (in {\bf
653 xen/include/public/xen.h}):
655 \scriptsize
656 \begin{verbatim}
657 #define MAX_GUEST_CMDLINE 1024
658 typedef struct start_info {
659 /* THE FOLLOWING ARE FILLED IN BOTH ON INITIAL BOOT AND ON RESUME. */
660 char magic[32]; /* "Xen-<version>.<subversion>". */
661 unsigned long nr_pages; /* Total pages allocated to this domain. */
662 unsigned long shared_info; /* MACHINE address of shared info struct. */
663 uint32_t flags; /* SIF_xxx flags. */
664 unsigned long store_mfn; /* MACHINE page number of shared page. */
665 uint32_t store_evtchn; /* Event channel for store communication. */
666 unsigned long console_mfn; /* MACHINE address of console page. */
667 uint32_t console_evtchn; /* Event channel for console messages. */
668 /* THE FOLLOWING ARE ONLY FILLED IN ON INITIAL BOOT (NOT RESUME). */
669 unsigned long pt_base; /* VIRTUAL address of page directory. */
670 unsigned long nr_pt_frames; /* Number of bootstrap p.t. frames. */
671 unsigned long mfn_list; /* VIRTUAL address of page-frame list. */
672 unsigned long mod_start; /* VIRTUAL address of pre-loaded module. */
673 unsigned long mod_len; /* Size (bytes) of pre-loaded module. */
674 int8_t cmd_line[MAX_GUEST_CMDLINE];
675 } start_info_t;
676 \end{verbatim}
677 \normalsize
679 The fields fall into two groups: the first is always filled in
680 when a domain is booted or resumed; the second is only used at
681 boot time.
683 The always-available group is as follows:
685 \begin{description}
686 \item[magic] A text string identifying the Xen version to the guest.
687 \item[nr\_pages] The number of real machine pages available to the
688 guest.
689 \item[shared\_info] Machine address of the shared info structure,
690 allowing the guest to map it during initialisation.
691 \item[flags] Flags for describing optional extra settings to the
692 guest.
693 \item[store\_mfn] Machine address of the Xenstore communications page.
694 \item[store\_evtchn] Event channel to communicate with the store.
695 \item[console\_mfn] Machine address of the console data page.
696 \item[console\_evtchn] Event channel to notify the console backend.
697 \end{description}
699 The boot-only group may only be safely referred to during system boot:
701 \begin{description}
702 \item[pt\_base] Virtual address of the page directory created for us
703 by the domain builder.
704 \item[nr\_pt\_frames] Number of frames used by the builder's bootstrap
705 pagetables.
706 \item[mfn\_list] Virtual address of the list of machine frames this
707 domain owns.
708 \item[mod\_start] Virtual address of any pre-loaded modules
709 (e.g. a ramdisk).
710 \item[mod\_len] Size of pre-loaded module (if any).
711 \item[cmd\_line] Kernel command line passed by the domain builder.
712 \end{description}
715 % by Mark Williamson <mark.williamson@cl.cam.ac.uk>
717 \chapter{Event Channels}
718 \label{c:eventchannels}
720 Event channels are the basic primitive provided by Xen for event
721 notifications. An event is the Xen equivalent of a hardware
722 interrupt. They essentially store one bit of information; the event
723 of interest is signalled by transitioning this bit from 0 to 1.
725 Notifications are received by a guest via an upcall from Xen,
726 indicating when an event arrives (setting the bit). Further
727 notifications are masked until the bit is cleared again (therefore,
728 guests must check the value of the bit after re-enabling event
729 delivery to ensure no missed notifications).
731 Event notifications can be masked by setting a flag; this is
732 equivalent to disabling interrupts and can be used to ensure atomicity
733 of certain operations in the guest kernel.
735 \section{Hypercall interface}
737 \hypercall{event\_channel\_op(evtchn\_op\_t *op)}
739 The event channel operation hypercall is used for all operations on
740 event channels / ports. Operations are distinguished by the value of
741 the {\bf cmd} field of the {\bf op} structure. The possible commands
742 are described below:
744 \begin{description}
746 \item[EVTCHNOP\_alloc\_unbound]
747 Allocate a new event channel port, ready to be connected to by a
748 remote domain.
749 \begin{itemize}
750 \item Specified domain must exist.
751 \item A free port must exist in that domain.
752 \end{itemize}
753 Unprivileged domains may only allocate their own ports, privileged
754 domains may also allocate ports in other domains.
755 \item[EVTCHNOP\_bind\_interdomain]
756 Bind an event channel for interdomain communications.
757 \begin{itemize}
758 \item Caller domain must have a free port to bind.
759 \item Remote domain must exist.
760 \item Remote port must be allocated and currently unbound.
761 \item Remote port must be expecting the caller domain as the ``remote''.
762 \end{itemize}
763 \item[EVTCHNOP\_bind\_virq]
764 Allocate a port and bind a VIRQ to it.
765 \begin{itemize}
766 \item Caller domain must have a free port to bind.
767 \item VIRQ must be valid.
768 \item VCPU must exist.
769 \item VIRQ must not currently be bound to an event channel.
770 \end{itemize}
771 \item[EVTCHNOP\_bind\_ipi]
772 Allocate and bind a port for notifying other virtual CPUs.
773 \begin{itemize}
774 \item Caller domain must have a free port to bind.
775 \item VCPU must exist.
776 \end{itemize}
777 \item[EVTCHNOP\_bind\_pirq]
778 Allocate and bind a port to a real IRQ.
779 \begin{itemize}
780 \item Caller domain must have a free port to bind.
781 \item PIRQ must be within the valid range.
782 \item Another binding for this PIRQ must not exist for this domain.
783 \item Caller must have an available port.
784 \end{itemize}
785 \item[EVTCHNOP\_close]
786 Close an event channel (no more events will be received).
787 \begin{itemize}
788 \item Port must be valid (currently allocated).
789 \end{itemize}
790 \item[EVTCHNOP\_send] Send a notification on an event channel attached
791 to a port.
792 \begin{itemize}
793 \item Port must be valid.
794 \item Only valid for interdomain, IPI, or allocated-unbound ports.
795 \end{itemize}
796 \item[EVTCHNOP\_status] Query the status of a port; what kind of port,
797 whether it is bound, what remote domain is expected, what PIRQ or
798 VIRQ it is bound to, what VCPU will be notified, etc.
799 Unprivileged domains may only query the state of their own ports.
800 Privileged domains may query any port.
801 \item[EVTCHNOP\_bind\_vcpu] Bind an event channel to a particular VCPU, so
802 that notification upcalls are received only on that VCPU.
803 \begin{itemize}
804 \item VCPU must exist.
805 \item Port must be valid.
806 \item Event channel must be one of: allocated but unbound, bound to
807 an interdomain event channel, or bound to a PIRQ.
808 \end{itemize}
810 \end{description}
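As an illustration, a domain might allocate an unbound port for a peer to bind
to, and later notify an already-bound local port, roughly as sketched below.
The union layout follows the public event-channel header of this era; {\bf
HYPERVISOR\_event\_channel\_op} is assumed to be the guest-side stub taking a
pointer to the operation structure, with the command in its {\bf cmd} field as
described above.
\scriptsize
\begin{verbatim}
/* Sketch: allocate an unbound event-channel port for a peer domain, and
 * send a notification on a bound local port. */
int alloc_unbound_port(domid_t remote_domain, evtchn_port_t *port_out)
{
    evtchn_op_t op;
    int rc;

    op.cmd                        = EVTCHNOP_alloc_unbound;
    op.u.alloc_unbound.dom        = DOMID_SELF;      /* our own port table */
    op.u.alloc_unbound.remote_dom = remote_domain;   /* expected peer */

    rc = HYPERVISOR_event_channel_op(&op);
    if (rc == 0)
        *port_out = op.u.alloc_unbound.port;         /* filled in by Xen */
    return rc;
}

void notify_port(evtchn_port_t port)
{
    evtchn_op_t op;
    op.cmd         = EVTCHNOP_send;
    op.u.send.port = port;
    HYPERVISOR_event_channel_op(&op);
}
\end{verbatim}
\normalsize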
812 %%
813 %% grant_tables.tex
814 %%
815 %% Made by Mark Williamson
816 %% Login <mark@maw48>
817 %%
819 \chapter{Grant tables}
820 \label{c:granttables}
822 Xen's grant tables provide a generic mechanism for memory sharing
823 between domains. This shared memory interface underpins the split
824 device drivers for block and network IO.
826 Each domain has its own {\bf grant table}. This is a data structure
827 that is shared with Xen; it allows the domain to tell Xen what kind of
828 permissions other domains have on its pages. Entries in the grant
829 table are identified by {\bf grant references}. A grant reference is
830 an integer, which indexes into the grant table. It acts as a
831 capability which the grantee can use to perform operations on the
832 granter's memory.
834 This capability-based system allows shared-memory communications
835 between unprivileged domains. A grant reference also encapsulates the
836 details of a shared page, removing the need for a domain to know the
837 real machine address of a page it is sharing. This makes it possible
838 to share memory correctly with domains running in fully virtualised
839 memory.
841 \section{Interface}
843 \subsection{Grant table manipulation}
845 Creating and destroying grant references is done by direct access to
846 the grant table. This removes the need to involve Xen when creating
847 grant references, modifying access permissions, etc. The grantee
848 domain will invoke hypercalls to use the grant references. Four main
849 operations can be accomplished by directly manipulating the table:
851 \begin{description}
852 \item[Grant foreign access] allocate a new entry in the grant table
853 and fill out the access permissions accordingly. The access
854 permissions will be looked up by Xen when the grantee attempts to
855 use the reference to map the granted frame.
856 \item[End foreign access] check that the grant reference is not
857 currently in use, then remove the mapping permissions for the frame.
858 This prevents further mappings from taking place but does not allow
859 forced revocations of existing mappings.
860 \item[Grant foreign transfer] allocate a new entry in the table
861 specifying transfer permissions for the grantee. Xen will look up
862 this entry when the grantee attempts to transfer a frame to the
863 granter.
864 \item[End foreign transfer] remove permissions to prevent a transfer
865 occurring in future. If the transfer is already committed,
866 modifying the grant table cannot prevent it from completing.
867 \end{description}
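As a minimal sketch of the first of these operations, granting a peer access
to one of our frames amounts to filling in a grant-table entry and then
setting its flags. The grant table ({\tt gnttab} below) is assumed to have
been mapped previously (see the {\bf GNTTABOP\_setup\_table} operation in the
next section), and {\tt wmb()} stands in for the guest's write memory barrier.
\scriptsize
\begin{verbatim}
/* Sketch: grant another domain access to one of our frames by writing a
 * grant-table entry directly. */
void grant_foreign_access(grant_entry_t *gnttab, grant_ref_t ref,
                          domid_t grantee, unsigned long mfn, int readonly)
{
    gnttab[ref].frame = mfn;        /* machine frame being shared */
    gnttab[ref].domid = grantee;    /* domain allowed to map it */
    wmb();                          /* entry must be valid before flags */
    gnttab[ref].flags = GTF_permit_access | (readonly ? GTF_readonly : 0);
}
\end{verbatim}
\normalsize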
869 \subsection{Hypercalls}
871 Use of grant references is accomplished via a hypercall. The grant
872 table op hypercall takes three arguments:
874 \hypercall{grant\_table\_op(unsigned int cmd, void *uop, unsigned int count)}
876 {\bf cmd} indicates the grant table operation of interest. {\bf uop}
877 is a pointer to a structure (or an array of structures) describing the
878 operation to be performed. The {\bf count} field describes how many
879 grant table operations are being batched together.
881 The core logic is situated in {\bf xen/common/grant\_table.c}. The
882 grant table operation hypercall can be used to perform the following
883 actions:
885 \begin{description}
886 \item[GNTTABOP\_map\_grant\_ref] Given a grant reference from another
887 domain, map the referenced page into the caller's address space.
888 \item[GNTTABOP\_unmap\_grant\_ref] Remove a mapping to a granted frame
889 from the caller's address space. This is used to voluntarily
890 relinquish a mapping to a granted page.
891 \item[GNTTABOP\_setup\_table] Set up the grant table for the calling domain.
892 \item[GNTTABOP\_dump\_table] Debugging operation.
893 \item[GNTTABOP\_transfer] Given a transfer reference from another
894 domain, transfer ownership of a page frame to that domain.
895 \end{description}
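For example, a backend that has been handed a grant reference by a frontend
might map the referenced frame into its own address space roughly as follows;
the structure layout and flag names follow the public grant-table header,
while {\bf HYPERVISOR\_grant\_table\_op} is assumed to be the guest-side stub
for the hypercall above.
\scriptsize
\begin{verbatim}
/* Sketch: map a granted frame from domain 'granter' at virtual address
 * 'vaddr' in the caller's address space; returns 0 and the grant handle
 * (needed for a later unmap) on success. */
int map_granted_frame(domid_t granter, grant_ref_t ref, unsigned long vaddr,
                      grant_handle_t *handle_out)
{
    gnttab_map_grant_ref_t map;

    map.host_addr = vaddr;             /* where to create the mapping */
    map.flags     = GNTMAP_host_map;   /* ordinary host mapping */
    map.ref       = ref;               /* reference supplied by the granter */
    map.dom       = granter;

    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &map, 1) != 0 ||
        map.status != GNTST_okay)
        return -1;

    *handle_out = map.handle;
    return 0;
}
\end{verbatim}
\normalsize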
897 %%
898 %% xenstore.tex
899 %%
900 %% Made by Mark Williamson
901 %% Login <mark@maw48>
902 %%
904 \chapter{Xenstore}
906 Xenstore is the mechanism by which control-plane activities occur.
907 These activities include:
909 \begin{itemize}
910 \item Setting up shared memory regions and event channels for use with
911 the split device drivers.
912 \item Notifying the guest of control events (e.g. balloon driver
913 requests).
914 \item Reporting back status information from the guest
915 (e.g. performance-related statistics).
916 \end{itemize}
918 The store is arranged as a hierarchical collection of key-value pairs.
919 Each domain has a directory hierarchy containing data related to its
920 configuration. Domains are permitted to register for notifications
921 about changes in subtrees of the store, and to apply changes to the
922 store transactionally.
924 \section{Guidelines}
926 A few principles govern the operation of the store:
928 \begin{itemize}
929 \item Domains should only modify the contents of their own
930 directories.
931 \item The setup protocol for a device channel should simply consist of
932 entering the configuration data into the store.
933 \item The store should allow device discovery without requiring the
934 relevant device drivers to be loaded: a Xen ``bus'' should be
935 visible to probing code in the guest.
936 \item The store should be usable for inter-tool communications,
937 allowing the tools themselves to be decomposed into a number of
938 smaller utilities, rather than a single monolithic entity. This
939 also facilitates the development of alternate user interfaces to the
940 same functionality.
941 \end{itemize}
943 \section{Store layout}
945 There are three main paths in XenStore:
947 \begin{description}
948 \item[/vm] stores configuration information about a domain
949 \item[/local/domain] stores information about the domain on the local node (domid, etc.)
950 \item[/tool] stores information for the various tools
951 \end{description}
953 The {\bf /vm} path stores configuration information for a domain.
954 This information doesn't change and is indexed by the domain's UUID.
955 A {\bf /vm} entry contains the following information:
957 \begin{description}
958 \item[ssidref] ssid reference for domain
959 \item[uuid] uuid of the domain (somewhat redundant)
960 \item[on\_reboot] the action to take on a domain reboot request (destroy or restart)
961 \item[on\_poweroff] the action to take on a domain halt request (destroy or restart)
962 \item[on\_crash] the action to take on a domain crash (destroy or restart)
963 \item[vcpus] the number of allocated vcpus for the domain
964 \item[memory] the amount of memory (in megabytes) for the domain. Note: appears to sometimes be empty for domain-0
965 \item[vcpu\_avail] the number of active vcpus for the domain (vcpus - number of disabled vcpus)
966 \item[name] the name of the domain
967 \end{description}
970 {\bf /vm/$<$uuid$>$/image/}
972 The image path is only available for Domain-Us and contains:
973 \begin{description}
974 \item[ostype] identifies the builder type (linux or vmx)
975 \item[kernel] path to kernel on domain-0
976 \item[cmdline] command line to pass to domain-U kernel
977 \item[ramdisk] path to ramdisk on domain-0
978 \end{description}
980 {\bf /local}
982 The {\tt /local} path currently only contains one directory, {\tt
983 /local/domain} that is indexed by domain id. It contains the running
984 domain information. The reason to have two storage areas is that
985 during migration, the uuid doesn't change but the domain id does. The
986 {\tt /local/domain} directory can be created and populated before
987 finalizing the migration, enabling localhost-to-localhost migration.
989 {\bf /local/domain/$<$domid$>$}
991 This path contains:
993 \begin{description}
994 \item[cpu\_time] xend start time (this is only around for domain-0)
995 \item[handle] private handle for xend
996 \item[name] see /vm
997 \item[on\_reboot] see /vm
998 \item[on\_poweroff] see /vm
999 \item[on\_crash] see /vm
1000 \item[vm] the path to the VM directory for the domain
1001 \item[domid] the domain id (somewhat redundant)
1002 \item[running] indicates that the domain is currently running
1003 \item[memory] the current memory in megabytes for the domain (empty for domain-0?)
1004 \item[maxmem\_KiB] the maximum memory for the domain (in kilobytes)
1005 \item[memory\_KiB] the memory allocated to the domain (in kilobytes)
1006 \item[cpu] the current CPU the domain is pinned to (empty for domain-0?)
1007 \item[cpu\_weight] the weight assigned to the domain
1008 \item[vcpu\_avail] a bitmap telling the domain whether it may use a given VCPU
1009 \item[online\_vcpus] how many vcpus are currently online
1010 \item[vcpus] the total number of vcpus allocated to the domain
1011 \item[console/] a directory for console information
1012 \begin{description}
1013 \item[ring-ref] the grant table reference of the console ring queue
1014 \item[port] the event channel being used for the console ring queue (local port)
1015 \item[tty] the current tty on which the console data is exposed
1016 \item[limit] the limit (in bytes) of console data to buffer
1017 \end{description}
1018 \item[backend/] a directory containing all backends the domain hosts
1019 \begin{description}
1020 \item[vbd/] a directory containing vbd backends
1021 \begin{description}
1022 \item[$<$domid$>$/] a directory containing vbd's for domid
1023 \begin{description}
1024 \item[$<$virtual-device$>$/] a directory for a particular
1025 virtual-device on domid
1026 \begin{description}
1027 \item[frontend-id] domain id of frontend
1028 \item[frontend] the path to the frontend domain
1029 \item[physical-device] backend device number
1030 \item[sector-size] backend sector size
1031 \item[info] 0 read/write, 1 read-only (is this right?)
1032 \item[domain] name of frontend domain
1033 \item[params] parameters for device
1034 \item[type] the type of the device
1035 \item[dev] the virtual device (as given by the user)
1036 \item[node] output from block creation script
1037 \end{description}
1038 \end{description}
1039 \end{description}
1041 \item[vif/] a directory containing vif backends
1042 \begin{description}
1043 \item[$<$domid$>$/] a directory containing vif's for domid
1044 \begin{description}
1045 \item[$<$vif number$>$/] a directory for each vif
1046 \item[frontend-id] the domain id of the frontend
1047 \item[frontend] the path to the frontend
1048 \item[mac] the mac address of the vif
1049 \item[bridge] the bridge the vif is connected to
1050 \item[handle] the handle of the vif
1051 \item[script] the script used to create/stop the vif
1052 \item[domain] the name of the frontend
1053 \end{description}
1054 \end{description}
1056 \item[vtpm/] a directory containing vtpm backends
1057 \begin{description}
1058 \item[$<$domid$>$/] a directory containing vtpm's for domid
1059 \begin{description}
1060 \item[$<$vtpm number$>$/] a directory for each vtpm
1061 \item[frontend-id] the domain id of the frontend
1062 \item[frontend] the path to the frontend
1063 \item[instance] the instance of the virtual TPM that is used
1064 \item[pref{\textunderscore}instance] the instance number as given in the VM configuration file;
1065 may be different from {\bf instance}
1066 \item[domain] the name of the domain of the frontend
1067 \end{description}
1068 \end{description}
1070 \end{description}
1072 \item[device/] a directory containing the frontend devices for the
1073 domain
1074 \begin{description}
1075 \item[vbd/] a directory containing vbd frontend devices for the
1076 domain
1077 \begin{description}
1078 \item[$<$virtual-device$>$/] a directory containing the vbd frontend for
1079 virtual-device
1080 \begin{description}
1081 \item[virtual-device] the device number of the frontend device
1082 \item[backend-id] the domain id of the backend
1083 \item[backend] the path of the backend in the store (/local/domain
1084 path)
1085 \item[ring-ref] the grant table reference for the block request
1086 ring queue
1087 \item[event-channel] the event channel used for the block request
1088 ring queue
1089 \end{description}
1091 \item[vif/] a directory containing vif frontend devices for the
1092 domain
1093 \begin{description}
1094 \item[$<$id$>$/] a directory for vif id frontend device for the domain
1095 \begin{description}
1096 \item[backend-id] the backend domain id
1097 \item[mac] the mac address of the vif
1098 \item[handle] the internal vif handle
1099 \item[backend] a path to the backend's store entry
1100 \item[tx-ring-ref] the grant table reference for the transmission ring queue
1101 \item[rx-ring-ref] the grant table reference for the receiving ring queue
1102 \item[event-channel] the event channel used for the two ring queues
1103 \end{description}
1104 \end{description}
1106 \item[vtpm/] a directory containing the vtpm frontend device for the
1107 domain
1108 \begin{description}
1109 \item[$<$id$>$] a directory for vtpm id frontend device for the domain
1110 \begin{description}
1111 \item[backend-id] the backend domain id
1112 \item[backend] a path to the backend's store entry
1113 \item[ring-ref] the grant table reference for the tx/rx ring
1114 \item[event-channel] the event channel used for the ring
1115 \end{description}
1116 \end{description}
1118 \item[device-misc/] miscellaneous information for devices
1119 \begin{description}
1120 \item[vif/] miscellaneous information for vif devices
1121 \begin{description}
1122 \item[nextDeviceID] the next device id to use
1123 \end{description}
1124 \end{description}
1125 \end{description}
1126 \end{description}
1128 \item[store/] per-domain information for the store
1129 \begin{description}
1130 \item[port] the event channel used for the store ring queue
1131 \item[ring-ref] the grant table reference used for the store's
1132 communication channel
1133 \end{description}
1135 \item[image] private xend information
1136 \end{description}
1139 \chapter{Devices}
1140 \label{c:devices}
1142 Virtual devices under Xen are provided by a {\bf split device driver}
1143 architecture. The illusion of the virtual device is provided by two
1144 co-operating drivers: the {\bf frontend}, which runs in the
1145 unprivileged domain and the {\bf backend}, which runs in a domain with
1146 access to the real device hardware (often called a {\bf driver
1147 domain}; in practice domain 0 usually fulfills this function).
1149 The frontend driver appears to the unprivileged guest as if it were a
1150 real device, for instance a block or network device. It receives IO
1151 requests from its kernel as usual; however, since it does not have
1152 access to the physical hardware of the system, it must then issue
1153 requests to the backend. The backend driver is responsible for
1154 receiving these IO requests, verifying that they are safe and then
1155 issuing them to the real device hardware. The backend driver appears
1156 to its kernel as a normal user of in-kernel IO functionality. When
1157 the IO completes the backend notifies the frontend that the data is
1158 ready for use; the frontend is then able to report IO completion to
1159 its own kernel.
1161 Frontend drivers are designed to be simple; most of the complexity is
1162 in the backend, which has responsibility for translating device
1163 addresses, verifying that requests are well-formed and do not violate
1164 isolation guarantees, etc.
1166 Split drivers exchange requests and responses in shared memory, with
1167 an event channel for asynchronous notifications of activity. When the
1168 frontend driver comes up, it uses Xenstore to set up a shared memory
1169 frame and an interdomain event channel for communications with the
1170 backend. Once this connection is established, the two can communicate
1171 directly by placing requests / responses into shared memory and then
1172 sending notifications on the event channel. This separation of
1173 notification from data transfer allows message batching, and results
1174 in very efficient device access.
1176 This chapter focuses on some individual split device interfaces
1177 available to Xen guests.
1180 \section{Network I/O}
1182 Virtual network device services are provided by shared memory
1183 communication with a backend domain. From the point of view of other
1184 domains, the backend may be viewed as a virtual ethernet switch
1185 element with each domain having one or more virtual network interfaces
1186 connected to it.
1188 From the point of view of the backend domain itself, the network
1189 backend driver consists of a number of ethernet devices. Each of
1190 these has a logical direct connection to a virtual network device in
1191 another domain. This allows the backend domain to route, bridge,
1192 firewall, etc.\ the traffic to/from the other domains using normal
1193 operating system mechanisms.
1195 \subsection{Backend Packet Handling}
1197 The backend driver is responsible for a variety of actions relating to
1198 the transmission and reception of packets from the physical device.
1199 With regard to transmission, the backend performs these key actions:
1201 \begin{itemize}
1202 \item {\bf Validation:} To ensure that domains do not attempt to
1203 generate invalid (e.g. spoofed) traffic, the backend driver may
1204 validate headers ensuring that source MAC and IP addresses match the
1205 interface that they have been sent from.
1207 Validation functions can be configured using standard firewall rules
1208 ({\small{\tt iptables}} in the case of Linux).
1210 \item {\bf Scheduling:} Since a number of domains can share a single
1211 physical network interface, the backend must mediate access when
1212 several domains each have packets queued for transmission. This
1213 general scheduling function subsumes basic shaping or rate-limiting
1214 schemes.
1216 \item {\bf Logging and Accounting:} The backend domain can be
1217 configured with classifier rules that control how packets are
1218 accounted or logged. For example, log messages might be generated
1219 whenever a domain attempts to send a TCP packet containing a SYN.
1220 \end{itemize}
1222 On receipt of incoming packets, the backend acts as a simple
1223 demultiplexer: Packets are passed to the appropriate virtual interface
1224 after any necessary logging and accounting have been carried out.
1226 \subsection{Data Transfer}
1228 Each virtual interface uses two ``descriptor rings'', one for
1229 transmit, the other for receive. Each descriptor identifies a block
1230 of contiguous machine memory allocated to the domain.
1232 The transmit ring carries packets to transmit from the guest to the
1233 backend domain. The return path of the transmit ring carries messages
1234 indicating that the contents have been physically transmitted and the
1235 backend no longer requires the associated pages of memory.
1237 To receive packets, the guest places descriptors of unused pages on
1238 the receive ring. The backend will return received packets by
1239 exchanging these pages in the domain's memory with new pages
1240 containing the received data, and passing back descriptors regarding
1241 the new packets on the ring. This zero-copy approach allows the
1242 backend to maintain a pool of free pages to receive packets into, and
1243 then deliver them to appropriate domains after examining their
1244 headers.
1246 % Real physical addresses are used throughout, with the domain
1247 % performing translation from pseudo-physical addresses if that is
1248 % necessary.
1250 If a domain does not keep its receive ring stocked with empty buffers
1251 then packets destined to it may be dropped. This provides some
1252 defence against receive livelock problems because an overloaded domain
1253 will cease to receive further data. Similarly, on the transmit path,
1254 it provides the application with feedback on the rate at which packets
1255 are able to leave the system.
1257 Flow control on rings is achieved by including a pair of producer
1258 indexes on the shared ring page. Each side will maintain a private
1259 consumer index indicating the next outstanding message. In this
1260 manner, the domains cooperate to divide the ring into two message
1261 lists, one in each direction. Notification is decoupled from the
1262 immediate placement of new messages on the ring; the event channel
1263 will be used to generate notification when {\em either} a certain
1264 number of outstanding messages are queued, {\em or} a specified number
1265 of nanoseconds have elapsed since the oldest message was placed on the
1266 ring.
1268 %% Not sure if my version is any better -- here is what was here
1269 %% before: Synchronization between the backend domain and the guest is
1270 %% achieved using counters held in shared memory that is accessible to
1271 %% both. Each ring has associated producer and consumer indices
1272 %% indicating the area in the ring that holds descriptors that contain
1273 %% data. After receiving {\it n} packets or {\t nanoseconds} after
1274 %% receiving the first packet, the hypervisor sends an event to the
1275 %% domain.
1278 \subsection{Network ring interface}
1280 The network device uses two shared memory rings for communication: one
1281 for transmit, one for receive.
1283 Transmit requests are described by the following structure:
1285 \scriptsize
1286 \begin{verbatim}
1287 typedef struct netif_tx_request {
1288 grant_ref_t gref; /* Reference to buffer page */
1289 uint16_t offset; /* Offset within buffer page */
1290 uint16_t flags; /* NETTXF_* */
1291 uint16_t id; /* Echoed in response message. */
1292 uint16_t size; /* Packet size in bytes. */
1293 } netif_tx_request_t;
1294 \end{verbatim}
1295 \normalsize
1297 \begin{description}
1298 \item[gref] Grant reference for the network buffer
1299 \item[offset] Offset to data
1300 \item[flags] Transmit flags (currently only NETTXF\_csum\_blank is
1301 supported, to indicate that the protocol checksum field is
1302 incomplete).
1303 \item[id] Echoed to guest by the backend in the ring-level response so
1304 that the guest can match it to this request
\item[size] Packet size in bytes
1306 \end{description}
Each transmit request is followed, at some later time, by a transmit
response. This is part of the shared-memory communication protocol and
1310 allows the guest to (potentially) retire internal structures related
1311 to the request. It does not imply a network-level response. This
1312 structure is as follows:
1314 \scriptsize
1315 \begin{verbatim}
1316 typedef struct netif_tx_response {
1317 uint16_t id;
1318 int16_t status;
1319 } netif_tx_response_t;
1320 \end{verbatim}
1321 \normalsize
1323 \begin{description}
1324 \item[id] Echo of the ID field in the corresponding transmit request.
1325 \item[status] Success / failure status of the transmit request.
1326 \end{description}
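
As an illustration, a frontend might fill in a transmit request as sketched
below. Only the {\tt netif\_tx\_request\_t} layout above is taken from the
interface; {\tt grant\_access()}, {\tt tx\_ring\_put()} and
{\tt BACKEND\_DOMID} are hypothetical helpers standing in for the real
grant-table and ring plumbing.

\scriptsize
\begin{verbatim}
/* Sketch: queue one packet for transmission (single-page packets only). */
static uint16_t next_tx_id;

static void netfront_start_xmit(void *packet_page, unsigned int offset,
                                unsigned int len)
{
    netif_tx_request_t req;

    /* Grant the backend read access to the page holding the packet data;
     * grant_access() is a hypothetical wrapper around the grant-table API. */
    req.gref   = grant_access(BACKEND_DOMID, packet_page, 1 /* read-only */);
    req.offset = offset;               /* where the frame starts in the page */
    req.flags  = 0;                    /* or NETTXF_csum_blank               */
    req.id     = next_tx_id++;         /* echoed back in netif_tx_response_t */
    req.size   = len;                  /* total packet size in bytes         */

    tx_ring_put(&req);                 /* hypothetical: place on tx ring     */
}

/* Later, when a response with a matching 'id' appears on the ring, the
 * frontend may revoke the grant and free or reuse the packet page. */
\end{verbatim}
\normalsize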
1328 Receive requests must be queued by the frontend, accompanied by a
1329 donation of page-frames to the backend. The backend transfers page
frames full of data back to the guest:
1332 \scriptsize
1333 \begin{verbatim}
1334 typedef struct {
1335 uint16_t id; /* Echoed in response message. */
1336 grant_ref_t gref; /* Reference to incoming granted frame */
1337 } netif_rx_request_t;
1338 \end{verbatim}
1339 \normalsize
1341 \begin{description}
\item[id] Echoed by the backend in its response so that the frontend can
  identify this request.
1344 \item[gref] Transfer reference - the backend will use this reference
1345 to transfer a frame of network data to us.
1346 \end{description}
1348 Receive response descriptors are queued for each received frame. Note
1349 that these may only be queued in reply to an existing receive request,
1350 providing an in-built form of traffic throttling.
1352 \scriptsize
1353 \begin{verbatim}
1354 typedef struct {
1355 uint16_t id;
1356 uint16_t offset; /* Offset in page of start of received packet */
1357 uint16_t flags; /* NETRXF_* */
1358 int16_t status; /* -ve: BLKIF_RSP_* ; +ve: Rx'ed pkt size. */
1359 } netif_rx_response_t;
1360 \end{verbatim}
1361 \normalsize
1363 \begin{description}
1364 \item[id] ID echoed from the original request, used by the guest to
1365 match this response to the original request.
1366 \item[offset] Offset to data within the transferred frame.
\item[flags] Receive flags (currently only NETRXF\_csum\_valid is
1368 supported, to indicate that the protocol checksum field has already
1369 been validated).
1370 \item[status] Success / error status for this operation.
1371 \end{description}
1373 Note that the receive protocol includes a mechanism for guests to
1374 receive incoming memory frames but there is no explicit transfer of
1375 frames in the other direction. Guests are expected to return memory
1376 to the hypervisor in order to use the network interface. They {\em
1377 must} do this or they will exceed their maximum memory reservation and
1378 will not be able to receive incoming frame transfers. When necessary,
1379 the backend is able to replenish its pool of free network buffers by
1380 claiming some of this free memory from the hypervisor.
1382 \section{Block I/O}
1384 All guest OS disk access goes through the virtual block device VBD
1385 interface. This interface allows domains access to portions of block
storage devices visible to the block backend device. The VBD
1387 interface is a split driver, similar to the network interface
1388 described above. A single shared memory ring is used between the
1389 frontend and backend drivers for each virtual device, across which
1390 IO requests and responses are sent.
1392 Any block device accessible to the backend domain, including
network-based block storage (iSCSI, *NBD, etc.), loopback and LVM/MD devices,
1394 can be exported as a VBD. Each VBD is mapped to a device node in the
1395 guest, specified in the guest's startup configuration.
1397 \subsection{Data Transfer}
1399 The per-(virtual)-device ring between the guest and the block backend
1400 supports two messages:
1402 \begin{description}
1403 \item [{\small {\tt READ}}:] Read data from the specified block
1404 device. The front end identifies the device and location to read
1405 from and attaches pages for the data to be copied to (typically via
1406 DMA from the device). The backend acknowledges completed read
1407 requests as they finish.
1409 \item [{\small {\tt WRITE}}:] Write data to the specified block
1410 device. This functions essentially as {\small {\tt READ}}, except
1411 that the data moves to the device instead of from it.
1412 \end{description}
1414 %% Rather than copying data, the backend simply maps the domain's
1415 %% buffers in order to enable direct DMA to them. The act of mapping
1416 %% the buffers also increases the reference counts of the underlying
1417 %% pages, so that the unprivileged domain cannot try to return them to
1418 %% the hypervisor, install them as page tables, or any other unsafe
1419 %% behaviour.
1420 %%
1421 %% % block API here
1423 \subsection{Block ring interface}
1425 The block interface is defined by the structures passed over the
1426 shared memory interface. These structures are either requests (from
1427 the frontend to the backend) or responses (from the backend to the
1428 frontend).
1430 The request structure is defined as follows:
1432 \scriptsize
1433 \begin{verbatim}
1434 typedef struct blkif_request {
1435 uint8_t operation; /* BLKIF_OP_??? */
1436 uint8_t nr_segments; /* number of segments */
1437 blkif_vdev_t handle; /* only for read/write requests */
1438 uint64_t id; /* private guest value, echoed in resp */
1439 blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
1440 struct blkif_request_segment {
1441 grant_ref_t gref; /* reference to I/O buffer frame */
1442 /* @first_sect: first sector in frame to transfer (inclusive). */
1443 /* @last_sect: last sector in frame to transfer (inclusive). */
1444 uint8_t first_sect, last_sect;
1445 } seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
1446 } blkif_request_t;
1447 \end{verbatim}
1448 \normalsize
1450 The fields are as follows:
1452 \begin{description}
1453 \item[operation] operation ID: one of the operations described above
1454 \item[nr\_segments] number of segments for scatter / gather IO
1455 described by this request
1456 \item[handle] identifier for a particular virtual device on this
1457 interface
1458 \item[id] this value is echoed in the response message for this IO;
1459 the guest may use it to identify the original request
\item[sector\_number] start sector on the virtual device for this
1461 request
\item[seg] This array contains structures encoding
1463 scatter-gather IO to be performed:
1464 \begin{description}
1465 \item[gref] The grant reference for the foreign I/O buffer page.
1466 \item[first\_sect] First sector to access within the buffer page (0 to 7).
1467 \item[last\_sect] Last sector to access within the buffer page (0 to 7).
1468 \end{description}
1469 Data will be transferred into frames at an offset determined by the
1470 value of {\tt first\_sect}.
1471 \end{description}
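
For illustration, a single-segment read request might be assembled as
sketched below. Only the {\tt blkif\_request\_t} layout above is taken from
the interface; {\tt grant\_access()}, {\tt blk\_ring\_put()} and
{\tt BACKEND\_DOMID} are hypothetical helpers.

\scriptsize
\begin{verbatim}
/* Sketch: read 8 sectors (one 4kB page) starting at 'sector' into 'page'. */
static void blkfront_read_page(blkif_vdev_t handle, blkif_sector_t sector,
                               void *page, uint64_t id)
{
    blkif_request_t req;

    req.operation     = BLKIF_OP_READ;  /* read from the virtual device      */
    req.nr_segments   = 1;              /* one scatter-gather segment        */
    req.handle        = handle;         /* which VBD on this interface       */
    req.id            = id;             /* echoed in the response            */
    req.sector_number = sector;         /* starting sector on the device     */

    /* Grant the backend write access to the destination page; grant_access()
     * is a hypothetical wrapper around the grant-table interface. */
    req.seg[0].gref       = grant_access(BACKEND_DOMID, page, 0 /* writable */);
    req.seg[0].first_sect = 0;          /* fill the page from its start ...  */
    req.seg[0].last_sect  = 7;          /* ... to its last 512-byte sector   */

    blk_ring_put(&req);                 /* hypothetical: place on shared ring */
}
\end{verbatim}
\normalsize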
1473 \section{Virtual TPM}
1475 Virtual TPM (VTPM) support provides TPM functionality to each virtual
1476 machine that requests this functionality in its configuration file.
The interface enables domains to access their own private TPM as if it
were a hardware TPM built into the machine.
1480 The virtual TPM interface is implemented as a split driver,
1481 similar to the network and block interfaces described above.
The user domain hosting the frontend exports a character device
{\tt /dev/tpm0} to user-level applications for communicating with the
virtual TPM. This is the same device interface that is offered if a
hardware TPM is available in the system. The backend provides a single
interface {\tt /dev/vtpm} through which the virtual TPM accepts commands
from all of the domains that have located their backend in that domain.
1489 \subsection{Data Transfer}
1491 A single shared memory ring is used between the frontend and backend
1492 drivers. TPM requests and responses are sent in pages where a pointer
1493 to those pages and other information is placed into the ring such that
1494 the backend can map the pages into its memory space using the grant
1495 table mechanism.
1497 The backend driver has been implemented to only accept well-formed
TPM requests. To meet this requirement, the length indicator in the
1499 TPM request must correctly indicate the length of the request.
1500 Otherwise an error message is automatically sent back by the device driver.
The virtual TPM implementation listens for TPM requests on {\tt /dev/vtpm}. Since
1503 it must be able to apply the TPM request packet to the virtual TPM instance
1504 associated with the virtual machine, a 4-byte virtual TPM instance
1505 identifier is prepended to each packet by the backend driver (in network
1506 byte order) for internal routing of the request.
1508 \subsection{Virtual TPM ring interface}
1510 The TPM protocol is a strict request/response protocol and therefore
1511 only one ring is used to send requests from the frontend to the backend
1512 and responses on the reverse path.
1514 The request/response structure is defined as follows:
1516 \scriptsize
1517 \begin{verbatim}
1518 typedef struct {
1519 unsigned long addr; /* Machine address of packet. */
1520 grant_ref_t ref; /* grant table access reference. */
1521 uint16_t unused; /* unused */
1522 uint16_t size; /* Packet size in bytes. */
1523 } tpmif_tx_request_t;
1524 \end{verbatim}
1525 \normalsize
1527 The fields are as follows:
1529 \begin{description}
\item[addr] The machine address of the page associated with the TPM
1531 request/response; a request/response may span multiple
1532 pages
1533 \item[ref] The grant table reference associated with the address.
1534 \item[size] The size of the remaining packet; up to
1535 PAGE{\textunderscore}SIZE bytes can be found in the
1536 page referenced by 'addr'
1537 \end{description}
1539 The frontend initially allocates several pages whose addresses
1540 are stored in the ring. Only these pages are used for exchange of
1541 requests and responses.
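
A frontend might describe one of these pre-allocated pages to the backend as
sketched below. Only the {\tt tpmif\_tx\_request\_t} layout above is taken
from the interface; {\tt grant\_access()}, {\tt tpm\_ring\_put()} and
{\tt BACKEND\_DOMID} are hypothetical helpers.

\scriptsize
\begin{verbatim}
/* Sketch: describe one request page to the virtual TPM backend. */
static void vtpm_queue_page(void *page, unsigned long machine_addr,
                            unsigned int bytes_in_page)
{
    tpmif_tx_request_t req;

    req.addr   = machine_addr;      /* machine address of this request page  */
    req.ref    = grant_access(BACKEND_DOMID, page, 0); /* hypothetical helper */
    req.unused = 0;
    req.size   = bytes_in_page;     /* bytes of the TPM request in this page */

    tpm_ring_put(&req);             /* hypothetical: place on the shared ring */
}
\end{verbatim}
\normalsize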
1544 \chapter{Further Information}
1546 If you have questions that are not answered by this manual, the
1547 sources of information listed below may be of interest to you. Note
1548 that bug reports, suggestions and contributions related to the
1549 software (or the documentation) should be sent to the Xen developers'
1550 mailing list (address below).
1553 \section{Other documentation}
1555 If you are mainly interested in using (rather than developing for)
1556 Xen, the \emph{Xen Users' Manual} is distributed in the {\tt docs/}
1557 directory of the Xen source distribution.
1559 % Various HOWTOs are also available in {\tt docs/HOWTOS}.
1562 \section{Online references}
1564 The official Xen web site can be found at:
1565 \begin{quote} {\tt http://www.xensource.com}
1566 \end{quote}
1569 This contains links to the latest versions of all online
1570 documentation, including the latest version of the FAQ.
1572 Information regarding Xen is also available at the Xen Wiki at
1573 \begin{quote} {\tt http://wiki.xensource.com/xenwiki/}\end{quote}
1574 The Xen project uses Bugzilla as its bug tracking system. You'll find
the Xen Bugzilla at {\tt http://bugzilla.xensource.com/bugzilla/}.
1578 \section{Mailing lists}
1580 There are several mailing lists that are used to discuss Xen related
1581 topics. The most widely relevant are listed below. An official page of
1582 mailing lists and subscription information can be found at \begin{quote}
1583 {\tt http://lists.xensource.com/} \end{quote}
1585 \begin{description}
1586 \item[xen-devel@lists.xensource.com] Used for development
1587 discussions and bug reports. Subscribe at: \\
1588 {\small {\tt http://lists.xensource.com/xen-devel}}
1589 \item[xen-users@lists.xensource.com] Used for installation and usage
1590 discussions and requests for help. Subscribe at: \\
1591 {\small {\tt http://lists.xensource.com/xen-users}}
1592 \item[xen-announce@lists.xensource.com] Used for announcements only.
1593 Subscribe at: \\
1594 {\small {\tt http://lists.xensource.com/xen-announce}}
1595 \item[xen-changelog@lists.xensource.com] Changelog feed
1596 from the unstable and 2.0 trees - developer oriented. Subscribe at: \\
1597 {\small {\tt http://lists.xensource.com/xen-changelog}}
1598 \end{description}
1600 \appendix
1603 \chapter{Xen Hypercalls}
1604 \label{a:hypercalls}
1606 Hypercalls represent the procedural interface to Xen; this appendix
1607 categorizes and describes the current set of hypercalls.
1609 \section{Invoking Hypercalls}
1611 Hypercalls are invoked in a manner analogous to system calls in a
1612 conventional operating system; a software interrupt is issued which
1613 vectors to an entry point within Xen. On x86/32 machines the
instruction required is {\tt int \$0x82}; the (real) IDT is set up so
1615 that this may only be issued from within ring 1. The particular
1616 hypercall to be invoked is contained in {\tt EAX} --- a list
1617 mapping these values to symbolic hypercall names can be found
1618 in {\tt xen/include/public/xen.h}.
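
A minimal example of issuing a hypercall directly from a guest kernel on
x86/32 is sketched below; in practice guests use the wrapper macros in the
XenLinux sources rather than open-coding the trap. The sketch assumes the
conventional register-argument ABI used by those wrappers (arguments in
{\tt EBX}, {\tt ECX}, \ldots, result in {\tt EAX}).

\scriptsize
\begin{verbatim}
/* Sketch: invoke a two-argument hypercall on x86/32 by trapping to Xen.
 * 'op' is the hypercall number (see xen/include/public/xen.h). */
static inline long hypercall2(unsigned int op, unsigned long a1,
                              unsigned long a2)
{
    long ret;
    asm volatile ( "int $0x82"
                   : "=a" (ret)
                   : "0" (op), "b" (a1), "c" (a2)
                   : "memory" );
    return ret;
}
\end{verbatim}
\normalsize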
1620 On some occasions a set of hypercalls will be required to carry
out a higher-level function; a good example is when a guest operating
system wishes to context switch to a new process, which requires
updating various pieces of privileged CPU state. As an optimization
1624 for these cases, there is a generic mechanism to issue a set of
1625 hypercalls as a batch:
1627 \begin{quote}
1628 \hypercall{multicall(void *call\_list, int nr\_calls)}
1630 Execute a series of hypervisor calls; {\tt nr\_calls} is the length of
the array of {\tt multicall\_entry\_t} structures pointed to by {\tt
1632 call\_list}. Each entry contains the hypercall operation code followed
1633 by up to 7 word-sized arguments.
1634 \end{quote}
1636 Note that multicalls are provided purely as an optimization; there is
1637 no requirement to use them when first porting a guest operating
1638 system.
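
For example, a guest's context-switch path might batch a stack switch and an
FPU flag update into a single trap, roughly as sketched below.
{\tt HYPERVISOR\_multicall()} is assumed to be a XenLinux-style wrapper
around the hypercall above; the authoritative {\tt multicall\_entry\_t}
layout is in {\tt xen/include/public/xen.h}.

\scriptsize
\begin{verbatim}
/* Sketch: batch two hypercalls into one trap using the multicall interface.
 * Assumes multicall_entry_t carries an operation code and an argument
 * array, as declared in xen/include/public/xen.h. */
static void example_context_switch(unsigned long new_ss, unsigned long new_esp)
{
    multicall_entry_t mc[2];

    mc[0].op      = __HYPERVISOR_stack_switch;   /* switch the ring-1 stack  */
    mc[0].args[0] = new_ss;
    mc[0].args[1] = new_esp;

    mc[1].op      = __HYPERVISOR_fpu_taskswitch; /* lazy FPU save/restore    */
    mc[1].args[0] = 1;                           /* set TS in cr0            */

    /* One trap into Xen executes both entries in order. */
    HYPERVISOR_multicall(mc, 2);
}
\end{verbatim}
\normalsize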
1641 \section{Virtual CPU Setup}
At start of day, a guest operating system needs to set up the virtual
1644 CPU it is executing on. This includes installing vectors for the
1645 virtual IDT so that the guest OS can handle interrupts, page faults,
etc. However, the very first thing a guest OS must set up is a pair
1647 of hypervisor callbacks: these are the entry points which Xen will
1648 use when it wishes to notify the guest OS of an occurrence.
1650 \begin{quote}
1651 \hypercall{set\_callbacks(unsigned long event\_selector, unsigned long
1652 event\_address, unsigned long failsafe\_selector, unsigned long
1653 failsafe\_address) }
1655 Register the normal (``event'') and failsafe callbacks for
1656 event processing. In each case the code segment selector and
1657 address within that segment are provided. The selectors must
1658 have RPL 1; in XenLinux we simply use the kernel's CS for both
1659 {\bf event\_selector} and {\bf failsafe\_selector}.
The value {\bf event\_address} specifies the address of the guest OS's
1662 event handling and dispatch routine; the {\bf failsafe\_address}
1663 specifies a separate entry point which is used only if a fault occurs
1664 when Xen attempts to use the normal callback.
1666 \end{quote}
1668 On x86/64 systems the hypercall takes slightly different
1669 arguments. This is because callback CS does not need to be specified
(since the callbacks are entered via SYSRET), and also because an
1671 entry address needs to be specified for SYSCALLs from guest user
1672 space:
1674 \begin{quote}
1675 \hypercall{set\_callbacks(unsigned long event\_address, unsigned long
1676 failsafe\_address, unsigned long syscall\_address)}
1677 \end{quote}
1680 After installing the hypervisor callbacks, the guest OS can
1681 install a `virtual IDT' by using the following hypercall:
1683 \begin{quote}
1684 \hypercall{set\_trap\_table(trap\_info\_t *table)}
1686 Install one or more entries into the per-domain
1687 trap handler table (essentially a software version of the IDT).
1688 Each entry in the array pointed to by {\bf table} includes the
1689 exception vector number with the corresponding segment selector
1690 and entry point. Most guest OSes can use the same handlers on
1691 Xen as when running on the real hardware.
1694 \end{quote}
1696 A further hypercall is provided for the management of virtual CPUs:
1698 \begin{quote}
1699 \hypercall{vcpu\_op(int cmd, int vcpuid, void *extra\_args)}
1701 This hypercall can be used to bootstrap VCPUs, to bring them up and
1702 down and to test their current status.
1704 \end{quote}
1706 \section{Scheduling and Timer}
1708 Domains are preemptively scheduled by Xen according to the
1709 parameters installed by domain 0 (see Section~\ref{s:dom0ops}).
1710 In addition, however, a domain may choose to explicitly
1711 control certain behavior with the following hypercall:
1713 \begin{quote}
1714 \hypercall{sched\_op\_new(int cmd, void *extra\_args)}
1716 Request scheduling operation from hypervisor. The following
1717 sub-commands are available:
1719 \begin{description}
1720 \item[SCHEDOP\_yield] voluntarily yields the CPU, but leaves the
1721 caller marked as runnable. No extra arguments are passed to this
1722 command.
1723 \item[SCHEDOP\_block] removes the calling domain from the run queue
1724 and causes it to sleep until an event is delivered to it. No extra
1725 arguments are passed to this command.
1726 \item[SCHEDOP\_shutdown] is used to end the calling domain's
1727 execution. The extra argument is a {\bf sched\_shutdown} structure
which indicates the reason for the shutdown (e.g., reboot, halt, or
power-off).
1730 \item[SCHEDOP\_poll] allows a VCPU to wait on a set of event channels
1731 with an optional timeout (all of which are specified in the {\bf
1732 sched\_poll} extra argument). The semantics are similar to the UNIX
1733 {\bf poll} system call. The caller must have event-channel upcalls
1734 masked when executing this command.
1735 \end{description}
1736 \end{quote}
1738 {\bf sched\_op\_new} was not available prior to Xen 3.0.2. Older versions
1739 provide only the following hypercall:
1741 \begin{quote}
1742 \hypercall{sched\_op(int cmd, unsigned long extra\_arg)}
1744 This hypercall supports the following subset of {\bf sched\_op\_new} commands:
1746 \begin{description}
1747 \item[SCHEDOP\_yield] (extra argument is 0).
1748 \item[SCHEDOP\_block] (extra argument is 0).
1749 \item[SCHEDOP\_shutdown] (extra argument is numeric reason code).
1750 \end{description}
1751 \end{quote}
1753 To aid the implementation of a process scheduler within a guest OS,
1754 Xen provides a virtual programmable timer:
1756 \begin{quote}
1757 \hypercall{set\_timer\_op(uint64\_t timeout)}
1759 Request a timer event to be sent at the specified system time (time
1760 in nanoseconds since system boot).
1762 \end{quote}
1764 Note that calling {\bf set\_timer\_op} prior to {\bf sched\_op}
1765 allows block-with-timeout semantics.
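
For example, an idle loop can sleep until either an event arrives or a
timeout expires, as sketched below. {\tt HYPERVISOR\_set\_timer\_op()} and
{\tt HYPERVISOR\_sched\_op()} are assumed XenLinux-style wrappers around the
hypercalls above (using the older {\bf sched\_op} form).

\scriptsize
\begin{verbatim}
/* Sketch: block the calling VCPU until an event arrives or until 'ns'
 * nanoseconds of system time have passed (block-with-timeout). */
static void block_with_timeout(uint64_t now, uint64_t ns)
{
    /* Ask for a timer event at absolute system time 'now + ns' ... */
    HYPERVISOR_set_timer_op(now + ns);

    /* ... then remove this VCPU from the run queue.  It is woken by the
     * timer event or by any other event delivered to the domain.  Real
     * code must re-check for pending work before blocking to avoid
     * missing an event that arrived in the meantime. */
    HYPERVISOR_sched_op(SCHEDOP_block, 0);
}
\end{verbatim}
\normalsize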
1768 \section{Page Table Management}
1770 Since guest operating systems have read-only access to their page
1771 tables, Xen must be involved when making any changes. The following
1772 multi-purpose hypercall can be used to modify page-table entries,
1773 update the machine-to-physical mapping table, flush the TLB, install
1774 a new page-table base pointer, and more.
1776 \begin{quote}
1777 \hypercall{mmu\_update(mmu\_update\_t *req, int count, int *success\_count)}
1779 Update the page table for the domain; a set of {\bf count} updates are
1780 submitted for processing in a batch, with {\bf success\_count} being
1781 updated to report the number of successful updates.
1783 Each element of {\bf req[]} contains a pointer (address) and value;
the least significant 2 bits of the pointer are used to distinguish
1785 the type of update requested as follows:
1786 \begin{description}
1788 \item[MMU\_NORMAL\_PT\_UPDATE:] update a page directory entry or
1789 page table entry to the associated value; Xen will check that the
1790 update is safe, as described in Chapter~\ref{c:memory}.
1792 \item[MMU\_MACHPHYS\_UPDATE:] update an entry in the
1793 machine-to-physical table. The calling domain must own the machine
1794 page in question (or be privileged).
1795 \end{description}
1797 \end{quote}
1799 Explicitly updating batches of page table entries is extremely
1800 efficient, but can require a number of alterations to the guest
1801 OS. Using the writable page table mode (Chapter~\ref{c:memory}) is
1802 recommended for new OS ports.
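
A batched update might look roughly as follows. This is a sketch only:
{\tt HYPERVISOR\_mmu\_update()} is assumed to be a wrapper matching the
three-argument form documented above, and the {\tt mmu\_update\_t} layout
(a {\tt ptr}/{\tt val} pair) should be taken from
{\tt xen/include/public/xen.h}.

\scriptsize
\begin{verbatim}
/* Sketch: batch two page-table writes into one mmu_update hypercall.
 * 'pte_ma0'/'pte_ma1' are machine addresses of the entries to be written. */
static int update_two_ptes(uint64_t pte_ma0, uint64_t new_val0,
                           uint64_t pte_ma1, uint64_t new_val1)
{
    mmu_update_t req[2];
    int done = 0;

    /* The low 2 bits of 'ptr' encode the update type; MMU_NORMAL_PT_UPDATE
     * requests an ordinary PTE/PDE write, checked for safety by Xen. */
    req[0].ptr = pte_ma0 | MMU_NORMAL_PT_UPDATE;
    req[0].val = new_val0;
    req[1].ptr = pte_ma1 | MMU_NORMAL_PT_UPDATE;
    req[1].val = new_val1;

    /* One trap into Xen applies both updates; 'done' reports how many
     * were accepted. */
    if (HYPERVISOR_mmu_update(req, 2, &done) != 0 || done != 2)
        return -1;
    return 0;
}
\end{verbatim}
\normalsize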
1804 Regardless of which page table update mode is being used, however,
1805 there are some occasions (notably handling a demand page fault) where
1806 a guest OS will wish to modify exactly one PTE rather than a
1807 batch, and where that PTE is mapped into the current address space.
1808 This is catered for by the following:
1810 \begin{quote}
1811 \hypercall{update\_va\_mapping(unsigned long va, uint64\_t val,
1812 unsigned long flags)}
1814 Update the currently installed PTE that maps virtual address {\bf va}
1815 to new value {\bf val}. As with {\bf mmu\_update}, Xen checks the
1816 modification is safe before applying it. The {\bf flags} determine
1817 which kind of TLB flush, if any, should follow the update.
1819 \end{quote}
1821 Finally, sufficiently privileged domains may occasionally wish to manipulate
1822 the pages of others:
1824 \begin{quote}
\hypercall{update\_va\_mapping\_otherdomain(unsigned long va, uint64\_t val,
unsigned long flags, domid\_t domid)}
1828 Identical to {\bf update\_va\_mapping} save that the pages being
1829 mapped must belong to the domain {\bf domid}.
1831 \end{quote}
1833 An additional MMU hypercall provides an ``extended command''
1834 interface. This provides additional functionality beyond the basic
1835 table updating commands:
1837 \begin{quote}
1839 \hypercall{mmuext\_op(struct mmuext\_op *op, int count, int *success\_count, domid\_t domid)}
1841 This hypercall is used to perform additional MMU operations. These
1842 include updating {\tt cr3} (or just re-installing it for a TLB flush),
1843 requesting various kinds of TLB flush, flushing the cache, installing
1844 a new LDT, or pinning \& unpinning page-table pages (to ensure their
1845 reference count doesn't drop to zero which would require a
1846 revalidation of all entries). Some of the operations available are
1847 restricted to domains with sufficient system privileges.
1849 It is also possible for privileged domains to reassign page ownership
1850 via an extended MMU operation, although grant tables are used instead
1851 of this where possible; see Section~\ref{s:idc}.
1853 \end{quote}
1855 Finally, a hypercall interface is exposed to activate and deactivate
1856 various optional facilities provided by Xen for memory management.
1858 \begin{quote}
1859 \hypercall{vm\_assist(unsigned int cmd, unsigned int type)}
1861 Toggle various memory management modes (in particular writable page
1862 tables).
1864 \end{quote}
1866 \section{Segmentation Support}
1868 Xen allows guest OSes to install a custom GDT if they require it;
1869 this is context switched transparently whenever a domain is
1870 [de]scheduled. The following hypercall is effectively a
1871 `safe' version of {\tt lgdt}:
1873 \begin{quote}
1874 \hypercall{set\_gdt(unsigned long *frame\_list, int entries)}
1876 Install a global descriptor table for a domain; {\bf frame\_list} is
1877 an array of up to 16 machine page frames within which the GDT resides,
1878 with {\bf entries} being the actual number of descriptor-entry
1879 slots. All page frames must be mapped read-only within the guest's
1880 address space, and the table must be large enough to contain Xen's
1881 reserved entries (see {\bf xen/include/public/arch-x86\_32.h}).
1883 \end{quote}
1885 Many guest OSes will also wish to install LDTs; this is achieved by
1886 using {\bf mmu\_update} with an extended command, passing the
1887 linear address of the LDT base along with the number of entries. No
special safety checks are required; Xen must perform this task on the
guest's behalf simply because {\tt lldt} requires CPL 0.
1892 Xen also allows guest operating systems to update just an
1893 individual segment descriptor in the GDT or LDT:
1895 \begin{quote}
1896 \hypercall{update\_descriptor(uint64\_t ma, uint64\_t desc)}
1898 Update the GDT/LDT entry at machine address {\bf ma}; the new
1899 8-byte descriptor is stored in {\bf desc}.
1900 Xen performs a number of checks to ensure the descriptor is
1901 valid.
1903 \end{quote}
1905 Guest OSes can use the above in place of context switching entire
1906 LDTs (or the GDT) when the number of changing descriptors is small.
1908 \section{Context Switching}
1910 When a guest OS wishes to context switch between two processes,
1911 it can use the page table and segmentation hypercalls described
above to perform the bulk of the privileged work. In addition,
1913 however, it will need to invoke Xen to switch the kernel (ring 1)
1914 stack pointer:
1916 \begin{quote}
1917 \hypercall{stack\_switch(unsigned long ss, unsigned long esp)}
1919 Request kernel stack switch from hypervisor; {\bf ss} is the new
stack segment and {\bf esp} is the new stack pointer.
1922 \end{quote}
1924 A useful hypercall for context switching allows ``lazy'' save and
1925 restore of floating point state:
1927 \begin{quote}
1928 \hypercall{fpu\_taskswitch(int set)}
This call instructs Xen to set (or clear, according to the value of
{\bf set}) the {\tt TS} bit in the {\tt cr0} control register; setting it
means that the next attempt to use floating point will cause a fault
which the guest OS can catch. Typically the guest OS will then
save/restore the FP state, and clear the {\tt TS} bit, using the
same call.
1935 \end{quote}
1937 This is provided as an optimization only; guest OSes can also choose
1938 to save and restore FP state on all context switches for simplicity.
1940 Finally, a hypercall is provided for entering vm86 mode:
1942 \begin{quote}
1943 \hypercall{switch\_vm86}
1945 This allows the guest to run code in vm86 mode, which is needed for
1946 some legacy software.
1947 \end{quote}
1949 \section{Physical Memory Management}
1951 As mentioned previously, each domain has a maximum and current
1952 memory allocation. The maximum allocation, set at domain creation
1953 time, cannot be modified. However a domain can choose to reduce
1954 and subsequently grow its current allocation by using the
1955 following call:
1957 \begin{quote}
1958 \hypercall{memory\_op(unsigned int op, void *arg)}
1960 Increase or decrease current memory allocation (as determined by
1961 the value of {\bf op}). The available operations are:
1963 \begin{description}
1964 \item[XENMEM\_increase\_reservation] Request an increase in machine
1965 memory allocation; {\bf arg} must point to a {\bf
1966 xen\_memory\_reservation} structure.
1967 \item[XENMEM\_decrease\_reservation] Request a decrease in machine
1968 memory allocation; {\bf arg} must point to a {\bf
1969 xen\_memory\_reservation} structure.
1970 \item[XENMEM\_maximum\_ram\_page] Request the frame number of the
1971 highest-addressed frame of machine memory in the system. {\bf arg}
1972 must point to an {\bf unsigned long} where this value will be
1973 stored.
1974 \item[XENMEM\_current\_reservation] Returns current memory reservation
1975 of the specified domain.
\item[XENMEM\_maximum\_reservation] Returns maximum memory reservation
1977 of the specified domain.
1978 \end{description}
1980 \end{quote}
1982 In addition to simply reducing or increasing the current memory
1983 allocation via a `balloon driver', this call is also useful for
1984 obtaining contiguous regions of machine memory when required (e.g.
1985 for certain PCI devices, or if using superpages).
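
A minimal balloon-style release of frames back to Xen might look as sketched
below. {\tt HYPERVISOR\_memory\_op()} is assumed to be a XenLinux-style
wrapper around the hypercall above; the {\bf xen\_memory\_reservation} field
names follow {\tt xen/include/public/memory.h} and, depending on the
interface version, {\tt extent\_start} may need to be set via a
guest-handle macro as shown.

\scriptsize
\begin{verbatim}
/* Sketch: give 'n' frames back to Xen.  'frames' lists the guest's
 * pseudo-physical frame numbers to be released. */
static void balloon_out(unsigned long *frames, unsigned long n)
{
    struct xen_memory_reservation reservation = {
        .nr_extents   = n,    /* number of extents being released          */
        .extent_order = 0,    /* each extent is a single 4kB frame         */
        .domid        = DOMID_SELF,
    };

    /* Point the reservation at the list of frames to release. */
    set_xen_guest_handle(reservation.extent_start, frames);

    HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
}
\end{verbatim}
\normalsize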
1988 \section{Inter-Domain Communication}
1989 \label{s:idc}
1991 Xen provides a simple asynchronous notification mechanism via
1992 \emph{event channels}. Each domain has a set of end-points (or
1993 \emph{ports}) which may be bound to an event source (e.g. a physical
IRQ, a virtual IRQ, or a port in another domain). When a pair of
1995 end-points in two different domains are bound together, then a `send'
1996 operation on one will cause an event to be received by the destination
1997 domain.
1999 The control and use of event channels involves the following hypercall:
2001 \begin{quote}
2002 \hypercall{event\_channel\_op(evtchn\_op\_t *op)}
2004 Inter-domain event-channel management; {\bf op} is a discriminated
2005 union which allows the following 7 operations:
2007 \begin{description}
2009 \item[alloc\_unbound:] allocate a free (unbound) local
2010 port and prepare for connection from a specified domain.
2011 \item[bind\_virq:] bind a local port to a virtual
2012 IRQ; any particular VIRQ can be bound to at most one port per domain.
2013 \item[bind\_pirq:] bind a local port to a physical IRQ;
2014 once more, a given pIRQ can be bound to at most one port per
2015 domain. Furthermore the calling domain must be sufficiently
2016 privileged.
2017 \item[bind\_interdomain:] construct an interdomain event
2018 channel; in general, the target domain must have previously allocated
2019 an unbound port for this channel, although this can be bypassed by
2020 privileged domains during domain setup.
2021 \item[close:] close an interdomain event channel.
\item[send:] send an event to the remote end of an
2023 interdomain event channel.
2024 \item[status:] determine the current status of a local port.
2025 \end{description}
2027 For more details see
2028 {\bf xen/include/public/event\_channel.h}.
2030 \end{quote}
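
A typical frontend/backend setup is sketched below: the frontend allocates an
unbound port for the backend domain to connect to and later signals it.
{\tt HYPERVISOR\_event\_channel\_op()} is assumed to be a XenLinux-style
wrapper around the hypercall above; the authoritative structure layouts are
in {\tt xen/include/public/event\_channel.h}.

\scriptsize
\begin{verbatim}
/* Sketch: allocate an unbound local port that 'backend_domid' may bind to. */
static evtchn_port_t setup_port(domid_t backend_domid)
{
    evtchn_op_t op;

    op.cmd = EVTCHNOP_alloc_unbound;
    op.u.alloc_unbound.dom        = DOMID_SELF;     /* port lives locally  */
    op.u.alloc_unbound.remote_dom = backend_domid;  /* who may bind to it  */
    HYPERVISOR_event_channel_op(&op);

    return op.u.alloc_unbound.port;                 /* filled in by Xen    */
}

/* Sketch: once the channel is bound, raise an event on the remote end. */
static void notify_port(evtchn_port_t port)
{
    evtchn_op_t op;

    op.cmd         = EVTCHNOP_send;
    op.u.send.port = port;
    HYPERVISOR_event_channel_op(&op);
}
\end{verbatim}
\normalsize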
2032 Event channels are the fundamental communication primitive between
2033 Xen domains and seamlessly support SMP. However they provide little
2034 bandwidth for communication {\sl per se}, and hence are typically
2035 married with a piece of shared memory to produce effective and
2036 high-performance inter-domain communication.
2038 Safe sharing of memory pages between guest OSes is carried out by
2039 granting access on a per page basis to individual domains. This is
2040 achieved by using the {\tt grant\_table\_op} hypercall.
2042 \begin{quote}
2043 \hypercall{grant\_table\_op(unsigned int cmd, void *uop, unsigned int count)}
Used to invoke operations on a grant reference, to set up the grant
2046 table and to dump the tables' contents for debugging.
2048 \end{quote}
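
To share a page, a guest typically fills in an entry of its grant table
(mapped earlier via the {\tt GNTTABOP\_setup\_table} command) rather than
issuing a hypercall per grant, roughly as sketched below. The entry layout is
defined in {\tt xen/include/public/grant\_table.h}; {\tt wmb()} stands in for
whatever write barrier the guest kernel provides.

\scriptsize
\begin{verbatim}
/* Sketch: grant 'domid' read-only access to the local machine frame 'mfn'
 * using grant-table entry 'ref'.  'gnttab' is the guest's grant table,
 * previously set up with the GNTTABOP_setup_table command. */
static void grant_page(grant_entry_t *gnttab, grant_ref_t ref,
                       domid_t domid, unsigned long mfn)
{
    gnttab[ref].domid = domid;             /* who may map/transfer the frame */
    gnttab[ref].frame = mfn;               /* which machine frame is shared  */
    wmb();                                 /* make the above visible first   */
    gnttab[ref].flags = GTF_permit_access | GTF_readonly;
}
\end{verbatim}
\normalsize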
2050 \section{IO Configuration}
2052 Domains with physical device access (i.e.\ driver domains) receive
2053 limited access to certain PCI devices (bus address space and
2054 interrupts). However many guest operating systems attempt to
determine the PCI configuration by directly accessing the PCI BIOS,
which cannot be allowed for safety reasons.
2058 Instead, Xen provides the following hypercall:
2060 \begin{quote}
2061 \hypercall{physdev\_op(void *physdev\_op)}
2063 Set and query IRQ configuration details, set the system IOPL, set the
2064 TSS IO bitmap.
2066 \end{quote}
2069 For examples of using {\tt physdev\_op}, see the
Xen-specific PCI code in the Linux sparse tree.
2072 \section{Administrative Operations}
2073 \label{s:dom0ops}
2075 A large number of control operations are available to a sufficiently
2076 privileged domain (typically domain 0). These allow the creation and
2077 management of new domains, for example. A complete list is given
below; for more details on any or all of these, please see
{\tt xen/include/public/dom0\_ops.h}.
2082 \begin{quote}
2083 \hypercall{dom0\_op(dom0\_op\_t *op)}
2085 Administrative domain operations for domain management. The options are:
2087 \begin{description}
2088 \item [DOM0\_GETMEMLIST:] get list of pages used by the domain
2090 \item [DOM0\_SCHEDCTL:]
2092 \item [DOM0\_ADJUSTDOM:] adjust scheduling priorities for domain
2094 \item [DOM0\_CREATEDOMAIN:] create a new domain
2096 \item [DOM0\_DESTROYDOMAIN:] deallocate all resources associated
2097 with a domain
2099 \item [DOM0\_PAUSEDOMAIN:] remove a domain from the scheduler run
2100 queue.
2102 \item [DOM0\_UNPAUSEDOMAIN:] mark a paused domain as schedulable
2103 once again.
2105 \item [DOM0\_GETDOMAININFO:] get statistics about the domain
2107 \item [DOM0\_SETDOMAININFO:] set VCPU-related attributes
2109 \item [DOM0\_MSR:] read or write model specific registers
2111 \item [DOM0\_DEBUG:] interactively invoke the debugger
2113 \item [DOM0\_SETTIME:] set system time
2115 \item [DOM0\_GETPAGEFRAMEINFO:]
2117 \item [DOM0\_READCONSOLE:] read console content from hypervisor buffer ring
2119 \item [DOM0\_PINCPUDOMAIN:] pin domain to a particular CPU
2121 \item [DOM0\_TBUFCONTROL:] get and set trace buffer attributes
2123 \item [DOM0\_PHYSINFO:] get information about the host machine
2125 \item [DOM0\_SCHED\_ID:] get the ID of the current Xen scheduler
2127 \item [DOM0\_SHADOW\_CONTROL:] switch between shadow page-table modes
2129 \item [DOM0\_SETDOMAINMAXMEM:] set maximum memory allocation of a domain
2131 \item [DOM0\_GETPAGEFRAMEINFO2:] batched interface for getting
2132 page frame info
2134 \item [DOM0\_ADD\_MEMTYPE:] set MTRRs
2136 \item [DOM0\_DEL\_MEMTYPE:] remove a memory type range
2138 \item [DOM0\_READ\_MEMTYPE:] read MTRR
2140 \item [DOM0\_PERFCCONTROL:] control Xen's software performance
2141 counters
2143 \item [DOM0\_MICROCODE:] update CPU microcode
2145 \item [DOM0\_IOPORT\_PERMISSION:] modify domain permissions for an
2146 IO port range (enable / disable a range for a particular domain)
2148 \item [DOM0\_GETVCPUCONTEXT:] get context from a VCPU
2150 \item [DOM0\_GETVCPUINFO:] get current state for a VCPU
2151 \item [DOM0\_GETDOMAININFOLIST:] batched interface to get domain
2152 info
2154 \item [DOM0\_PLATFORM\_QUIRK:] inform Xen of a platform quirk it
2155 needs to handle (e.g. noirqbalance)
2157 \item [DOM0\_PHYSICAL\_MEMORY\_MAP:] get info about dom0's memory
2158 map
2160 \item [DOM0\_MAX\_VCPUS:] change max number of VCPUs for a domain
2162 \item [DOM0\_SETDOMAINHANDLE:] set the handle for a domain
2164 \end{description}
2165 \end{quote}
2167 Most of the above are best understood by looking at the code
2168 implementing them (in {\tt xen/common/dom0\_ops.c}) and in
2169 the user-space tools that use them (mostly in {\tt tools/libxc}).
2171 Hypercalls relating to the management of the Access Control Module are
2172 also restricted to domain 0 access for now:
2174 \begin{quote}
2176 \hypercall{acm\_op(struct acm\_op * u\_acm\_op)}
2178 This hypercall can be used to configure the state of the ACM, query
2179 that state, request access control decisions and dump additional
2180 information.
2182 \end{quote}
2185 \section{Debugging Hypercalls}
2187 A few additional hypercalls are mainly useful for debugging:
2189 \begin{quote}
2190 \hypercall{console\_io(int cmd, int count, char *str)}
2192 Use Xen to interact with the console; operations are:
{\bf CONSOLEIO\_write}: output {\bf count} characters from buffer {\bf str}.

{\bf CONSOLEIO\_read}: input at most {\bf count} characters into buffer {\bf str}.
2197 \end{quote}
2199 A pair of hypercalls allows access to the underlying debug registers:
2200 \begin{quote}
2201 \hypercall{set\_debugreg(int reg, unsigned long value)}
Set debug register {\bf reg} to {\bf value}.
2205 \hypercall{get\_debugreg(int reg)}
Return the contents of the debug register {\bf reg}.
2208 \end{quote}
2210 And finally:
2211 \begin{quote}
2212 \hypercall{xen\_version(int cmd)}
2214 Request Xen version number.
2215 \end{quote}
2217 This is useful to ensure that user-space tools are in sync
2218 with the underlying hypervisor.
2221 \end{document}