direct-io.hg
changeset 7811:bb0e5f7f94fd
Merged.
author    emellor@leeni.uk.xensource.com
date      Tue Nov 15 16:24:31 2005 +0100 (2005-11-15)
parents   60bf9aa39043 4bdcb7f8c3d7
children  3918cc7f679e a064c5804eae
1.1 --- a/.hgignore Tue Nov 15 15:56:47 2005 +0100 1.2 +++ b/.hgignore Tue Nov 15 16:24:31 2005 +0100 1.3 @@ -148,6 +148,8 @@ 1.4 ^tools/vtpm_manager/manager/vtpm_managerd$ 1.5 ^tools/xcutils/xc_restore$ 1.6 ^tools/xcutils/xc_save$ 1.7 +^tools/xenmon/setmask$ 1.8 +^tools/xenmon/xenbaked$ 1.9 ^tools/xenstat/xentop/xentop$ 1.10 ^tools/xenstore/testsuite/tmp/.*$ 1.11 ^tools/xenstore/xen$
2.1 --- a/docs/man/xm.pod.1 Tue Nov 15 15:56:47 2005 +0100 2.2 +++ b/docs/man/xm.pod.1 Tue Nov 15 16:24:31 2005 +0100 2.3 @@ -15,9 +15,9 @@ VCPUs, and attach or detach virtual bloc 2.4 2.5 The basic structure of every xm command is almost always: 2.6 2.7 - xm <SubCommand> <DomId> [OPTIONS] 2.8 + xm <subcommand> <domain-id> [OPTIONS] 2.9 2.10 -Where I<SubCommand> is one of the sub commands listed below, I<DomId> 2.11 +Where I<subcommand> is one of the sub commands listed below, I<domain-id> 2.12 is the numeric domain id, or the domain name (which will be internally 2.13 translated to domain id), and I<OPTIONS> are sub command specific 2.14 options. There are a few exceptions to this rule in the cases where 2.15 @@ -46,13 +46,13 @@ actions has finished you must poll throu 2.16 =head1 DOMAIN SUBCOMMANDS 2.17 2.18 The following sub commands manipulate domains directly, as stated 2.19 -previously most commands take DomId as the first parameter. 2.20 +previously most commands take domain-id as the first parameter. 2.21 2.22 =over 4 2.23 2.24 -=item I<console> <DomId> 2.25 +=item B<console> I<domain-id> 2.26 2.27 -Attach to domain DomId's console. If you've set up your Domains to 2.28 +Attach to domain domain-id's console. If you've set up your Domains to 2.29 have a traditional log in console this will look much like a normal 2.30 text log in screen. 2.31 2.32 @@ -63,15 +63,15 @@ The attached console will perform much l 2.33 so running curses based interfaces over the console B<is not 2.34 advised>. Vi tends to get very odd when using it over this interface. 2.35 2.36 -=item I<create> [-c] <ConfigFile> [Name=Value].. 2.37 +=item B<create> I<[-c]> I<configfile> I<[name=value]>.. 2.38 2.39 -The create sub command requires a ConfigFile and can optional take a 2.40 +The create sub command requires a configfile and can optional take a 2.41 series of name value pairs that add to or override variables defined 2.42 in the config file. See L<xmdomain.cfg> for full details of that file 2.43 -format, and possible options used in either the ConfigFile or 2.44 +format, and possible options used in either the configfile or 2.45 Name=Value combinations. 2.46 2.47 -ConfigFile can either be an absolute path to a file, or a relative 2.48 +Configfile can either be an absolute path to a file, or a relative 2.49 path to a file located in /etc/xen. 2.50 2.51 Create will return B<as soon> as the domain is started. This B<does 2.52 @@ -82,7 +82,7 @@ B<OPTIONS> 2.53 2.54 =over 4 2.55 2.56 -=item I<-c> 2.57 +=item B<-c> 2.58 2.59 Attache console to the domain as soon as it has started. This is 2.60 useful for determining issues with crashing domains. 2.61 @@ -114,42 +114,42 @@ virtual networking. (This example comes 2.62 2.63 =back 2.64 2.65 -=item I<destroy> <DomId> 2.66 +=item B<destroy> I<domain-id> 2.67 2.68 -Immediately terminate the domain DomId. This doesn't give the domain 2.69 +Immediately terminate the domain domain-id. This doesn't give the domain 2.70 OS any chance to react, and it the equivalent of ripping the power 2.71 cord out on a physical machine. In most cases you will want to use 2.72 the B<shutdown> command instead. 2.73 2.74 -=item I<domid> <DomName> 2.75 +=item B<domid> I<domain-name> 2.76 2.77 Converts a domain name to a domain id using xend's internal mapping. 2.78 2.79 -=item I<domname> <DomId> 2.80 +=item B<domname> I<domain-id> 2.81 2.82 Converts a domain id to a domain name using xend's internal mapping. 
2.83 2.84 -=item I<help> [--long] 2.85 +=item B<help> I<[--long]> 2.86 2.87 Displays the short help message (i.e. common commands). 2.88 2.89 The I<--long> option prints out the complete set of B<xm> subcommands, 2.90 grouped by function. 2.91 2.92 -=item I<list> [--long] [DomId, ...] 2.93 +=item B<list> I<[--long]> I<[domain-id, ...]> 2.94 2.95 Prints information about one or more domains. If no domains are 2.96 specified it prints out information about all domains. 2.97 2.98 An example format for the list is as follows: 2.99 2.100 - Name ID Mem(MiB) VCPUs State Time(s) 2.101 - Domain-0 0 98 1 r----- 5068.6 2.102 - Fedora3 164 128 1 r----- 7.6 2.103 - Fedora4 165 128 1 ------ 0.6 2.104 - Mandrake2006 166 128 1 -b---- 3.6 2.105 - Mandrake10.2 167 128 1 ------ 2.5 2.106 - Suse9.2 168 100 1 ------ 1.8 2.107 + Name ID Mem(MiB) VCPUs State Time(s) 2.108 + Domain-0 0 98 1 r----- 5068.6 2.109 + Fedora3 164 128 1 r----- 7.6 2.110 + Fedora4 165 128 1 ------ 0.6 2.111 + Mandrake2006 166 128 1 -b---- 3.6 2.112 + Mandrake10.2 167 128 1 ------ 2.5 2.113 + Suse9.2 168 100 1 ------ 1.8 2.114 2.115 Name is the name of the domain. ID the domain numeric id. Mem is the 2.116 size of the memory allocated to the domain. VCPUS is the number of 2.117 @@ -163,34 +163,34 @@ B<STATES> 2.118 The State field lists 6 states for a Xen Domain, and which ones the 2.119 current Domain is in. 2.120 2.121 -=item I<r - running> 2.122 +=item B<r - running> 2.123 2.124 The domain is currently running on a CPU 2.125 2.126 -=item I<b - blocked> 2.127 +=item B<b - blocked> 2.128 2.129 The domain is blocked, and not running or runable. This can be caused 2.130 because the domain is waiting on IO (a traditional wait state) or has 2.131 gone to sleep because there was nothing else for it to do. 2.132 2.133 -=item I<p - paused> 2.134 +=item B<p - paused> 2.135 2.136 The domain has been paused, usually occurring through the administrator 2.137 running B<xm pause>. When in a paused state the domain will still 2.138 consume allocated resources like memory, but will not be eligible for 2.139 scheduling by the Xen hypervisor. 2.140 2.141 -=item I<s - shutdown> 2.142 +=item B<s - shutdown> 2.143 2.144 FIXME: Why would you ever see this state? 2.145 2.146 -=item I<c - crashed> 2.147 +=item B<c - crashed> 2.148 2.149 The domain has crashed, which is always a violent ending. Usually 2.150 this state can only occur if the domain has been configured not to 2.151 restart on crash. See L<xmdomain.cfg> for more info. 2.152 2.153 -=item I<d - dying> 2.154 +=item B<d - dying> 2.155 2.156 The domain is in process of dying, but hasn't completely shutdown or 2.157 crashed. 2.158 @@ -226,7 +226,7 @@ less utilized than a high CPU workload. 2.159 2.160 =back 2.161 2.162 -=item I<mem-max> <DomId> <Mem> 2.163 +=item B<mem-max> I<domain-id> I<mem> 2.164 2.165 Specify the maximum amount of memory the Domain is able to use. Mem 2.166 is specified in megabytes. 2.167 @@ -234,7 +234,7 @@ is specified in megabytes. 2.168 The mem-max value may not correspond to the actual memory used in the 2.169 Domain, as it may balloon down it's memory to give more back to the OS. 2.170 2.171 -=item I<mem-set> <DomId> <Mem> 2.172 +=item B<mem-set> I<domain-id> I<mem> 2.173 2.174 Set the domain's used memory using the balloon driver. Because this 2.175 operation requires cooperation from the domain operating system, there 2.176 @@ -244,7 +244,7 @@ B<Warning:> there is no good way to know 2.177 mem-set will make a domain unstable and cause it to crash. 
Be very 2.178 careful when using this command on running domains. 2.179 2.180 -=item I<migrate> <DomId> <Host> [Options] 2.181 +=item B<migrate> I<domain-id> I<host> I<[options]> 2.182 2.183 Migrate a domain to another Host machine. B<Xend> must be running on 2.184 other host machine, it must be running the same version of xen, it 2.185 @@ -261,13 +261,13 @@ B<OPTIONS> 2.186 2.187 =over 4 2.188 2.189 -=item I<-l, --live> 2.190 +=item B<-l, --live> 2.191 2.192 Use live migration. This will migrate the domain between hosts 2.193 without shutting down the domain. See the Xen Users Guide for more 2.194 information. 2.195 2.196 -=item I<-r, --resource> Mbs 2.197 +=item B<-r, --resource> I<Mbs> 2.198 2.199 Set maximum Mbs allowed for migrating the domain. This ensures that 2.200 the network link is not saturated with migration traffic while 2.201 @@ -275,13 +275,13 @@ attempting to do other useful work. 2.202 2.203 =back 2.204 2.205 -=item I<pause> <DomId> 2.206 +=item B<pause> I<domain-id> 2.207 2.208 Pause a domain. When in a paused state the domain will still consume 2.209 -allocated resources like memory, but will not be eligible for 2.210 +allocated resources such as memory, but will not be eligible for 2.211 scheduling by the Xen hypervisor. 2.212 2.213 -=item I<reboot> [Options] <DomId> 2.214 +=item B<reboot> I<[options]> I<domain-id> 2.215 2.216 Reboot a domain. This acts just as if the domain had the B<reboot> 2.217 command run from the console. The command returns as soon as it has 2.218 @@ -289,66 +289,104 @@ executed the reboot action, which may be 2.219 domain actually reboots. 2.220 2.221 The behavior of what happens to a domain when it reboots is set by the 2.222 -B<on_reboot> parameter of the xmdomain.cfg file when the domain was 2.223 +I<on_reboot> parameter of the xmdomain.cfg file when the domain was 2.224 created. 2.225 2.226 B<OPTIONS> 2.227 2.228 =over 4 2.229 2.230 -=item I<-a, --all> 2.231 +=item B<-a, --all> 2.232 2.233 Reboot all domains 2.234 2.235 -=item I<-w, --wait> 2.236 +=item B<-w, --wait> 2.237 2.238 Wait for reboot to complete before returning. This may take a while, 2.239 as all services in the domain will have to be shut down cleanly. 2.240 2.241 =back 2.242 2.243 -=item I<restore> <File> 2.244 +=item B<restore> I<state-file> 2.245 + 2.246 +Build a domain from an B<xm save> state file. See I<save> for more info. 2.247 2.248 -Create a domain from saved state File. 2.249 +=item B<save> I<domain-id> I<state-file> 2.250 2.251 -=item I<save> <DomId> <File> 2.252 +Saves a running domain to a state file so that it can be restored 2.253 +later. Once saved, the domain will no longer be running on the 2.254 +system, thus the memory allocated for the domain will be free for 2.255 +other domains to use. B<xm restore> restores from this state file. 2.256 2.257 -Save domain state to File. Saves domain configuration to File as well. 2.258 +This is roughly equivalent to doing a hibernate on a running computer, 2.259 +with all the same limitations. Open network connections may be 2.260 +severed upon restore, as TCP timeouts may have expired. 2.261 + 2.262 +=item B<shutdown> I<[options]> I<domain-id> 2.263 2.264 -=item I<shutdown> [Options] <DomId> 2.265 +Gracefully shuts down a domain. This coordinates with the domain OS 2.266 +to perform graceful shutdown, so there is no guaruntee that it will 2.267 +succeed, and may take a variable length of time depending on what 2.268 +services must be shutdown in the domain. 
The command returns 2.269 +immediately after signally the domain unless that I<-w> flag is used. 2.270 2.271 -Shutdown a domain. 2.272 +The behavior of what happens to a domain when it reboots is set by the 2.273 +I<on_shutdown> parameter of the xmdomain.cfg file when the domain was 2.274 +created. 2.275 + 2.276 +B<OPTIONS> 2.277 2.278 =over 4 2.279 2.280 -Additional Options: 2.281 +=item B<-a> 2.282 2.283 - -a, --all Shutdown all domains. 2.284 - -H, --halt Shutdown domain without reboot. 2.285 - -R, --reboot Shutdown and reboot domain. 2.286 - -w, --wait Wait for shutdown to complete. 2.287 +Shutdown B<all> domains. Often used when doing a complete shutdown of 2.288 +a Xen system. 2.289 + 2.290 +=item B<-w> 2.291 + 2.292 +Wait for the domain to complete shutdown before returning. 2.293 2.294 =back 2.295 2.296 -=item I<sysrq> <DomId> <letter> 2.297 +=item B<sysrq> I<domain-id> I<letter> 2.298 2.299 -Send a sysrq to a domain. 2.300 +Send a I<Magic System Request> signal to the domain. For more 2.301 +information on available magic sys req operations, see sysrq.txt in 2.302 +your Linux Kernel sources. 2.303 + 2.304 +=item B<unpause> I<domain-id> 2.305 2.306 -=item I<unpause> <DomId> 2.307 +Moves a domain out of the paused state. This will allow a previously 2.308 +paused domain to now be eligible for scheduling by the Xen hypervisor. 2.309 2.310 -Unpause a paused domain. 2.311 +=item B<set-vcpus> I<domain-id> I<vcpu-count> 2.312 2.313 -=item I<set-vcpus> <DomId> <VCPUs> 2.314 +Enables the I<vcpu-count> virtual CPUs for the domain in question. 2.315 +Like mem-set, this command can only allocate up to the maximum virtual 2.316 +CPU count configured at boot for the domain. 2.317 2.318 -Enable a specific number of VCPUs for a domain. Subcommand only enables or disables already configured VCPUs for domain. 2.319 +If the I<vcpu-count> is smaller than the current number of active 2.320 +VCPUs, the highest number VCPUs will be hotplug removed. This may be 2.321 +important for pinning purposes. 2.322 2.323 -=item I<vpcu-list> [DomID] 2.324 +Attempting to set-vcpus to a number larger than the initially 2.325 +configured VCPU count is an error. Trying to set-vcpus to < 1 will be 2.326 +quietly ignored. 2.327 + 2.328 +=item B<vpcu-list> I<[domain-id]> 2.329 2.330 -Lists VCPU information for a specific domain or all domains if DomID not given. 2.331 +Lists VCPU information for a specific domain. If no domain is 2.332 +specified, VCPU information for all domains will be provided. 2.333 + 2.334 +=item B<vcpu-pin> I<domain-id> I<vcpu> I<cpus> 2.335 2.336 -=item I<vcpu-pin> <DomId> <VCPU> <CPUs> 2.337 +Pins the the VCPU to only run on the specific CPUs. 2.338 2.339 -Sets VCPU to only run on specific CPUs. 2.340 +Normally VCPUs can float between available CPUs whenever Xen deems a 2.341 +different run state is appropriate. Pinning can be used to restrict 2.342 +this, by ensuring certain VCPUs can only run on certain physical 2.343 +CPUs. 2.344 2.345 =back 2.346 2.347 @@ -356,115 +394,286 @@ Sets VCPU to only run on specific CPUs. 2.348 2.349 =over 4 2.350 2.351 -=item I<dmesg> [OPTION] 2.352 +=item B<dmesg> I<[-c]> 2.353 2.354 -Read or clear Xen's message buffer. The buffer contains Xen boot, warning, and error messages. 2.355 +Reads the Xen message buffer, similar to dmesg on a Linux system. The 2.356 +buffer contains informational, warning, and error messages created 2.357 +during Xen's boot process. 
If you are having problems with Xen, this 2.358 +is one of the first places to look as part of problem determination. 2.359 + 2.360 +B<OPTIONS> 2.361 2.362 =over 4 2.363 2.364 -Additional Option: 2.365 +=item B<-c, --clear> 2.366 2.367 - -c, --clear Clears Xen's message buffer. 2.368 +Clears Xen's message buffer. 2.369 2.370 =back 2.371 2.372 -=item I<info> 2.373 +=item B<info> 2.374 + 2.375 +Print information about the Xen host in I<name : value> format. When 2.376 +reporting a Xen bug, please provide this information as part of the 2.377 +bug report. 2.378 + 2.379 +Sample xen domain info looks as follows (lines wrapped manually to 2.380 +make the man page more readable): 2.381 2.382 -Get information about Xen host. 2.383 + system : Linux 2.384 + host : talon 2.385 + release : 2.6.12.6-xen0 2.386 + version : #1 Mon Nov 14 14:26:26 EST 2005 2.387 + machine : i686 2.388 + nr_cpus : 2 2.389 + nr_nodes : 1 2.390 + sockets_per_node : 2 2.391 + cores_per_socket : 1 2.392 + threads_per_core : 1 2.393 + cpu_mhz : 696 2.394 + hw_caps : 0383fbff:00000000:00000000:00000040 2.395 + memory : 767 2.396 + free_memory : 37 2.397 + xen_major : 3 2.398 + xen_minor : 0 2.399 + xen_extra : -devel 2.400 + xen_caps : xen-3.0-x86_32 2.401 + xen_params : virt_start=0xfc000000 2.402 + xen_changeset : Mon Nov 14 18:13:38 2005 +0100 2.403 + 7793:090e44133d40 2.404 + cc_compiler : gcc version 3.4.3 (Mandrakelinux 2.405 + 10.2 3.4.3-7mdk) 2.406 + cc_compile_by : sdague 2.407 + cc_compile_domain : (none) 2.408 + cc_compile_date : Mon Nov 14 14:16:48 EST 2005 2.409 2.410 -=item I<log> 2.411 +B<FIELDS> 2.412 2.413 -Print B<xend> log. 2.414 +=over 4 2.415 + 2.416 +Not all fields will be explained here, but some of the less obvious 2.417 +ones deserve explanation: 2.418 + 2.419 +=item I<hw_caps> 2.420 + 2.421 +A vector showing what hardware capabilities are supported by your 2.422 +processor. This is equivalent to, though more cryptic, the flags 2.423 +field in /proc/cpuinfo on a normal Linux machine. 2.424 + 2.425 +=item I<free_memory> 2.426 + 2.427 +Available memory (in MB) not allocated to Xen, or any other Domains. 2.428 + 2.429 +=item I<xen_caps> 2.430 2.431 -=item I<top> 2.432 +The xen version, architecture. Architecture values can be one of: 2.433 +x86_32, x86_32p (i.e. PAE enabled), x86_64, ia64. 2.434 + 2.435 +=item I<xen_changeset> 2.436 + 2.437 +The xen mercurial changeset id. Very useful for determining exactly 2.438 +what version of code your Xen system was built from. 2.439 + 2.440 +=back 2.441 2.442 -Monitor system and domains in real-time. 2.443 +=item B<log> 2.444 + 2.445 +Print out the B<xend> log. This log file can be found in 2.446 +/var/log/xend.log. 2.447 + 2.448 +=item B<top> 2.449 + 2.450 +Executes the xentop command, which provides real time monitoring of 2.451 +domains. Xentop is a curses interface, and reasonably self 2.452 +explanatory. 2.453 2.454 =back 2.455 2.456 =head1 SCHEDULER SUBCOMMANDS 2.457 2.458 -=over 4 2.459 +Xen ships with a number of domain schedulers, which can be set at boot 2.460 +time with the I<sched=> parameter on the Xen command line. By 2.461 +default I<sedf> is used for scheduling. 2.462 2.463 -=item I<sched-bvt> <Parameters> 2.464 - 2.465 -Set Borrowed Virtual Time (BVT) scheduler parameters. There are five parameters, which are given in order below. 2.466 +FIXME: we really need a scheduler expert to write up this section. 
2.467 2.468 =over 4 2.469 2.470 -Parameters: 2.471 +=item B<sched-bvt> I<mcuadv> I<warpback> I<warpvalue> I<warpl> I<warpu> 2.472 + 2.473 +Performs runtime adjustments to the default parameters for the 2.474 +Borrowed Virtual Time (BVT) scheduler. For full information on the 2.475 +BVT concept, please consult the base paper listed in the B<SEE ALSO> 2.476 +section. 2.477 + 2.478 +Set Borrowed Virtual Time (BVT) scheduler parameters. There are five 2.479 +required parameters, which are given in order below. 2.480 + 2.481 +FIXME: what units are all the BVT params in? 2.482 + 2.483 +B<PARAMETERS> 2.484 + 2.485 +=over 4 2.486 + 2.487 +=item I<mcuadv> 2.488 2.489 - mcuadv - Minimum Charging Unit (MCU) advance. 2.490 - warpback - Warp back time allowed. 2.491 - warpvalue - Warp value. 2.492 - warpl - Warp maximum limit. 2.493 - warpu - Unwarped minimum limit. 2.494 +The MCU (Minimum Charging Unit) advance determines the proportional 2.495 +share of the CPU that a domain receives. It is set inversely 2.496 +proportionally to a domain's sharing weight. 2.497 + 2.498 +=item I<warpback> 2.499 + 2.500 +The amount of `virtual time' the domain is allowed to warp backwards. 2.501 + 2.502 +=item I<warpvalue> 2.503 + 2.504 +Warp value (FIXME: what does this really mean?) 2.505 + 2.506 +=item I<warpl> 2.507 + 2.508 +The warp limit is the maximum time a domain can run warped for. 2.509 + 2.510 +=item I<warpu> 2.511 + 2.512 +The unwarp requirement is the minimum time a domain must run unwarped 2.513 +for before it can warp again. 2.514 2.515 =back 2.516 2.517 -=item I<sched-bvt-ctxallow> <Allow> 2.518 +=item B<sched-bvt-ctxallow> I<allow> 2.519 + 2.520 +Sets the BVT scheduler's context switch allowance. 2.521 2.522 -Sets the BVT scheduler's context switch allowance. Allow is the minimum time slice allowed to run before being pre-empted. 2.523 +The context switch allowance is similar to the ``quantum'' in 2.524 +traditional schedulers. It is the minimum time that a scheduled domain 2.525 +will be allowed to run before being preempted. 2.526 2.527 -=item I<sched-sedf> <Parameters> 2.528 +=item B<sched-sedf> I<period> I<slice> I<latency-hint> I<extratime> I<weight> 2.529 2.530 -Set simple sEDF scheduler parameters. Use the following parametersin order. 2.531 +Set Simple EDF scheduler parameters. This scheduler provides weighted 2.532 +CPU sharing in an intuitive way and uses realtime-algorithms to ensure 2.533 +time guarantees. For more information see 2.534 +docs/misc/sedf_scheduler_mini-HOWTO.txt in the Xen distribution. 2.535 + 2.536 +B<PARAMETERS> 2.537 2.538 =over 4 2.539 2.540 -Parameters: 2.541 +=item I<period> 2.542 + 2.543 +The normal EDF scheduling usage in nanosecs 2.544 + 2.545 +=item I<slice> 2.546 + 2.547 +The normal EDF scheduling usage in nanosecs 2.548 + 2.549 +FIXME: these are lame, should explain more. 2.550 2.551 - period - in nanoseconds 2.552 - slice - in nanoseconds 2.553 - latency-hint - scaled period if domain is doing heavy I/O 2.554 - extratime - flag for allowing domain to run in extra time. 2.555 - weight - another way of setting cpu slice. 2.556 +=item I<latency-hint> 2.557 + 2.558 +Scaled period if domain is doing heavy I/O. 2.559 + 2.560 +=item I<extratime> 2.561 + 2.562 +Flag for allowing domain to run in extra time. 2.563 + 2.564 +=item I<weight> 2.565 + 2.566 +Another way of setting cpu slice. 
2.567 2.568 =back 2.569 2.570 +B<EXAMPLES> 2.571 + 2.572 +I<normal EDF (20ms/5ms):> 2.573 + 2.574 + xm sched-sedf <dom-id> 20000000 5000000 0 0 0 2.575 + 2.576 +I<best-effort domains (i.e. non-realtime):> 2.577 + 2.578 + xm sched-sedf <dom-id> 20000000 0 0 1 0 2.579 + 2.580 +I<normal EDF (20ms/5ms) + share of extra-time:> 2.581 + 2.582 + xm sched-sedf <dom-id> 20000000 5000000 0 1 0 2.583 + 2.584 +I<4 domains with weights 2:3:4:2> 2.585 + 2.586 + xm sched-sedf <d1> 0 0 0 0 2 2.587 + xm sched-sedf <d2> 0 0 0 0 3 2.588 + xm sched-sedf <d3> 0 0 0 0 4 2.589 + xm sched-sedf <d4> 0 0 0 0 2 2.590 + 2.591 +I<1 fully-specified (10ms/3ms) domain, 3 other domains share available 2.592 +rest in 2:7:3 ratio:> 2.593 + 2.594 + xm sched-sedf <d1> 10000000 3000000 0 0 0 2.595 + xm sched-sedf <d2> 0 0 0 0 2 2.596 + xm sched-sedf <d3> 0 0 0 0 7 2.597 + xm sched-sedf <d4> 0 0 0 0 3 2.598 + 2.599 =back 2.600 2.601 =head1 VIRTUAL DEVICE COMMANDS 2.602 2.603 +Most virtual devices can be added and removed while guests are 2.604 +running. The effect to the guest OS is much the same as any hotplug 2.605 +event. 2.606 + 2.607 +=head2 BLOCK DEVICES 2.608 + 2.609 =over 4 2.610 2.611 -=item I<block-attach <DomId> <BackDev> <FrontDev> <Mode> [BackDomId] 2.612 +=item B<block-attach> I<domain-id> I<be-dev> I<fe-dev> I<mode> I<[bedomain-id]> 2.613 + 2.614 +Create a new virtual block device 2.615 2.616 -Create a new virtual block device. 2.617 +=item B<block-detach> I<domain-id> I<devid> 2.618 2.619 -=item I<block-detach> <DomId> <DevId> 2.620 +Destroy a domain's virtual block device. DevId may either be a device 2.621 +ID or the device name as mounted in the guest. 2.622 + 2.623 +=item B<block-list> I<domain-id> 2.624 2.625 -Destroy a domain's virtual block device. DevId may either be a device ID or the device name as mounted in the guest. 2.626 +List virtual block devices for a domain. The returned output is 2.627 +sexpression formatted. 2.628 2.629 -=item I<block-list> <DomId> 2.630 +=head2 NETWORK DEVICES 2.631 2.632 -List virtual block devices for a domain. 2.633 +=item B<network-attach> I<domain-id> I<[script=script]> I<[ip=ipaddr]> 2.634 +I<[mac=macaddr]> I<[bridge=bridge-name]> I<[backend=bedomain-id]> 2.635 2.636 -=item I<network-limit> <DomId> <Vif> <Credit> <Period> 2.637 +=item B<network-detach> I<domain-id> I<devid> 2.638 + 2.639 +=item B<network-limit> I<domain-id> I<vif> I<credit> I<period> 2.640 2.641 Limit the transmission rate of a virtual network interface. 2.642 2.643 -=item I<network-list> <DomId> 2.644 +=item B<network-list> I<domain-id> 2.645 2.646 -List virtual network interfaces for a domain. 2.647 +List virtual network interfaces for a domain. The returned output is 2.648 +sexpression formatted. 2.649 2.650 =back 2.651 2.652 =head1 VNET COMMANDS 2.653 2.654 -The Virtual Network interfaces for Xen 2.655 +The Virtual Network interfaces for Xen. 2.656 + 2.657 +FIXME: This needs a lot more explaination, or it needs to be ripped 2.658 +out entirely. 2.659 2.660 =over 4 2.661 2.662 -=item I<vnet-list> [-l|--long] 2.663 +=item B<vnet-list> I<[-l|--long]> 2.664 2.665 List vnets. 2.666 2.667 -=item I<vnet-create> <config> 2.668 +=item B<vnet-create> I<config> 2.669 2.670 Create a vnet from a config file. 2.671 2.672 -=item I<vnet-delete> <vnetid> 2.673 +=item B<vnet-delete> I<vnetid> 2.674 2.675 Delete a vnet. 2.676 2.677 @@ -476,6 +685,12 @@ Delete a vnet. 2.678 2.679 B<xmdomain.cfg>(5) 2.680 2.681 +BVT scheduling paper: K.J. Duda and D.R. Cheriton. 
Borrowed Virtual 2.682 +Time (BVT) scheduling: supporting latency-sensitive threads in a 2.683 +general purpose scheduler. In proceedings of the 17th ACM SIGOPS 2.684 +Symposium on Operating Systems principles, volume 33(5) of ACM 2.685 +Operating Systems Review, pages 261-267 2.686 + 2.687 =head1 AUTHOR 2.688 2.689 Sean Dague <sean at dague dot net>
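
The xm.pod.1 hunks above document each subcommand individually; the short session below strings several of them together, using only subcommands described in this changeset. The domain name matches the sample xm list output in the man page, and the configuration and state-file paths are illustrative only.

    xm create -c /etc/xen/fedora4.cfg      # boot a domain and attach its console
    xm list --long Fedora4                 # inspect the running domain
    xm save Fedora4 /tmp/fedora4.save      # suspend to a state file, freeing its memory
    xm restore /tmp/fedora4.save           # rebuild the domain from that state file
    xm shutdown -w Fedora4                 # graceful shutdown, waiting for completion
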
3.1 --- a/linux-2.6-xen-sparse/arch/xen/i386/kernel/cpu/common.c Tue Nov 15 15:56:47 2005 +0100 3.2 +++ b/linux-2.6-xen-sparse/arch/xen/i386/kernel/cpu/common.c Tue Nov 15 16:24:31 2005 +0100 3.3 @@ -572,7 +572,7 @@ void __cpuinit cpu_gdt_init(struct Xgt_d 3.4 va < gdt_descr->address + gdt_descr->size; 3.5 va += PAGE_SIZE, f++) { 3.6 frames[f] = virt_to_mfn(va); 3.7 - make_page_readonly((void *)va); 3.8 + make_lowmem_page_readonly((void *)va); 3.9 } 3.10 if (HYPERVISOR_set_gdt(frames, gdt_descr->size / 8)) 3.11 BUG();
4.1 --- a/linux-2.6-xen-sparse/arch/xen/i386/mm/init.c Tue Nov 15 15:56:47 2005 +0100 4.2 +++ b/linux-2.6-xen-sparse/arch/xen/i386/mm/init.c Tue Nov 15 16:24:31 2005 +0100 4.3 @@ -68,7 +68,7 @@ static pmd_t * __init one_md_table_init( 4.4 4.5 #ifdef CONFIG_X86_PAE 4.6 pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE); 4.7 - make_page_readonly(pmd_table); 4.8 + make_lowmem_page_readonly(pmd_table); 4.9 set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT)); 4.10 pud = pud_offset(pgd, 0); 4.11 if (pmd_table != pmd_offset(pud, 0)) 4.12 @@ -89,7 +89,7 @@ static pte_t * __init one_page_table_ini 4.13 { 4.14 if (pmd_none(*pmd)) { 4.15 pte_t *page_table = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE); 4.16 - make_page_readonly(page_table); 4.17 + make_lowmem_page_readonly(page_table); 4.18 set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE)); 4.19 if (page_table != pte_offset_kernel(pmd, 0)) 4.20 BUG();
5.1 --- a/linux-2.6-xen-sparse/arch/xen/i386/mm/pgtable.c Tue Nov 15 15:56:47 2005 +0100 5.2 +++ b/linux-2.6-xen-sparse/arch/xen/i386/mm/pgtable.c Tue Nov 15 16:24:31 2005 +0100 5.3 @@ -199,7 +199,7 @@ pte_t *pte_alloc_one_kernel(struct mm_st 5.4 { 5.5 pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO); 5.6 if (pte) 5.7 - make_page_readonly(pte); 5.8 + make_lowmem_page_readonly(pte); 5.9 return pte; 5.10 } 5.11 5.12 @@ -336,7 +336,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm) 5.13 spin_lock_irqsave(&pgd_lock, flags); 5.14 memcpy(pmd, copy_pmd, PAGE_SIZE); 5.15 spin_unlock_irqrestore(&pgd_lock, flags); 5.16 - make_page_readonly(pmd); 5.17 + make_lowmem_page_readonly(pmd); 5.18 set_pgd(&pgd[USER_PTRS_PER_PGD], __pgd(1 + __pa(pmd))); 5.19 } 5.20 5.21 @@ -367,12 +367,12 @@ void pgd_free(pgd_t *pgd) 5.22 if (PTRS_PER_PMD > 1) { 5.23 for (i = 0; i < USER_PTRS_PER_PGD; ++i) { 5.24 pmd_t *pmd = (void *)__va(pgd_val(pgd[i])-1); 5.25 - make_page_writable(pmd); 5.26 + make_lowmem_page_writable(pmd); 5.27 kmem_cache_free(pmd_cache, pmd); 5.28 } 5.29 if (!HAVE_SHARED_KERNEL_PMD) { 5.30 pmd_t *pmd = (void *)__va(pgd_val(pgd[USER_PTRS_PER_PGD])-1); 5.31 - make_page_writable(pmd); 5.32 + make_lowmem_page_writable(pmd); 5.33 memset(pmd, 0, PTRS_PER_PMD*sizeof(pmd_t)); 5.34 kmem_cache_free(pmd_cache, pmd); 5.35 } 5.36 @@ -382,6 +382,7 @@ void pgd_free(pgd_t *pgd) 5.37 } 5.38 5.39 #ifndef CONFIG_XEN_SHADOW_MODE 5.40 +asmlinkage int xprintk(const char *fmt, ...); 5.41 void make_lowmem_page_readonly(void *va) 5.42 { 5.43 pte_t *pte = virt_to_ptep(va); 5.44 @@ -399,8 +400,7 @@ void make_page_readonly(void *va) 5.45 pte_t *pte = virt_to_ptep(va); 5.46 set_pte(pte, pte_wrprotect(*pte)); 5.47 if ((unsigned long)va >= (unsigned long)high_memory) { 5.48 - unsigned long pfn; 5.49 - pfn = pte_pfn(*pte); 5.50 + unsigned long pfn = pte_pfn(*pte); 5.51 #ifdef CONFIG_HIGHMEM 5.52 if (pfn < highstart_pfn) 5.53 #endif 5.54 @@ -414,8 +414,7 @@ void make_page_writable(void *va) 5.55 pte_t *pte = virt_to_ptep(va); 5.56 set_pte(pte, pte_mkwrite(*pte)); 5.57 if ((unsigned long)va >= (unsigned long)high_memory) { 5.58 - unsigned long pfn; 5.59 - pfn = pte_pfn(*pte); 5.60 + unsigned long pfn = pte_pfn(*pte); 5.61 #ifdef CONFIG_HIGHMEM 5.62 if (pfn < highstart_pfn) 5.63 #endif
6.1 --- a/linux-2.6-xen-sparse/arch/xen/kernel/smpboot.c Tue Nov 15 15:56:47 2005 +0100 6.2 +++ b/linux-2.6-xen-sparse/arch/xen/kernel/smpboot.c Tue Nov 15 16:24:31 2005 +0100 6.3 @@ -344,7 +344,7 @@ static int __init setup_vcpu_hotplug_eve 6.4 return 0; 6.5 } 6.6 6.7 -subsys_initcall(setup_vcpu_hotplug_event); 6.8 +arch_initcall(setup_vcpu_hotplug_event); 6.9 6.10 int __cpu_disable(void) 6.11 {
7.1 --- a/tools/Makefile Tue Nov 15 15:56:47 2005 +0100 7.2 +++ b/tools/Makefile Tue Nov 15 16:24:31 2005 +0100 7.3 @@ -11,6 +11,7 @@ SUBDIRS += xcutils 7.4 SUBDIRS += firmware 7.5 SUBDIRS += security 7.6 SUBDIRS += console 7.7 +SUBDIRS += xenmon 7.8 ifeq ($(VTPM_TOOLS),y) 7.9 SUBDIRS += vtpm_manager 7.10 SUBDIRS += vtpm
8.1 --- a/tools/ioemu/target-i386-dm/helper2.c Tue Nov 15 15:56:47 2005 +0100 8.2 +++ b/tools/ioemu/target-i386-dm/helper2.c Tue Nov 15 16:24:31 2005 +0100 8.3 @@ -416,6 +416,7 @@ int main_loop(void) 8.4 FD_ZERO(&wakeup_rfds); 8.5 FD_SET(evtchn_fd, &wakeup_rfds); 8.6 highest_fds = evtchn_fd; 8.7 + env->send_event = 0; 8.8 while (1) { 8.9 if (vm_running) { 8.10 if (shutdown_requested) { 8.11 @@ -431,7 +432,6 @@ int main_loop(void) 8.12 tv.tv_sec = 0; 8.13 tv.tv_usec = 100000; 8.14 8.15 - env->send_event = 0; 8.16 retval = select(highest_fds+1, &wakeup_rfds, NULL, NULL, &tv); 8.17 if (retval == -1) { 8.18 perror("select"); 8.19 @@ -447,12 +447,13 @@ int main_loop(void) 8.20 #define ULONGLONG_MAX ULONG_MAX 8.21 #endif 8.22 8.23 - main_loop_wait(0); 8.24 tun_receive_handler(&rfds); 8.25 if ( FD_ISSET(evtchn_fd, &rfds) ) { 8.26 cpu_handle_ioreq(env); 8.27 } 8.28 + main_loop_wait(0); 8.29 if (env->send_event) { 8.30 + env->send_event = 0; 8.31 struct ioctl_evtchn_notify notify; 8.32 notify.port = ioreq_local_port; 8.33 (void)ioctl(evtchn_fd, IOCTL_EVTCHN_NOTIFY, ¬ify);
9.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 9.2 +++ b/tools/xenmon/COPYING Tue Nov 15 16:24:31 2005 +0100 9.3 @@ -0,0 +1,340 @@ 9.4 + GNU GENERAL PUBLIC LICENSE 9.5 + Version 2, June 1991 9.6 + 9.7 + Copyright (C) 1989, 1991 Free Software Foundation, Inc. 9.8 + 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 9.9 + Everyone is permitted to copy and distribute verbatim copies 9.10 + of this license document, but changing it is not allowed. 9.11 + 9.12 + Preamble 9.13 + 9.14 + The licenses for most software are designed to take away your 9.15 +freedom to share and change it. By contrast, the GNU General Public 9.16 +License is intended to guarantee your freedom to share and change free 9.17 +software--to make sure the software is free for all its users. This 9.18 +General Public License applies to most of the Free Software 9.19 +Foundation's software and to any other program whose authors commit to 9.20 +using it. (Some other Free Software Foundation software is covered by 9.21 +the GNU Library General Public License instead.) You can apply it to 9.22 +your programs, too. 9.23 + 9.24 + When we speak of free software, we are referring to freedom, not 9.25 +price. Our General Public Licenses are designed to make sure that you 9.26 +have the freedom to distribute copies of free software (and charge for 9.27 +this service if you wish), that you receive source code or can get it 9.28 +if you want it, that you can change the software or use pieces of it 9.29 +in new free programs; and that you know you can do these things. 9.30 + 9.31 + To protect your rights, we need to make restrictions that forbid 9.32 +anyone to deny you these rights or to ask you to surrender the rights. 9.33 +These restrictions translate to certain responsibilities for you if you 9.34 +distribute copies of the software, or if you modify it. 9.35 + 9.36 + For example, if you distribute copies of such a program, whether 9.37 +gratis or for a fee, you must give the recipients all the rights that 9.38 +you have. You must make sure that they, too, receive or can get the 9.39 +source code. And you must show them these terms so they know their 9.40 +rights. 9.41 + 9.42 + We protect your rights with two steps: (1) copyright the software, and 9.43 +(2) offer you this license which gives you legal permission to copy, 9.44 +distribute and/or modify the software. 9.45 + 9.46 + Also, for each author's protection and ours, we want to make certain 9.47 +that everyone understands that there is no warranty for this free 9.48 +software. If the software is modified by someone else and passed on, we 9.49 +want its recipients to know that what they have is not the original, so 9.50 +that any problems introduced by others will not reflect on the original 9.51 +authors' reputations. 9.52 + 9.53 + Finally, any free program is threatened constantly by software 9.54 +patents. We wish to avoid the danger that redistributors of a free 9.55 +program will individually obtain patent licenses, in effect making the 9.56 +program proprietary. To prevent this, we have made it clear that any 9.57 +patent must be licensed for everyone's free use or not licensed at all. 9.58 + 9.59 + The precise terms and conditions for copying, distribution and 9.60 +modification follow. 9.61 + 9.62 + GNU GENERAL PUBLIC LICENSE 9.63 + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 9.64 + 9.65 + 0. 
This License applies to any program or other work which contains 9.66 +a notice placed by the copyright holder saying it may be distributed 9.67 +under the terms of this General Public License. The "Program", below, 9.68 +refers to any such program or work, and a "work based on the Program" 9.69 +means either the Program or any derivative work under copyright law: 9.70 +that is to say, a work containing the Program or a portion of it, 9.71 +either verbatim or with modifications and/or translated into another 9.72 +language. (Hereinafter, translation is included without limitation in 9.73 +the term "modification".) Each licensee is addressed as "you". 9.74 + 9.75 +Activities other than copying, distribution and modification are not 9.76 +covered by this License; they are outside its scope. The act of 9.77 +running the Program is not restricted, and the output from the Program 9.78 +is covered only if its contents constitute a work based on the 9.79 +Program (independent of having been made by running the Program). 9.80 +Whether that is true depends on what the Program does. 9.81 + 9.82 + 1. You may copy and distribute verbatim copies of the Program's 9.83 +source code as you receive it, in any medium, provided that you 9.84 +conspicuously and appropriately publish on each copy an appropriate 9.85 +copyright notice and disclaimer of warranty; keep intact all the 9.86 +notices that refer to this License and to the absence of any warranty; 9.87 +and give any other recipients of the Program a copy of this License 9.88 +along with the Program. 9.89 + 9.90 +You may charge a fee for the physical act of transferring a copy, and 9.91 +you may at your option offer warranty protection in exchange for a fee. 9.92 + 9.93 + 2. You may modify your copy or copies of the Program or any portion 9.94 +of it, thus forming a work based on the Program, and copy and 9.95 +distribute such modifications or work under the terms of Section 1 9.96 +above, provided that you also meet all of these conditions: 9.97 + 9.98 + a) You must cause the modified files to carry prominent notices 9.99 + stating that you changed the files and the date of any change. 9.100 + 9.101 + b) You must cause any work that you distribute or publish, that in 9.102 + whole or in part contains or is derived from the Program or any 9.103 + part thereof, to be licensed as a whole at no charge to all third 9.104 + parties under the terms of this License. 9.105 + 9.106 + c) If the modified program normally reads commands interactively 9.107 + when run, you must cause it, when started running for such 9.108 + interactive use in the most ordinary way, to print or display an 9.109 + announcement including an appropriate copyright notice and a 9.110 + notice that there is no warranty (or else, saying that you provide 9.111 + a warranty) and that users may redistribute the program under 9.112 + these conditions, and telling the user how to view a copy of this 9.113 + License. (Exception: if the Program itself is interactive but 9.114 + does not normally print such an announcement, your work based on 9.115 + the Program is not required to print an announcement.) 9.116 + 9.117 +These requirements apply to the modified work as a whole. If 9.118 +identifiable sections of that work are not derived from the Program, 9.119 +and can be reasonably considered independent and separate works in 9.120 +themselves, then this License, and its terms, do not apply to those 9.121 +sections when you distribute them as separate works. 
But when you 9.122 +distribute the same sections as part of a whole which is a work based 9.123 +on the Program, the distribution of the whole must be on the terms of 9.124 +this License, whose permissions for other licensees extend to the 9.125 +entire whole, and thus to each and every part regardless of who wrote it. 9.126 + 9.127 +Thus, it is not the intent of this section to claim rights or contest 9.128 +your rights to work written entirely by you; rather, the intent is to 9.129 +exercise the right to control the distribution of derivative or 9.130 +collective works based on the Program. 9.131 + 9.132 +In addition, mere aggregation of another work not based on the Program 9.133 +with the Program (or with a work based on the Program) on a volume of 9.134 +a storage or distribution medium does not bring the other work under 9.135 +the scope of this License. 9.136 + 9.137 + 3. You may copy and distribute the Program (or a work based on it, 9.138 +under Section 2) in object code or executable form under the terms of 9.139 +Sections 1 and 2 above provided that you also do one of the following: 9.140 + 9.141 + a) Accompany it with the complete corresponding machine-readable 9.142 + source code, which must be distributed under the terms of Sections 9.143 + 1 and 2 above on a medium customarily used for software interchange; or, 9.144 + 9.145 + b) Accompany it with a written offer, valid for at least three 9.146 + years, to give any third party, for a charge no more than your 9.147 + cost of physically performing source distribution, a complete 9.148 + machine-readable copy of the corresponding source code, to be 9.149 + distributed under the terms of Sections 1 and 2 above on a medium 9.150 + customarily used for software interchange; or, 9.151 + 9.152 + c) Accompany it with the information you received as to the offer 9.153 + to distribute corresponding source code. (This alternative is 9.154 + allowed only for noncommercial distribution and only if you 9.155 + received the program in object code or executable form with such 9.156 + an offer, in accord with Subsection b above.) 9.157 + 9.158 +The source code for a work means the preferred form of the work for 9.159 +making modifications to it. For an executable work, complete source 9.160 +code means all the source code for all modules it contains, plus any 9.161 +associated interface definition files, plus the scripts used to 9.162 +control compilation and installation of the executable. However, as a 9.163 +special exception, the source code distributed need not include 9.164 +anything that is normally distributed (in either source or binary 9.165 +form) with the major components (compiler, kernel, and so on) of the 9.166 +operating system on which the executable runs, unless that component 9.167 +itself accompanies the executable. 9.168 + 9.169 +If distribution of executable or object code is made by offering 9.170 +access to copy from a designated place, then offering equivalent 9.171 +access to copy the source code from the same place counts as 9.172 +distribution of the source code, even though third parties are not 9.173 +compelled to copy the source along with the object code. 9.174 + 9.175 + 4. You may not copy, modify, sublicense, or distribute the Program 9.176 +except as expressly provided under this License. Any attempt 9.177 +otherwise to copy, modify, sublicense or distribute the Program is 9.178 +void, and will automatically terminate your rights under this License. 
9.179 +However, parties who have received copies, or rights, from you under 9.180 +this License will not have their licenses terminated so long as such 9.181 +parties remain in full compliance. 9.182 + 9.183 + 5. You are not required to accept this License, since you have not 9.184 +signed it. However, nothing else grants you permission to modify or 9.185 +distribute the Program or its derivative works. These actions are 9.186 +prohibited by law if you do not accept this License. Therefore, by 9.187 +modifying or distributing the Program (or any work based on the 9.188 +Program), you indicate your acceptance of this License to do so, and 9.189 +all its terms and conditions for copying, distributing or modifying 9.190 +the Program or works based on it. 9.191 + 9.192 + 6. Each time you redistribute the Program (or any work based on the 9.193 +Program), the recipient automatically receives a license from the 9.194 +original licensor to copy, distribute or modify the Program subject to 9.195 +these terms and conditions. You may not impose any further 9.196 +restrictions on the recipients' exercise of the rights granted herein. 9.197 +You are not responsible for enforcing compliance by third parties to 9.198 +this License. 9.199 + 9.200 + 7. If, as a consequence of a court judgment or allegation of patent 9.201 +infringement or for any other reason (not limited to patent issues), 9.202 +conditions are imposed on you (whether by court order, agreement or 9.203 +otherwise) that contradict the conditions of this License, they do not 9.204 +excuse you from the conditions of this License. If you cannot 9.205 +distribute so as to satisfy simultaneously your obligations under this 9.206 +License and any other pertinent obligations, then as a consequence you 9.207 +may not distribute the Program at all. For example, if a patent 9.208 +license would not permit royalty-free redistribution of the Program by 9.209 +all those who receive copies directly or indirectly through you, then 9.210 +the only way you could satisfy both it and this License would be to 9.211 +refrain entirely from distribution of the Program. 9.212 + 9.213 +If any portion of this section is held invalid or unenforceable under 9.214 +any particular circumstance, the balance of the section is intended to 9.215 +apply and the section as a whole is intended to apply in other 9.216 +circumstances. 9.217 + 9.218 +It is not the purpose of this section to induce you to infringe any 9.219 +patents or other property right claims or to contest validity of any 9.220 +such claims; this section has the sole purpose of protecting the 9.221 +integrity of the free software distribution system, which is 9.222 +implemented by public license practices. Many people have made 9.223 +generous contributions to the wide range of software distributed 9.224 +through that system in reliance on consistent application of that 9.225 +system; it is up to the author/donor to decide if he or she is willing 9.226 +to distribute software through any other system and a licensee cannot 9.227 +impose that choice. 9.228 + 9.229 +This section is intended to make thoroughly clear what is believed to 9.230 +be a consequence of the rest of this License. 9.231 + 9.232 + 8. 
If the distribution and/or use of the Program is restricted in 9.233 +certain countries either by patents or by copyrighted interfaces, the 9.234 +original copyright holder who places the Program under this License 9.235 +may add an explicit geographical distribution limitation excluding 9.236 +those countries, so that distribution is permitted only in or among 9.237 +countries not thus excluded. In such case, this License incorporates 9.238 +the limitation as if written in the body of this License. 9.239 + 9.240 + 9. The Free Software Foundation may publish revised and/or new versions 9.241 +of the General Public License from time to time. Such new versions will 9.242 +be similar in spirit to the present version, but may differ in detail to 9.243 +address new problems or concerns. 9.244 + 9.245 +Each version is given a distinguishing version number. If the Program 9.246 +specifies a version number of this License which applies to it and "any 9.247 +later version", you have the option of following the terms and conditions 9.248 +either of that version or of any later version published by the Free 9.249 +Software Foundation. If the Program does not specify a version number of 9.250 +this License, you may choose any version ever published by the Free Software 9.251 +Foundation. 9.252 + 9.253 + 10. If you wish to incorporate parts of the Program into other free 9.254 +programs whose distribution conditions are different, write to the author 9.255 +to ask for permission. For software which is copyrighted by the Free 9.256 +Software Foundation, write to the Free Software Foundation; we sometimes 9.257 +make exceptions for this. Our decision will be guided by the two goals 9.258 +of preserving the free status of all derivatives of our free software and 9.259 +of promoting the sharing and reuse of software generally. 9.260 + 9.261 + NO WARRANTY 9.262 + 9.263 + 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 9.264 +FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 9.265 +OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 9.266 +PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 9.267 +OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 9.268 +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 9.269 +TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 9.270 +PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 9.271 +REPAIR OR CORRECTION. 9.272 + 9.273 + 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 9.274 +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 9.275 +REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 9.276 +INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 9.277 +OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 9.278 +TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 9.279 +YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 9.280 +PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 9.281 +POSSIBILITY OF SUCH DAMAGES. 
9.282 + 9.283 + END OF TERMS AND CONDITIONS 9.284 + 9.285 + How to Apply These Terms to Your New Programs 9.286 + 9.287 + If you develop a new program, and you want it to be of the greatest 9.288 +possible use to the public, the best way to achieve this is to make it 9.289 +free software which everyone can redistribute and change under these terms. 9.290 + 9.291 + To do so, attach the following notices to the program. It is safest 9.292 +to attach them to the start of each source file to most effectively 9.293 +convey the exclusion of warranty; and each file should have at least 9.294 +the "copyright" line and a pointer to where the full notice is found. 9.295 + 9.296 + <one line to give the program's name and a brief idea of what it does.> 9.297 + Copyright (C) <year> <name of author> 9.298 + 9.299 + This program is free software; you can redistribute it and/or modify 9.300 + it under the terms of the GNU General Public License as published by 9.301 + the Free Software Foundation; either version 2 of the License, or 9.302 + (at your option) any later version. 9.303 + 9.304 + This program is distributed in the hope that it will be useful, 9.305 + but WITHOUT ANY WARRANTY; without even the implied warranty of 9.306 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9.307 + GNU General Public License for more details. 9.308 + 9.309 + You should have received a copy of the GNU General Public License 9.310 + along with this program; if not, write to the Free Software 9.311 + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 9.312 + 9.313 + 9.314 +Also add information on how to contact you by electronic and paper mail. 9.315 + 9.316 +If the program is interactive, make it output a short notice like this 9.317 +when it starts in an interactive mode: 9.318 + 9.319 + Gnomovision version 69, Copyright (C) year name of author 9.320 + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 9.321 + This is free software, and you are welcome to redistribute it 9.322 + under certain conditions; type `show c' for details. 9.323 + 9.324 +The hypothetical commands `show w' and `show c' should show the appropriate 9.325 +parts of the General Public License. Of course, the commands you use may 9.326 +be called something other than `show w' and `show c'; they could even be 9.327 +mouse-clicks or menu items--whatever suits your program. 9.328 + 9.329 +You should also get your employer (if you work as a programmer) or your 9.330 +school, if any, to sign a "copyright disclaimer" for the program, if 9.331 +necessary. Here is a sample; alter the names: 9.332 + 9.333 + Yoyodyne, Inc., hereby disclaims all copyright interest in the program 9.334 + `Gnomovision' (which makes passes at compilers) written by James Hacker. 9.335 + 9.336 + <signature of Ty Coon>, 1 April 1989 9.337 + Ty Coon, President of Vice 9.338 + 9.339 +This General Public License does not permit incorporating your program into 9.340 +proprietary programs. If your program is a subroutine library, you may 9.341 +consider it more useful to permit linking proprietary applications with the 9.342 +library. If this is what you want to do, use the GNU Library General 9.343 +Public License instead of this License.
10.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 10.2 +++ b/tools/xenmon/Makefile Tue Nov 15 16:24:31 2005 +0100 10.3 @@ -0,0 +1,51 @@ 10.4 +# Copyright (C) HP Labs, Palo Alto and Fort Collins, 2005 10.5 +# Author: Diwaker Gupta <diwaker.gupta@hp.com> 10.6 +# 10.7 +# This program is free software; you can redistribute it and/or modify 10.8 +# it under the terms of the GNU General Public License as published by 10.9 +# the Free Software Foundation; under version 2 of the License. 10.10 +# 10.11 +# This program is distributed in the hope that it will be useful, 10.12 +# but WITHOUT ANY WARRANTY; without even the implied warranty of 10.13 +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10.14 +# GNU General Public License for more details. 10.15 + 10.16 +INSTALL = install 10.17 +INSTALL_PROG = $(INSTALL) -m0755 10.18 +INSTALL_DIR = $(INSTALL) -d -m0755 10.19 +INSTALL_DATA = $(INSTALL) -m064 10.20 + 10.21 +prefix=/usr/local 10.22 +mandir=$(prefix)/share/man 10.23 +man1dir=$(mandir)/man1 10.24 +sbindir=$(prefix)/sbin 10.25 + 10.26 +XEN_ROOT=../.. 10.27 +include $(XEN_ROOT)/tools/Rules.mk 10.28 + 10.29 +CFLAGS += -Wall -Werror -g 10.30 +CFLAGS += -I $(XEN_XC) 10.31 +CFLAGS += -I $(XEN_LIBXC) 10.32 +LDFLAGS += -L $(XEN_LIBXC) 10.33 + 10.34 +BIN = setmask xenbaked 10.35 +SCRIPTS = xenmon.py 10.36 + 10.37 +all: build 10.38 + 10.39 +build: $(BIN) 10.40 + 10.41 +install: xenbaked setmask 10.42 + [ -d $(DESTDIR)$(sbindir) ] || $(INSTALL_DIR) $(DESTDIR)$(sbindir) 10.43 + $(INSTALL_PROG) xenbaked $(DESTDIR)$(sbindir)/xenbaked 10.44 + $(INSTALL_PROG) setmask $(DESTDIR)$(sbindir)/setmask 10.45 + $(INSTALL_PROG) xenmon.py $(DESTDIR)$(sbindir)/xenmon.py 10.46 + 10.47 +clean: 10.48 + rm -f $(BIN) 10.49 + 10.50 + 10.51 +%: %.c Makefile 10.52 + $(CC) $(CFLAGS) $(LDFLAGS) -lxenctrl -o $@ $< 10.53 + 10.54 +
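
The new Makefile builds setmask and xenbaked through the generic %: %.c pattern rule, linking each tool against libxenctrl, and installs them alongside xenmon.py under $(prefix)/sbin. A typical build and staged install might look like the following sketch; the DESTDIR staging directory is illustrative, and overriding prefix on the make command line is assumed to behave in the usual way.

    make -C tools/xenmon                           # builds setmask and xenbaked via the %: %.c rule
    make -C tools/xenmon install \
        DESTDIR=/tmp/stage prefix=/usr             # copies the binaries and xenmon.py into /tmp/stage/usr/sbin
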
11.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 11.2 +++ b/tools/xenmon/README Tue Nov 15 16:24:31 2005 +0100 11.3 @@ -0,0 +1,104 @@ 11.4 +Xen Performance Monitor 11.5 +----------------------- 11.6 + 11.7 +The xenmon tools make use of the existing xen tracing feature to provide fine 11.8 +grained reporting of various domain related metrics. It should be stressed that 11.9 +the xenmon.py script included here is just an example of the data that may be 11.10 +displayed. The xenbake demon keeps a large amount of history in a shared memory 11.11 +area that may be accessed by tools such as xenmon. 11.12 + 11.13 +For each domain, xenmon reports various metrics. One part of the display is a 11.14 +group of metrics that have been accumulated over the last second, while another 11.15 +part of the display shows data measured over 10 seconds. Other measurement 11.16 +intervals are possible, but we have just chosen 1s and 10s as an example. 11.17 + 11.18 + 11.19 +Execution Count 11.20 +--------------- 11.21 + o The number of times that a domain was scheduled to run (ie, dispatched) over 11.22 + the measurement interval 11.23 + 11.24 + 11.25 +CPU usage 11.26 +--------- 11.27 + o Total time used over the measurement interval 11.28 + o Usage expressed as a percentage of the measurement interval 11.29 + o Average cpu time used during each execution of the domain 11.30 + 11.31 + 11.32 +Waiting time 11.33 +------------ 11.34 +This is how much time the domain spent waiting to run, or put another way, the 11.35 +amount of time the domain spent in the "runnable" state (or on the run queue) 11.36 +but not actually running. Xenmon displays: 11.37 + 11.38 + o Total time waiting over the measurement interval 11.39 + o Wait time expressed as a percentage of the measurement interval 11.40 + o Average waiting time for each execution of the domain 11.41 + 11.42 +Blocked time 11.43 +------------ 11.44 +This is how much time the domain spent blocked (or sleeping); Put another way, 11.45 +the amount of time the domain spent not needing/wanting the cpu because it was 11.46 +waiting for some event (ie, I/O). Xenmon reports: 11.47 + 11.48 + o Total time blocked over the measurement interval 11.49 + o Blocked time expressed as a percentage of the measurement interval 11.50 + o Blocked time per I/O (see I/O count below) 11.51 + 11.52 +Allocation time 11.53 +--------------- 11.54 +This is how much cpu time was allocated to the domain by the scheduler; This is 11.55 +distinct from cpu usage since the "time slice" given to a domain is frequently 11.56 +cut short for one reason or another, ie, the domain requests I/O and blocks. 11.57 +Xenmon reports: 11.58 + 11.59 + o Average allocation time per execution (ie, time slice) 11.60 + o Min and Max allocation times 11.61 + 11.62 +I/O Count 11.63 +--------- 11.64 +This is a rough measure of I/O requested by the domain. The number of page 11.65 +exchanges (or page "flips") between the domain and dom0 are counted. The 11.66 +number of pages exchanged may not accurately reflect the number of bytes 11.67 +transferred to/from a domain due to partial pages being used by the network 11.68 +protocols, etc. But it does give a good sense of the magnitude of I/O being 11.69 +requested by a domain. 
Xenmon reports: 11.70 + 11.71 + o Total number of page exchanges during the measurement interval 11.72 + o Average number of page exchanges per execution of the domain 11.73 + 11.74 + 11.75 +Usage Notes and issues 11.76 +---------------------- 11.77 + - Start xenmon by simply running xenmon.py; The xenbake demon is started and 11.78 + stopped automatically by xenmon. 11.79 + - To see the various options for xenmon, run xenmon -h. Ditto for xenbaked. 11.80 + - xenmon also has an option (-n) to output log data to a file instead of the 11.81 + curses interface. 11.82 + - NDOMAINS is defined to be 32, but can be changed by recompiling xenbaked 11.83 + - Xenmon.py appears to create 1-2% cpu overhead; Part of this is just the 11.84 + overhead of the python interpreter. Part of it may be the number of trace 11.85 + records being generated. The number of trace records generated can be 11.86 + limited by setting the trace mask (with a dom0 Op), which controls which 11.87 + events cause a trace record to be emitted. 11.88 + - To exit xenmon, type 'q' 11.89 + - To cycle the display to other physical cpu's, type 'c' 11.90 + 11.91 +Future Work 11.92 +----------- 11.93 +o RPC interface to allow external entities to programmatically access processed data 11.94 +o I/O Count batching to reduce number of trace records generated 11.95 + 11.96 +Case Study 11.97 +---------- 11.98 +We have written a case study which demonstrates some of the usefulness of 11.99 +this tool and the metrics reported. It is available at: 11.100 +http://www.hpl.hp.com/techreports/2005/HPL-2005-187.html 11.101 + 11.102 +Authors 11.103 +------- 11.104 +Diwaker Gupta <diwaker.gupta@hp.com> 11.105 +Rob Gardner <rob.gardner@hp.com> 11.106 +Lucy Cherkasova <lucy.cherkasova.hp.com> 11.107 +
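
The README above gives its usage notes in prose; as a quick reference, the commands below cover the cases it mentions. The flags are the ones named in the README itself, and exact option spellings should be confirmed against the tool's own help output.

    xenmon.py          # start the curses display; xenbaked is started and stopped automatically
    xenmon.py -h       # list the available options (likewise for xenbaked)
    xenmon.py -n       # write log data to a file instead of using the curses interface

Within the curses display, the README notes that 'q' exits and 'c' cycles the display across physical CPUs.
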
12.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 12.2 +++ b/tools/xenmon/setmask.c Tue Nov 15 16:24:31 2005 +0100 12.3 @@ -0,0 +1,90 @@ 12.4 +/****************************************************************************** 12.5 + * tools/xenmon/setmask.c 12.6 + * 12.7 + * Simple utility for getting/setting the event mask 12.8 + * 12.9 + * Copyright (C) 2005 by Hewlett-Packard, Palo Alto and Fort Collins 12.10 + * 12.11 + * Authors: Lucy Cherkasova, lucy.cherkasova.hp.com 12.12 + * Rob Gardner, rob.gardner@hp.com 12.13 + * Diwaker Gupta, diwaker.gupta@hp.com 12.14 + * Date: August, 2005 12.15 + * 12.16 + * This program is free software; you can redistribute it and/or modify 12.17 + * it under the terms of the GNU General Public License as published by 12.18 + * the Free Software Foundation; under version 2 of the License. 12.19 + * 12.20 + * This program is distributed in the hope that it will be useful, 12.21 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12.22 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12.23 + * GNU General Public License for more details. 12.24 + * 12.25 + * You should have received a copy of the GNU General Public License 12.26 + * along with this program; if not, write to the Free Software 12.27 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 12.28 + */ 12.29 + 12.30 +#include <stdlib.h> 12.31 +#include <stdio.h> 12.32 +#include <sys/types.h> 12.33 +#include <fcntl.h> 12.34 +#include <unistd.h> 12.35 +#include <errno.h> 12.36 +#include <getopt.h> 12.37 +#include <xenctrl.h> 12.38 +#include <xen/xen.h> 12.39 +typedef struct { int counter; } atomic_t; 12.40 +#include <xen/trace.h> 12.41 + 12.42 +#define XENMON (TRC_SCHED_DOM_ADD | TRC_SCHED_DOM_REM | TRC_SCHED_SWITCH_INFPREV | TRC_SCHED_SWITCH_INFNEXT | TRC_SCHED_BLOCK | TRC_SCHED_SLEEP | TRC_SCHED_WAKE | TRC_MEM_PAGE_GRANT_TRANSFER) 12.43 + 12.44 +int main(int argc, char * argv[]) 12.45 +{ 12.46 + 12.47 + dom0_op_t op; 12.48 + int ret; 12.49 + 12.50 + int xc_handle = xc_interface_open(); 12.51 + op.cmd = DOM0_TBUFCONTROL; 12.52 + op.interface_version = DOM0_INTERFACE_VERSION; 12.53 + op.u.tbufcontrol.op = DOM0_TBUF_GET_INFO; 12.54 + ret = xc_dom0_op(xc_handle, &op); 12.55 + if ( ret != 0 ) 12.56 + { 12.57 + perror("Failure to get event mask from Xen"); 12.58 + exit(1); 12.59 + } 12.60 + else 12.61 + { 12.62 + printf("Current event mask: 0x%.8x\n", op.u.tbufcontrol.evt_mask); 12.63 + } 12.64 + 12.65 + op.cmd = DOM0_TBUFCONTROL; 12.66 + op.interface_version = DOM0_INTERFACE_VERSION; 12.67 + op.u.tbufcontrol.op = DOM0_TBUF_SET_EVT_MASK; 12.68 + op.u.tbufcontrol.evt_mask = XENMON; 12.69 + 12.70 + ret = xc_dom0_op(xc_handle, &op); 12.71 + printf("Setting mask to 0x%.8x\n", op.u.tbufcontrol.evt_mask); 12.72 + if ( ret != 0 ) 12.73 + { 12.74 + perror("Failure to get scheduler ID from Xen"); 12.75 + exit(1); 12.76 + } 12.77 + 12.78 + op.cmd = DOM0_TBUFCONTROL; 12.79 + op.interface_version = DOM0_INTERFACE_VERSION; 12.80 + op.u.tbufcontrol.op = DOM0_TBUF_GET_INFO; 12.81 + ret = xc_dom0_op(xc_handle, &op); 12.82 + if ( ret != 0 ) 12.83 + { 12.84 + perror("Failure to get event mask from Xen"); 12.85 + exit(1); 12.86 + } 12.87 + else 12.88 + { 12.89 + printf("Current event mask: 0x%.8x\n", op.u.tbufcontrol.evt_mask); 12.90 + } 12.91 + xc_interface_close(xc_handle); 12.92 + return 0; 12.93 +}
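The XENMON mask that setmask installs is simply the bitwise OR of the trace events xenbaked consumes: domain add/remove, the two context-switch records, block/sleep/wake, and page-grant transfers. The sketch below is a hedged illustration of composing such a mask; the bit values are made up, since the real TRC_* constants are defined in Xen's public trace.h header, and the mask is actually applied inside the hypervisor's trace code rather than by anything in this snippet:

    # Hypothetical event values purely for illustration; the real TRC_*
    # constants in xen's trace.h differ from these.
    TRC_SCHED_DOM_ADD           = 1 << 0
    TRC_SCHED_DOM_REM           = 1 << 1
    TRC_SCHED_SLEEP             = 1 << 2
    TRC_SCHED_WAKE              = 1 << 3
    TRC_SCHED_BLOCK             = 1 << 4
    TRC_SCHED_SWITCH_INFPREV    = 1 << 5
    TRC_SCHED_SWITCH_INFNEXT    = 1 << 6
    TRC_MEM_PAGE_GRANT_TRANSFER = 1 << 7

    # Same composition as the XENMON define in setmask.c: OR the events of interest.
    XENMON_MASK = (TRC_SCHED_DOM_ADD | TRC_SCHED_DOM_REM |
                   TRC_SCHED_SWITCH_INFPREV | TRC_SCHED_SWITCH_INFNEXT |
                   TRC_SCHED_BLOCK | TRC_SCHED_SLEEP | TRC_SCHED_WAKE |
                   TRC_MEM_PAGE_GRANT_TRANSFER)

    # Printed in the same 0x%.8x style that setmask uses for "Current event mask".
    print("event mask: 0x%.8x" % XENMON_MASK)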
13.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 13.2 +++ b/tools/xenmon/xenbaked.c Tue Nov 15 16:24:31 2005 +0100 13.3 @@ -0,0 +1,1029 @@ 13.4 +/****************************************************************************** 13.5 + * tools/xenbaked.c 13.6 + * 13.7 + * Tool for collecting raw trace buffer data from Xen and 13.8 + * performing some accumulation operations and other processing 13.9 + * on it. 13.10 + * 13.11 + * Copyright (C) 2004 by Intel Research Cambridge 13.12 + * Copyright (C) 2005 by Hewlett Packard, Palo Alto and Fort Collins 13.13 + * 13.14 + * Authors: Diwaker Gupta, diwaker.gupta@hp.com 13.15 + * Rob Gardner, rob.gardner@hp.com 13.16 + * Lucy Cherkasova, lucy.cherkasova.hp.com 13.17 + * Much code based on xentrace, authored by Mark Williamson, mark.a.williamson@intel.com 13.18 + * Date: November, 2005 13.19 + * 13.20 + * This program is free software; you can redistribute it and/or modify 13.21 + * it under the terms of the GNU General Public License as published by 13.22 + * the Free Software Foundation; under version 2 of the License. 13.23 + * 13.24 + * This program is distributed in the hope that it will be useful, 13.25 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13.26 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13.27 + * GNU General Public License for more details. 13.28 + * 13.29 + * You should have received a copy of the GNU General Public License 13.30 + * along with this program; if not, write to the Free Software 13.31 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 13.32 + */ 13.33 + 13.34 +#include <time.h> 13.35 +#include <stdlib.h> 13.36 +#include <stdio.h> 13.37 +#include <sys/mman.h> 13.38 +#include <sys/stat.h> 13.39 +#include <sys/types.h> 13.40 +#include <fcntl.h> 13.41 +#include <unistd.h> 13.42 +#include <errno.h> 13.43 +#include <argp.h> 13.44 +#include <signal.h> 13.45 +#include <xenctrl.h> 13.46 +#include <xen/xen.h> 13.47 +#include <string.h> 13.48 + 13.49 +#include "xc_private.h" 13.50 +typedef struct { int counter; } atomic_t; 13.51 +#define _atomic_read(v) ((v).counter) 13.52 + 13.53 +#include <xen/trace.h> 13.54 +#include "xenbaked.h" 13.55 + 13.56 +extern FILE *stderr; 13.57 + 13.58 +/***** Compile time configuration of defaults ********************************/ 13.59 + 13.60 +/* when we've got more records than this waiting, we log it to the output */ 13.61 +#define NEW_DATA_THRESH 1 13.62 + 13.63 +/* sleep for this long (milliseconds) between checking the trace buffers */ 13.64 +#define POLL_SLEEP_MILLIS 100 13.65 + 13.66 +/* Size of time period represented by each sample */ 13.67 +#define MS_PER_SAMPLE 100 13.68 + 13.69 +/* CPU Frequency */ 13.70 +#define MHZ 13.71 +#define CPU_FREQ 2660 MHZ 13.72 + 13.73 +/***** The code **************************************************************/ 13.74 + 13.75 +typedef struct settings_st { 13.76 + char *outfile; 13.77 + struct timespec poll_sleep; 13.78 + unsigned long new_data_thresh; 13.79 + unsigned long ms_per_sample; 13.80 + double cpu_freq; 13.81 +} settings_t; 13.82 + 13.83 +settings_t opts; 13.84 + 13.85 +int interrupted = 0; /* gets set if we get a SIGHUP */ 13.86 +int rec_count = 0; 13.87 +time_t start_time; 13.88 +int dom0_flips = 0; 13.89 + 13.90 +_new_qos_data *new_qos; 13.91 +_new_qos_data **cpu_qos_data; 13.92 + 13.93 + 13.94 +#define ID(X) ((X>NDOMAINS-1)?(NDOMAINS-1):X) 13.95 + 13.96 +// array of currently running domains, indexed by cpu 13.97 +int *running = NULL; 13.98 + 13.99 +// number of cpu's on this 
platform 13.100 +int NCPU = 0; 13.101 + 13.102 + 13.103 +void init_current(int ncpu) 13.104 +{ 13.105 + running = calloc(ncpu, sizeof(int)); 13.106 + NCPU = ncpu; 13.107 + printf("Initialized with %d %s\n", ncpu, (ncpu == 1) ? "cpu" : "cpu's"); 13.108 +} 13.109 + 13.110 +int is_current(int domain, int cpu) 13.111 +{ 13.112 + // int i; 13.113 + 13.114 + // for (i=0; i<NCPU; i++) 13.115 + if (running[cpu] == domain) 13.116 + return 1; 13.117 + return 0; 13.118 +} 13.119 + 13.120 + 13.121 +// return the domain that's currently running on the given cpu 13.122 +int current(int cpu) 13.123 +{ 13.124 + return running[cpu]; 13.125 +} 13.126 + 13.127 +void set_current(int cpu, int domain) 13.128 +{ 13.129 + running[cpu] = domain; 13.130 +} 13.131 + 13.132 + 13.133 + 13.134 +void close_handler(int signal) 13.135 +{ 13.136 + interrupted = 1; 13.137 +} 13.138 + 13.139 +#if 0 13.140 +void dump_record(int cpu, struct t_rec *x) 13.141 +{ 13.142 + printf("record: cpu=%x, tsc=%lx, event=%x, d1=%lx\n", 13.143 + cpu, x->cycles, x->event, x->data[0]); 13.144 +} 13.145 +#endif 13.146 + 13.147 +/** 13.148 + * millis_to_timespec - convert a time in milliseconds to a struct timespec 13.149 + * @millis: time interval in milliseconds 13.150 + */ 13.151 +struct timespec millis_to_timespec(unsigned long millis) 13.152 +{ 13.153 + struct timespec spec; 13.154 + 13.155 + spec.tv_sec = millis / 1000; 13.156 + spec.tv_nsec = (millis % 1000) * 1000; 13.157 + 13.158 + return spec; 13.159 +} 13.160 + 13.161 + 13.162 +typedef struct 13.163 +{ 13.164 + int event_count; 13.165 + int event_id; 13.166 + char *text; 13.167 +} stat_map_t; 13.168 + 13.169 +stat_map_t stat_map[] = { 13.170 + { 0, 0, "Other" }, 13.171 + { 0, TRC_SCHED_DOM_ADD, "Add Domain" }, 13.172 + { 0, TRC_SCHED_DOM_REM, "Remove Domain" }, 13.173 + { 0, TRC_SCHED_SLEEP, "Sleep" }, 13.174 + { 0, TRC_SCHED_WAKE, "Wake" }, 13.175 + { 0, TRC_SCHED_BLOCK, "Block" }, 13.176 + { 0, TRC_SCHED_SWITCH, "Switch" }, 13.177 + { 0, TRC_SCHED_S_TIMER_FN, "Timer Func"}, 13.178 + { 0, TRC_SCHED_SWITCH_INFPREV, "Switch Prev" }, 13.179 + { 0, TRC_SCHED_SWITCH_INFNEXT, "Switch Next" }, 13.180 + { 0, TRC_MEM_PAGE_GRANT_MAP, "Page Map" }, 13.181 + { 0, TRC_MEM_PAGE_GRANT_UNMAP, "Page Unmap" }, 13.182 + { 0, TRC_MEM_PAGE_GRANT_TRANSFER, "Page Transfer" }, 13.183 + { 0, 0, 0 } 13.184 +}; 13.185 + 13.186 + 13.187 +void check_gotten_sum(void) 13.188 +{ 13.189 +#if 0 13.190 + uint64_t sum, ns; 13.191 + extern uint64_t total_ns_gotten(uint64_t*); 13.192 + double percent; 13.193 + int i; 13.194 + 13.195 + for (i=0; i<NCPU; i++) { 13.196 + new_qos = cpu_qos_data[i]; 13.197 + ns = billion; 13.198 + sum = total_ns_gotten(&ns); 13.199 + 13.200 + printf("[cpu%d] ns_gotten over all domains = %lldns, over %lldns\n", 13.201 + i, sum, ns); 13.202 + percent = (double) sum; 13.203 + percent = (100.0*percent) / (double)ns; 13.204 + printf(" ==> ns_gotten = %7.3f%%\n", percent); 13.205 + } 13.206 +#endif 13.207 +} 13.208 + 13.209 + 13.210 + 13.211 +void dump_stats(void) 13.212 +{ 13.213 + stat_map_t *smt = stat_map; 13.214 + time_t end_time, run_time; 13.215 + 13.216 + time(&end_time); 13.217 + 13.218 + run_time = end_time - start_time; 13.219 + 13.220 + printf("Event counts:\n"); 13.221 + while (smt->text != NULL) { 13.222 + printf("%08d\t%s\n", smt->event_count, smt->text); 13.223 + smt++; 13.224 + } 13.225 + 13.226 + printf("processed %d total records in %d seconds (%ld per second)\n", 13.227 + rec_count, (int)run_time, rec_count/run_time); 13.228 + 13.229 + check_gotten_sum(); 13.230 +} 13.231 + 
13.232 +void log_event(int event_id) 13.233 +{ 13.234 + stat_map_t *smt = stat_map; 13.235 + 13.236 + // printf("event_id = 0x%x\n", event_id); 13.237 + 13.238 + while (smt->text != NULL) { 13.239 + if (smt->event_id == event_id) { 13.240 + smt->event_count++; 13.241 + return; 13.242 + } 13.243 + smt++; 13.244 + } 13.245 + if (smt->text == NULL) 13.246 + stat_map[0].event_count++; // other 13.247 +} 13.248 + 13.249 + 13.250 + 13.251 +/** 13.252 + * get_tbufs - get pointer to and size of the trace buffers 13.253 + * @mfn: location to store mfn of the trace buffers to 13.254 + * @size: location to store the size of a trace buffer to 13.255 + * 13.256 + * Gets the machine address of the trace pointer area and the size of the 13.257 + * per CPU buffers. 13.258 + */ 13.259 +void get_tbufs(unsigned long *mfn, unsigned long *size) 13.260 +{ 13.261 + int ret; 13.262 + dom0_op_t op; /* dom0 op we'll build */ 13.263 + int xc_handle = xc_interface_open(); /* for accessing control interface */ 13.264 + 13.265 + op.cmd = DOM0_TBUFCONTROL; 13.266 + op.interface_version = DOM0_INTERFACE_VERSION; 13.267 + op.u.tbufcontrol.op = DOM0_TBUF_GET_INFO; 13.268 + 13.269 + ret = do_dom0_op(xc_handle, &op); 13.270 + 13.271 + xc_interface_close(xc_handle); 13.272 + 13.273 + if ( ret != 0 ) 13.274 + { 13.275 + PERROR("Failure to get trace buffer pointer from Xen"); 13.276 + exit(EXIT_FAILURE); 13.277 + } 13.278 + 13.279 + *mfn = op.u.tbufcontrol.buffer_mfn; 13.280 + *size = op.u.tbufcontrol.size; 13.281 +} 13.282 + 13.283 +/** 13.284 + * map_tbufs - memory map Xen trace buffers into user space 13.285 + * @tbufs_mfn: mfn of the trace buffers 13.286 + * @num: number of trace buffers to map 13.287 + * @size: size of each trace buffer 13.288 + * 13.289 + * Maps the Xen trace buffers them into process address space. 13.290 + */ 13.291 +struct t_buf *map_tbufs(unsigned long tbufs_mfn, unsigned int num, 13.292 + unsigned long size) 13.293 +{ 13.294 + int xc_handle; /* file descriptor for /proc/xen/privcmd */ 13.295 + struct t_buf *tbufs_mapped; 13.296 + 13.297 + xc_handle = xc_interface_open(); 13.298 + 13.299 + if ( xc_handle < 0 ) 13.300 + { 13.301 + PERROR("Open /proc/xen/privcmd when mapping trace buffers\n"); 13.302 + exit(EXIT_FAILURE); 13.303 + } 13.304 + 13.305 + tbufs_mapped = xc_map_foreign_range(xc_handle, 0 /* Dom 0 ID */, 13.306 + size * num, PROT_READ | PROT_WRITE, 13.307 + tbufs_mfn); 13.308 + 13.309 + xc_interface_close(xc_handle); 13.310 + 13.311 + if ( tbufs_mapped == 0 ) 13.312 + { 13.313 + PERROR("Failed to mmap trace buffers"); 13.314 + exit(EXIT_FAILURE); 13.315 + } 13.316 + 13.317 + return tbufs_mapped; 13.318 +} 13.319 + 13.320 +/** 13.321 + * init_bufs_ptrs - initialises an array of pointers to the trace buffers 13.322 + * @bufs_mapped: the userspace address where the trace buffers are mapped 13.323 + * @num: number of trace buffers 13.324 + * @size: trace buffer size 13.325 + * 13.326 + * Initialises an array of pointers to individual trace buffers within the 13.327 + * mapped region containing all trace buffers. 
13.328 + */ 13.329 +struct t_buf **init_bufs_ptrs(void *bufs_mapped, unsigned int num, 13.330 + unsigned long size) 13.331 +{ 13.332 + int i; 13.333 + struct t_buf **user_ptrs; 13.334 + 13.335 + user_ptrs = (struct t_buf **)calloc(num, sizeof(struct t_buf *)); 13.336 + if ( user_ptrs == NULL ) 13.337 + { 13.338 + PERROR( "Failed to allocate memory for buffer pointers\n"); 13.339 + exit(EXIT_FAILURE); 13.340 + } 13.341 + 13.342 + /* initialise pointers to the trace buffers - given the size of a trace 13.343 + * buffer and the value of bufs_maped, we can easily calculate these */ 13.344 + for ( i = 0; i<num; i++ ) 13.345 + user_ptrs[i] = (struct t_buf *)((unsigned long)bufs_mapped + size * i); 13.346 + 13.347 + return user_ptrs; 13.348 +} 13.349 + 13.350 + 13.351 +/** 13.352 + * init_rec_ptrs - initialises data area pointers to locations in user space 13.353 + * @tbufs_mfn: base mfn of the trace buffer area 13.354 + * @tbufs_mapped: user virtual address of base of trace buffer area 13.355 + * @meta: array of user-space pointers to struct t_buf's of metadata 13.356 + * @num: number of trace buffers 13.357 + * 13.358 + * Initialises data area pointers to the locations that data areas have been 13.359 + * mapped in user space. Note that the trace buffer metadata contains machine 13.360 + * pointers - the array returned allows more convenient access to them. 13.361 + */ 13.362 +struct t_rec **init_rec_ptrs(struct t_buf **meta, unsigned int num) 13.363 +{ 13.364 + int i; 13.365 + struct t_rec **data; 13.366 + 13.367 + data = calloc(num, sizeof(struct t_rec *)); 13.368 + if ( data == NULL ) 13.369 + { 13.370 + PERROR("Failed to allocate memory for data pointers\n"); 13.371 + exit(EXIT_FAILURE); 13.372 + } 13.373 + 13.374 + for ( i = 0; i < num; i++ ) 13.375 + data[i] = (struct t_rec *)(meta[i] + 1); 13.376 + 13.377 + return data; 13.378 +} 13.379 + 13.380 + 13.381 + 13.382 +/** 13.383 + * get_num_cpus - get the number of logical CPUs 13.384 + */ 13.385 +unsigned int get_num_cpus() 13.386 +{ 13.387 + dom0_op_t op; 13.388 + int xc_handle = xc_interface_open(); 13.389 + int ret; 13.390 + 13.391 + op.cmd = DOM0_PHYSINFO; 13.392 + op.interface_version = DOM0_INTERFACE_VERSION; 13.393 + 13.394 + ret = xc_dom0_op(xc_handle, &op); 13.395 + 13.396 + if ( ret != 0 ) 13.397 + { 13.398 + PERROR("Failure to get logical CPU count from Xen"); 13.399 + exit(EXIT_FAILURE); 13.400 + } 13.401 + 13.402 + xc_interface_close(xc_handle); 13.403 + opts.cpu_freq = (double)op.u.physinfo.cpu_khz/1000.0; 13.404 + 13.405 + return (op.u.physinfo.threads_per_core * 13.406 + op.u.physinfo.cores_per_socket * 13.407 + op.u.physinfo.sockets_per_node * 13.408 + op.u.physinfo.nr_nodes); 13.409 +} 13.410 + 13.411 + 13.412 +/** 13.413 + * monitor_tbufs - monitor the contents of tbufs 13.414 + */ 13.415 +int monitor_tbufs() 13.416 +{ 13.417 + int i; 13.418 + extern void process_record(int, struct t_rec *); 13.419 + extern void alloc_qos_data(int ncpu); 13.420 + 13.421 + void *tbufs_mapped; /* pointer to where the tbufs are mapped */ 13.422 + struct t_buf **meta; /* pointers to the trace buffer metadata */ 13.423 + struct t_rec **data; /* pointers to the trace buffer data areas 13.424 + * where they are mapped into user space. 
*/ 13.425 + unsigned long tbufs_mfn; /* mfn of the tbufs */ 13.426 + unsigned int num; /* number of trace buffers / logical CPUS */ 13.427 + unsigned long size; /* size of a single trace buffer */ 13.428 + 13.429 + int size_in_recs; 13.430 + 13.431 + /* get number of logical CPUs (and therefore number of trace buffers) */ 13.432 + num = get_num_cpus(); 13.433 + 13.434 + init_current(num); 13.435 + alloc_qos_data(num); 13.436 + 13.437 + printf("CPU Frequency = %7.2f\n", opts.cpu_freq); 13.438 + 13.439 + /* setup access to trace buffers */ 13.440 + get_tbufs(&tbufs_mfn, &size); 13.441 + 13.442 + // printf("from dom0op: %ld, t_buf: %d, t_rec: %d\n", 13.443 + // size, sizeof(struct t_buf), sizeof(struct t_rec)); 13.444 + 13.445 + tbufs_mapped = map_tbufs(tbufs_mfn, num, size); 13.446 + 13.447 + size_in_recs = (size - sizeof(struct t_buf)) / sizeof(struct t_rec); 13.448 + // fprintf(stderr, "size_in_recs = %d\n", size_in_recs); 13.449 + 13.450 + /* build arrays of convenience ptrs */ 13.451 + meta = init_bufs_ptrs (tbufs_mapped, num, size); 13.452 + data = init_rec_ptrs(meta, num); 13.453 + 13.454 + /* now, scan buffers for events */ 13.455 + while ( !interrupted ) 13.456 + { 13.457 + for ( i = 0; ( i < num ) && !interrupted; i++ ) 13.458 + while ( meta[i]->cons != meta[i]->prod ) 13.459 + { 13.460 + rmb(); /* read prod, then read item. */ 13.461 + process_record(i, data[i] + meta[i]->cons % size_in_recs); 13.462 + mb(); /* read item, then update cons. */ 13.463 + meta[i]->cons++; 13.464 + } 13.465 + 13.466 + nanosleep(&opts.poll_sleep, NULL); 13.467 + } 13.468 + 13.469 + /* cleanup */ 13.470 + free(meta); 13.471 + free(data); 13.472 + /* don't need to munmap - cleanup is automatic */ 13.473 + 13.474 + return 0; 13.475 +} 13.476 + 13.477 + 13.478 +/****************************************************************************** 13.479 + * Various declarations / definitions GNU argp needs to do its work 13.480 + *****************************************************************************/ 13.481 + 13.482 + 13.483 +/* command parser for GNU argp - see GNU docs for more info */ 13.484 +error_t cmd_parser(int key, char *arg, struct argp_state *state) 13.485 +{ 13.486 + settings_t *setup = (settings_t *)state->input; 13.487 + 13.488 + switch ( key ) 13.489 + { 13.490 + case 't': /* set new records threshold for logging */ 13.491 + { 13.492 + char *inval; 13.493 + setup->new_data_thresh = strtol(arg, &inval, 0); 13.494 + if ( inval == arg ) 13.495 + argp_usage(state); 13.496 + } 13.497 + break; 13.498 + 13.499 + case 's': /* set sleep time (given in milliseconds) */ 13.500 + { 13.501 + char *inval; 13.502 + setup->poll_sleep = millis_to_timespec(strtol(arg, &inval, 0)); 13.503 + if ( inval == arg ) 13.504 + argp_usage(state); 13.505 + } 13.506 + break; 13.507 + 13.508 + case 'm': /* set ms_per_sample */ 13.509 + { 13.510 + char *inval; 13.511 + setup->ms_per_sample = strtol(arg, &inval, 0); 13.512 + if ( inval == arg ) 13.513 + argp_usage(state); 13.514 + } 13.515 + break; 13.516 + 13.517 + case ARGP_KEY_ARG: 13.518 + { 13.519 + if ( state->arg_num == 0 ) 13.520 + setup->outfile = arg; 13.521 + else 13.522 + argp_usage(state); 13.523 + } 13.524 + break; 13.525 + 13.526 + default: 13.527 + return ARGP_ERR_UNKNOWN; 13.528 + } 13.529 + 13.530 + return 0; 13.531 +} 13.532 + 13.533 +#define SHARED_MEM_FILE "/tmp/xenq-shm" 13.534 +void alloc_qos_data(int ncpu) 13.535 +{ 13.536 + int i, n, pgsize, off=0; 13.537 + char *dummy; 13.538 + int qos_fd; 13.539 + void advance_next_datapoint(uint64_t); 13.540 + 
13.541 + cpu_qos_data = (_new_qos_data **) calloc(ncpu, sizeof(_new_qos_data *)); 13.542 + 13.543 + 13.544 + qos_fd = open(SHARED_MEM_FILE, O_RDWR|O_CREAT|O_TRUNC, 0777); 13.545 + if (qos_fd < 0) { 13.546 + PERROR(SHARED_MEM_FILE); 13.547 + exit(2); 13.548 + } 13.549 + pgsize = getpagesize(); 13.550 + dummy = malloc(pgsize); 13.551 + 13.552 + for (n=0; n<ncpu; n++) { 13.553 + 13.554 + for (i=0; i<sizeof(_new_qos_data); i=i+pgsize) 13.555 + write(qos_fd, dummy, pgsize); 13.556 + 13.557 + new_qos = (_new_qos_data *) mmap(0, sizeof(_new_qos_data), PROT_READ|PROT_WRITE, 13.558 + MAP_SHARED, qos_fd, off); 13.559 + off += i; 13.560 + if (new_qos == NULL) { 13.561 + PERROR("mmap"); 13.562 + exit(3); 13.563 + } 13.564 + // printf("new_qos = %p\n", new_qos); 13.565 + memset(new_qos, 0, sizeof(_new_qos_data)); 13.566 + new_qos->next_datapoint = 0; 13.567 + advance_next_datapoint(0); 13.568 + new_qos->structlen = i; 13.569 + new_qos->ncpu = ncpu; 13.570 + // printf("structlen = 0x%x\n", i); 13.571 + cpu_qos_data[n] = new_qos; 13.572 + } 13.573 + free(dummy); 13.574 + new_qos = NULL; 13.575 +} 13.576 + 13.577 + 13.578 +#define xstr(x) str(x) 13.579 +#define str(x) #x 13.580 + 13.581 +const struct argp_option cmd_opts[] = 13.582 +{ 13.583 + { .name = "log-thresh", .key='t', .arg="l", 13.584 + .doc = 13.585 + "Set number, l, of new records required to trigger a write to output " 13.586 + "(default " xstr(NEW_DATA_THRESH) ")." }, 13.587 + 13.588 + { .name = "poll-sleep", .key='s', .arg="p", 13.589 + .doc = 13.590 + "Set sleep time, p, in milliseconds between polling the trace buffer " 13.591 + "for new data (default " xstr(POLL_SLEEP_MILLIS) ")." }, 13.592 + 13.593 + { .name = "ms_per_sample", .key='m', .arg="MS", 13.594 + .doc = 13.595 + "Specify the number of milliseconds per sample " 13.596 + " (default " xstr(MS_PER_SAMPLE) ")." }, 13.597 + 13.598 + {0} 13.599 +}; 13.600 + 13.601 +const struct argp parser_def = 13.602 +{ 13.603 + .options = cmd_opts, 13.604 + .parser = cmd_parser, 13.605 + // .args_doc = "[output file]", 13.606 + .doc = 13.607 + "Tool to capture and partially process Xen trace buffer data" 13.608 + "\v" 13.609 + "This tool is used to capture trace buffer data from Xen. The data is " 13.610 + "saved in a shared memory structure to be further processed by xenmon." 
13.611 +}; 13.612 + 13.613 + 13.614 +const char *argp_program_version = "xenbaked v1.3"; 13.615 +const char *argp_program_bug_address = "<rob.gardner@hp.com>"; 13.616 + 13.617 + 13.618 +int main(int argc, char **argv) 13.619 +{ 13.620 + int ret; 13.621 + struct sigaction act; 13.622 + 13.623 + time(&start_time); 13.624 + opts.outfile = 0; 13.625 + opts.poll_sleep = millis_to_timespec(POLL_SLEEP_MILLIS); 13.626 + opts.new_data_thresh = NEW_DATA_THRESH; 13.627 + opts.ms_per_sample = MS_PER_SAMPLE; 13.628 + opts.cpu_freq = CPU_FREQ; 13.629 + 13.630 + argp_parse(&parser_def, argc, argv, 0, 0, &opts); 13.631 + fprintf(stderr, "ms_per_sample = %ld\n", opts.ms_per_sample); 13.632 + 13.633 + 13.634 + /* ensure that if we get a signal, we'll do cleanup, then exit */ 13.635 + act.sa_handler = close_handler; 13.636 + act.sa_flags = 0; 13.637 + sigemptyset(&act.sa_mask); 13.638 + sigaction(SIGHUP, &act, NULL); 13.639 + sigaction(SIGTERM, &act, NULL); 13.640 + sigaction(SIGINT, &act, NULL); 13.641 + 13.642 + ret = monitor_tbufs(); 13.643 + 13.644 + dump_stats(); 13.645 + msync(new_qos, sizeof(_new_qos_data), MS_SYNC); 13.646 + 13.647 + return ret; 13.648 +} 13.649 + 13.650 +int domain_runnable(int domid) 13.651 +{ 13.652 + return new_qos->domain_info[ID(domid)].runnable; 13.653 +} 13.654 + 13.655 + 13.656 +void update_blocked_time(int domid, uint64_t now) 13.657 +{ 13.658 + uint64_t t_blocked; 13.659 + int id = ID(domid); 13.660 + 13.661 + if (new_qos->domain_info[id].blocked_start_time != 0) { 13.662 + if (now >= new_qos->domain_info[id].blocked_start_time) 13.663 + t_blocked = now - new_qos->domain_info[id].blocked_start_time; 13.664 + else 13.665 + t_blocked = now + (~0ULL - new_qos->domain_info[id].blocked_start_time); 13.666 + new_qos->qdata[new_qos->next_datapoint].ns_blocked[id] += t_blocked; 13.667 + } 13.668 + 13.669 + if (domain_runnable(id)) 13.670 + new_qos->domain_info[id].blocked_start_time = 0; 13.671 + else 13.672 + new_qos->domain_info[id].blocked_start_time = now; 13.673 +} 13.674 + 13.675 + 13.676 +// advance to next datapoint for all domains 13.677 +void advance_next_datapoint(uint64_t now) 13.678 +{ 13.679 + int new, old, didx; 13.680 + 13.681 + old = new_qos->next_datapoint; 13.682 + new = QOS_INCR(old); 13.683 + new_qos->next_datapoint = new; 13.684 + // memset(&new_qos->qdata[new], 0, sizeof(uint64_t)*(2+5*NDOMAINS)); 13.685 + for (didx = 0; didx < NDOMAINS; didx++) { 13.686 + new_qos->qdata[new].ns_gotten[didx] = 0; 13.687 + new_qos->qdata[new].ns_allocated[didx] = 0; 13.688 + new_qos->qdata[new].ns_waiting[didx] = 0; 13.689 + new_qos->qdata[new].ns_blocked[didx] = 0; 13.690 + new_qos->qdata[new].switchin_count[didx] = 0; 13.691 + new_qos->qdata[new].io_count[didx] = 0; 13.692 + } 13.693 + new_qos->qdata[new].ns_passed = 0; 13.694 + new_qos->qdata[new].lost_records = 0; 13.695 + new_qos->qdata[new].flip_free_periods = 0; 13.696 + 13.697 + new_qos->qdata[new].timestamp = now; 13.698 +} 13.699 + 13.700 + 13.701 + 13.702 +void qos_update_thread(int cpu, int domid, uint64_t now) 13.703 +{ 13.704 + int n, id; 13.705 + uint64_t last_update_time, start; 13.706 + int64_t time_since_update, run_time = 0; 13.707 + 13.708 + id = ID(domid); 13.709 + 13.710 + n = new_qos->next_datapoint; 13.711 + last_update_time = new_qos->domain_info[id].last_update_time; 13.712 + 13.713 + time_since_update = now - last_update_time; 13.714 + 13.715 + if (time_since_update < 0) { 13.716 + // what happened here? 
either a timestamp wraparound, or more likely, 13.717 + // a slight inconsistency among timestamps from various cpu's 13.718 + if (-time_since_update < billion) { 13.719 + // fairly small difference, let's just adjust 'now' to be a little 13.720 + // beyond last_update_time 13.721 + time_since_update = -time_since_update; 13.722 + } 13.723 + else if ( ((~0ULL - last_update_time) < billion) && (now < billion) ) { 13.724 + // difference is huge, must be a wraparound 13.725 + // last_update time should be "near" ~0ULL, 13.726 + // and now should be "near" 0 13.727 + time_since_update = now + (~0ULL - last_update_time); 13.728 + printf("time wraparound\n"); 13.729 + } 13.730 + else { 13.731 + // none of the above, may be an out of order record 13.732 + // no good solution, just ignore and update again later 13.733 + return; 13.734 + } 13.735 + } 13.736 + 13.737 + new_qos->domain_info[id].last_update_time = now; 13.738 + 13.739 + if (new_qos->domain_info[id].runnable_at_last_update && is_current(domid, cpu)) { 13.740 + start = new_qos->domain_info[id].start_time; 13.741 + if (start > now) { // wrapped around 13.742 + run_time = now + (~0ULL - start); 13.743 + printf("warning: start > now\n"); 13.744 + } 13.745 + else 13.746 + run_time = now - start; 13.747 + // if (run_time < 0) // should not happen 13.748 + // printf("warning: run_time < 0; start = %lld now= %lld\n", start, now); 13.749 + new_qos->domain_info[id].ns_oncpu_since_boot += run_time; 13.750 + new_qos->domain_info[id].start_time = now; 13.751 + new_qos->domain_info[id].ns_since_boot += time_since_update; 13.752 +#if 1 13.753 + new_qos->qdata[n].ns_gotten[id] += run_time; 13.754 + if (domid == 0 && cpu == 1) 13.755 + printf("adding run time for dom0 on cpu1\r\n"); 13.756 +#endif 13.757 + } 13.758 + 13.759 + new_qos->domain_info[id].runnable_at_last_update = domain_runnable(domid); 13.760 + 13.761 + update_blocked_time(domid, now); 13.762 + 13.763 + // how much time passed since this datapoint was updated? 
13.764 + if (now >= new_qos->qdata[n].timestamp) { 13.765 + // all is right with the world, time is increasing 13.766 + new_qos->qdata[n].ns_passed += (now - new_qos->qdata[n].timestamp); 13.767 + } 13.768 + else { 13.769 + // time wrapped around 13.770 + //new_qos->qdata[n].ns_passed += (now + (~0LL - new_qos->qdata[n].timestamp)); 13.771 + // printf("why timewrap?\r\n"); 13.772 + } 13.773 + new_qos->qdata[n].timestamp = now; 13.774 +} 13.775 + 13.776 + 13.777 +// called by dump routines to update all structures 13.778 +void qos_update_all(uint64_t now, int cpu) 13.779 +{ 13.780 + int i; 13.781 + 13.782 + for (i=0; i<NDOMAINS; i++) 13.783 + if (new_qos->domain_info[i].in_use) 13.784 + qos_update_thread(cpu, i, now); 13.785 +} 13.786 + 13.787 + 13.788 +void qos_update_thread_stats(int cpu, int domid, uint64_t now) 13.789 +{ 13.790 + if (new_qos->qdata[new_qos->next_datapoint].ns_passed > (million*opts.ms_per_sample)) { 13.791 + qos_update_all(now, cpu); 13.792 + advance_next_datapoint(now); 13.793 + return; 13.794 + } 13.795 + qos_update_thread(cpu, domid, now); 13.796 +} 13.797 + 13.798 + 13.799 +void qos_init_domain(int cpu, int domid, uint64_t now) 13.800 +{ 13.801 + int i, id; 13.802 + 13.803 + id = ID(domid); 13.804 + 13.805 + if (new_qos->domain_info[id].in_use) 13.806 + return; 13.807 + 13.808 + 13.809 + memset(&new_qos->domain_info[id], 0, sizeof(_domain_info)); 13.810 + new_qos->domain_info[id].last_update_time = now; 13.811 + // runnable_start_time[id] = 0; 13.812 + new_qos->domain_info[id].runnable_start_time = 0; // invalidate 13.813 + new_qos->domain_info[id].in_use = 1; 13.814 + new_qos->domain_info[id].blocked_start_time = 0; 13.815 + new_qos->domain_info[id].id = id; 13.816 + if (domid == IDLE_DOMAIN_ID) 13.817 + sprintf(new_qos->domain_info[id].name, "Idle Task%d", cpu); 13.818 + else 13.819 + sprintf(new_qos->domain_info[id].name, "Domain#%d", domid); 13.820 + 13.821 + for (i=0; i<NSAMPLES; i++) { 13.822 + new_qos->qdata[i].ns_gotten[id] = 0; 13.823 + new_qos->qdata[i].ns_allocated[id] = 0; 13.824 + new_qos->qdata[i].ns_waiting[id] = 0; 13.825 + new_qos->qdata[i].ns_blocked[id] = 0; 13.826 + new_qos->qdata[i].switchin_count[id] = 0; 13.827 + new_qos->qdata[i].io_count[id] = 0; 13.828 + } 13.829 +} 13.830 + 13.831 + 13.832 +// called when a new thread gets the cpu 13.833 +void qos_switch_in(int cpu, int domid, uint64_t now, unsigned long ns_alloc, unsigned long ns_waited) 13.834 +{ 13.835 + int id = ID(domid); 13.836 + 13.837 + new_qos->domain_info[id].runnable = 1; 13.838 + update_blocked_time(domid, now); 13.839 + new_qos->domain_info[id].blocked_start_time = 0; // invalidate 13.840 + new_qos->domain_info[id].runnable_start_time = 0; // invalidate 13.841 + //runnable_start_time[id] = 0; 13.842 + 13.843 + new_qos->domain_info[id].start_time = now; 13.844 + new_qos->qdata[new_qos->next_datapoint].switchin_count[id]++; 13.845 + new_qos->qdata[new_qos->next_datapoint].ns_allocated[id] += ns_alloc; 13.846 + new_qos->qdata[new_qos->next_datapoint].ns_waiting[id] += ns_waited; 13.847 + qos_update_thread_stats(cpu, domid, now); 13.848 + set_current(cpu, id); 13.849 + 13.850 + // count up page flips for dom0 execution 13.851 + if (id == 0) 13.852 + dom0_flips = 0; 13.853 +} 13.854 + 13.855 +// called when the current thread is taken off the cpu 13.856 +void qos_switch_out(int cpu, int domid, uint64_t now, unsigned long gotten) 13.857 +{ 13.858 + int id = ID(domid); 13.859 + int n; 13.860 + 13.861 + if (!is_current(id, cpu)) { 13.862 + // printf("switching out domain %d but it is 
not current. gotten=%ld\r\n", id, gotten); 13.863 + } 13.864 + 13.865 + if (gotten == 0) { 13.866 + printf("gotten==0 in qos_switchout(domid=%d)\n", domid); 13.867 + } 13.868 + 13.869 + if (gotten < 100) { 13.870 + printf("gotten<100ns in qos_switchout(domid=%d)\n", domid); 13.871 + } 13.872 + 13.873 + 13.874 + n = new_qos->next_datapoint; 13.875 +#if 0 13.876 + new_qos->qdata[n].ns_gotten[id] += gotten; 13.877 + if (gotten > new_qos->qdata[n].ns_passed) 13.878 + printf("inconsistency #257, diff = %lld\n", 13.879 + gotten - new_qos->qdata[n].ns_passed ); 13.880 +#endif 13.881 + new_qos->domain_info[id].ns_oncpu_since_boot += gotten; 13.882 + new_qos->domain_info[id].runnable_start_time = now; 13.883 + // runnable_start_time[id] = now; 13.884 + qos_update_thread_stats(cpu, id, now); 13.885 + 13.886 + // process dom0 page flips 13.887 + if (id == 0) 13.888 + if (dom0_flips == 0) 13.889 + new_qos->qdata[n].flip_free_periods++; 13.890 +} 13.891 + 13.892 +// called when domain is put to sleep, may also be called 13.893 +// when thread is already asleep 13.894 +void qos_state_sleeping(int cpu, int domid, uint64_t now) 13.895 +{ 13.896 + int id = ID(domid); 13.897 + 13.898 + if (!domain_runnable(id)) // double call? 13.899 + return; 13.900 + 13.901 + new_qos->domain_info[id].runnable = 0; 13.902 + new_qos->domain_info[id].blocked_start_time = now; 13.903 + new_qos->domain_info[id].runnable_start_time = 0; // invalidate 13.904 + // runnable_start_time[id] = 0; // invalidate 13.905 + qos_update_thread_stats(cpu, domid, now); 13.906 +} 13.907 + 13.908 + 13.909 + 13.910 +void qos_kill_thread(int domid) 13.911 +{ 13.912 + new_qos->domain_info[ID(domid)].in_use = 0; 13.913 +} 13.914 + 13.915 + 13.916 +// called when thread becomes runnable, may also be called 13.917 +// when thread is already runnable 13.918 +void qos_state_runnable(int cpu, int domid, uint64_t now) 13.919 +{ 13.920 + int id = ID(domid); 13.921 + 13.922 + if (domain_runnable(id)) // double call? 
13.923 + return; 13.924 + new_qos->domain_info[id].runnable = 1; 13.925 + update_blocked_time(domid, now); 13.926 + 13.927 + qos_update_thread_stats(cpu, domid, now); 13.928 + 13.929 + new_qos->domain_info[id].blocked_start_time = 0; /* invalidate */ 13.930 + new_qos->domain_info[id].runnable_start_time = now; 13.931 + // runnable_start_time[id] = now; 13.932 +} 13.933 + 13.934 + 13.935 +void qos_count_packets(domid_t domid, uint64_t now) 13.936 +{ 13.937 + int i, id = ID(domid); 13.938 + _new_qos_data *cpu_data; 13.939 + 13.940 + for (i=0; i<NCPU; i++) { 13.941 + cpu_data = cpu_qos_data[i]; 13.942 + if (cpu_data->domain_info[id].in_use) { 13.943 + cpu_data->qdata[cpu_data->next_datapoint].io_count[id]++; 13.944 + } 13.945 + } 13.946 + 13.947 + new_qos->qdata[new_qos->next_datapoint].io_count[0]++; 13.948 + dom0_flips++; 13.949 +} 13.950 + 13.951 + 13.952 +int domain_ok(int cpu, int domid, uint64_t now) 13.953 +{ 13.954 + if (domid == IDLE_DOMAIN_ID) 13.955 + domid = NDOMAINS-1; 13.956 + if (domid < 0 || domid >= NDOMAINS) { 13.957 + printf("bad domain id: %d\n", domid); 13.958 + return 0; 13.959 + } 13.960 + if (new_qos->domain_info[domid].in_use == 0) 13.961 + qos_init_domain(cpu, domid, now); 13.962 + return 1; 13.963 +} 13.964 + 13.965 + 13.966 +void process_record(int cpu, struct t_rec *r) 13.967 +{ 13.968 + uint64_t now; 13.969 + 13.970 + 13.971 + new_qos = cpu_qos_data[cpu]; 13.972 + 13.973 + rec_count++; 13.974 + 13.975 + now = ((double)r->cycles) / (opts.cpu_freq / 1000.0); 13.976 + 13.977 + log_event(r->event); 13.978 + 13.979 + switch (r->event) { 13.980 + 13.981 + case TRC_SCHED_SWITCH_INFPREV: 13.982 + // domain data[0] just switched out and received data[1] ns of cpu time 13.983 + if (domain_ok(cpu, r->data[0], now)) 13.984 + qos_switch_out(cpu, r->data[0], now, r->data[1]); 13.985 + // printf("ns_gotten %ld\n", r->data[1]); 13.986 + break; 13.987 + 13.988 + case TRC_SCHED_SWITCH_INFNEXT: 13.989 + // domain data[0] just switched in and 13.990 + // waited data[1] ns, and was allocated data[2] ns of cpu time 13.991 + if (domain_ok(cpu, r->data[0], now)) 13.992 + qos_switch_in(cpu, r->data[0], now, r->data[2], r->data[1]); 13.993 + break; 13.994 + 13.995 + case TRC_SCHED_DOM_ADD: 13.996 + if (domain_ok(cpu, r->data[0], now)) 13.997 + qos_init_domain(cpu, r->data[0], now); 13.998 + break; 13.999 + 13.1000 + case TRC_SCHED_DOM_REM: 13.1001 + if (domain_ok(cpu, r->data[0], now)) 13.1002 + qos_kill_thread(r->data[0]); 13.1003 + break; 13.1004 + 13.1005 + case TRC_SCHED_SLEEP: 13.1006 + if (domain_ok(cpu, r->data[0], now)) 13.1007 + qos_state_sleeping(cpu, r->data[0], now); 13.1008 + break; 13.1009 + 13.1010 + case TRC_SCHED_WAKE: 13.1011 + if (domain_ok(cpu, r->data[0], now)) 13.1012 + qos_state_runnable(cpu, r->data[0], now); 13.1013 + break; 13.1014 + 13.1015 + case TRC_SCHED_BLOCK: 13.1016 + if (domain_ok(cpu, r->data[0], now)) 13.1017 + qos_state_sleeping(cpu, r->data[0], now); 13.1018 + break; 13.1019 + 13.1020 + case TRC_MEM_PAGE_GRANT_TRANSFER: 13.1021 + if (domain_ok(cpu, r->data[0], now)) 13.1022 + qos_count_packets(r->data[0], now); 13.1023 + break; 13.1024 + 13.1025 + default: 13.1026 + break; 13.1027 + } 13.1028 + new_qos = NULL; 13.1029 +} 13.1030 + 13.1031 + 13.1032 +
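xenbaked keeps the CPU frequency in MHz (cpu_khz/1000.0 as returned by DOM0_PHYSINFO in get_num_cpus) and, in process_record(), converts the cycle counter of each trace record to nanoseconds by dividing by cpu_freq/1000, i.e. cycles per nanosecond. A small sketch of that arithmetic, with hypothetical numbers:

    # Illustration of the timestamp conversion in process_record(); the
    # frequency and cycle count below are hypothetical.
    cpu_khz = 2660000                 # as DOM0_PHYSINFO might report for a 2.66 GHz CPU
    cpu_mhz = cpu_khz / 1000.0        # what xenbaked stores in opts.cpu_freq
    cycles  = 5320000                 # TSC delta taken from a trace record

    ns = cycles / (cpu_mhz / 1000.0)  # cpu_mhz/1000 is cycles per nanosecond
    print("%d cycles at %.0f MHz = %.0f ns (%.3f ms)" % (cycles, cpu_mhz, ns, ns / 1e6))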
14.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 14.2 +++ b/tools/xenmon/xenbaked.h Tue Nov 15 16:24:31 2005 +0100 14.3 @@ -0,0 +1,101 @@ 14.4 +/****************************************************************************** 14.5 + * tools/xenbaked.h 14.6 + * 14.7 + * Header file for xenbaked 14.8 + * 14.9 + * Copyright (C) 2005 by Hewlett Packard, Palo Alto and Fort Collins 14.10 + * 14.11 + * Authors: Diwaker Gupta, diwaker.gupta@hp.com 14.12 + * Rob Gardner, rob.gardner@hp.com 14.13 + * Lucy Cherkasova, lucy.cherkasova.hp.com 14.14 + * 14.15 + * This program is free software; you can redistribute it and/or modify 14.16 + * it under the terms of the GNU General Public License as published by 14.17 + * the Free Software Foundation; under version 2 of the License. 14.18 + * 14.19 + * This program is distributed in the hope that it will be useful, 14.20 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14.21 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14.22 + * GNU General Public License for more details. 14.23 + * 14.24 + * You should have received a copy of the GNU General Public License 14.25 + * along with this program; if not, write to the Free Software 14.26 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 14.27 + */ 14.28 + 14.29 +#ifndef __QOS_H__ 14.30 +#define __QOS_H__ 14.31 + 14.32 +///// qos stuff 14.33 +#define million 1000000LL 14.34 +#define billion 1000000000LL 14.35 + 14.36 +#define QOS_ADD(N,A) ((N+A)<(NSAMPLES-1) ? (N+A) : A) 14.37 +#define QOS_INCR(N) ((N<(NSAMPLES-2)) ? (N+1) : 0) 14.38 +#define QOS_DECR(N) ((N==0) ? (NSAMPLES-1) : (N-1)) 14.39 + 14.40 +#define MAX_NAME_SIZE 32 14.41 +#define IDLE_DOMAIN_ID 32767 14.42 + 14.43 +/* Number of domains we can keep track of in memory */ 14.44 +#define NDOMAINS 32 14.45 + 14.46 +/* Number of data points to keep */ 14.47 +#define NSAMPLES 100 14.48 + 14.49 + 14.50 +// per domain stuff 14.51 +typedef struct 14.52 +{ 14.53 + uint64_t last_update_time; 14.54 + uint64_t start_time; // when the thread started running 14.55 + uint64_t runnable_start_time; // when the thread became runnable 14.56 + uint64_t blocked_start_time; // when the thread became blocked 14.57 + uint64_t ns_since_boot; // time gone by since boot 14.58 + uint64_t ns_oncpu_since_boot; // total cpu time used by thread since boot 14.59 + // uint64_t ns_runnable_since_boot; 14.60 + int runnable_at_last_update; // true if the thread was runnable last time we checked. 
14.61 + int runnable; // true if thread is runnable right now 14.62 + // tells us something about what happened during the 14.63 + // sample period that we are analysing right now 14.64 + int in_use; // 14.65 + domid_t id; 14.66 + char name[MAX_NAME_SIZE]; 14.67 +} _domain_info; 14.68 + 14.69 + 14.70 + 14.71 +typedef struct 14.72 +{ 14.73 + struct 14.74 + { 14.75 +// data point: 14.76 +// stuff that is recorded once for each measurement interval 14.77 + uint64_t ns_gotten[NDOMAINS]; // ns used in the last sample period 14.78 + uint64_t ns_allocated[NDOMAINS]; // ns allocated by scheduler 14.79 + uint64_t ns_waiting[NDOMAINS]; // ns spent waiting to execute, ie, time from 14.80 + // becoming runnable until actually running 14.81 + uint64_t ns_blocked[NDOMAINS]; // ns spent blocked 14.82 + uint64_t switchin_count[NDOMAINS]; // number of executions of the domain 14.83 + uint64_t io_count[NDOMAINS]; 14.84 + uint64_t ns_passed; // ns gone by on the wall clock, ie, the sample period 14.85 + uint64_t timestamp; 14.86 + uint64_t lost_records; // # of lost trace records this time period 14.87 + uint64_t flip_free_periods; // # of executions of dom0 in which no page flips happened 14.88 + } qdata[NSAMPLES]; 14.89 + 14.90 + _domain_info domain_info[NDOMAINS]; 14.91 + 14.92 + // control information 14.93 + int next_datapoint; 14.94 + int ncpu; 14.95 + int structlen; 14.96 + 14.97 + // parameters 14.98 + int measurement_frequency; // for example 14.99 + 14.100 +} _new_qos_data; 14.101 + 14.102 + 14.103 + 14.104 +#endif
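The shared-memory layout declared in xenbaked.h is what xenmon.py unpacks with Python's struct module: each _domain_info is read with the format "6Q4i32s" (six 64-bit counters, four ints, the 32-byte name) and each qdata sample as 6*NDOMAINS + 4 unsigned 64-bit values, followed by the four control ints. Below is a minimal sketch of that size bookkeeping; note that the C compiler may insert padding, which is one reason xenbaked also publishes structlen so that readers can locate each CPU's copy of the structure:

    import struct

    NDOMAINS = 32
    NSAMPLES = 100

    ST_DOM_INFO = "6Q4i32s"                    # _domain_info as xenmon.py reads it
    ST_QDATA    = "%dQ" % (6 * NDOMAINS + 4)   # one qdata sample: 6 per-domain arrays + 4 scalars

    qdata_bytes   = struct.calcsize(ST_QDATA) * NSAMPLES
    dominfo_bytes = struct.calcsize(ST_DOM_INFO) * NDOMAINS
    control_bytes = struct.calcsize("4i")      # next_datapoint, ncpu, structlen, measurement_frequency

    print("qdata:   %d bytes" % qdata_bytes)
    print("dominfo: %d bytes" % dominfo_bytes)
    print("control: %d bytes" % control_bytes)
    print("total (QOS_DATA_SIZE): %d bytes" % (qdata_bytes + dominfo_bytes + control_bytes))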
15.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 15.2 +++ b/tools/xenmon/xenmon.py Tue Nov 15 16:24:31 2005 +0100 15.3 @@ -0,0 +1,578 @@ 15.4 +#!/usr/bin/env python 15.5 + 15.6 +##################################################################### 15.7 +# xenmon is a front-end for xenbaked. 15.8 +# There is a curses interface for live monitoring. XenMon also allows 15.9 +# logging to a file. For options, run python xenmon.py -h 15.10 +# 15.11 +# Copyright (C) 2005 by Hewlett Packard, Palo Alto and Fort Collins 15.12 +# Authors: Lucy Cherkasova, lucy.cherkasova@hp.com 15.13 +# Rob Gardner, rob.gardner@hp.com 15.14 +# Diwaker Gupta, diwaker.gupta@hp.com 15.15 +##################################################################### 15.16 +# This program is free software; you can redistribute it and/or modify 15.17 +# it under the terms of the GNU General Public License as published by 15.18 +# the Free Software Foundation; under version 2 of the License. 15.19 +# 15.20 +# This program is distributed in the hope that it will be useful, 15.21 +# but WITHOUT ANY WARRANTY; without even the implied warranty of 15.22 +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15.23 +# GNU General Public License for more details. 15.24 +# 15.25 +# You should have received a copy of the GNU General Public License 15.26 +# along with this program; if not, write to the Free Software 15.27 +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 15.28 +##################################################################### 15.29 + 15.30 +import mmap 15.31 +import struct 15.32 +import os 15.33 +import time 15.34 +import optparse as _o 15.35 +import curses as _c 15.36 +import math 15.37 +import sys 15.38 + 15.39 +# constants 15.40 +NSAMPLES = 100 15.41 +NDOMAINS = 32 15.42 + 15.43 +# the struct strings for qos_info 15.44 +ST_DOM_INFO = "6Q4i32s" 15.45 +ST_QDATA = "%dQ" % (6*NDOMAINS + 4) 15.46 + 15.47 +# size of mmaped file 15.48 +QOS_DATA_SIZE = struct.calcsize(ST_QDATA)*NSAMPLES + struct.calcsize(ST_DOM_INFO)*NDOMAINS + struct.calcsize("4i") 15.49 + 15.50 +# location of mmaped file, hard coded right now 15.51 +SHM_FILE = "/tmp/xenq-shm" 15.52 + 15.53 +# format strings 15.54 +TOTALS = 15*' ' + "%6.2f%%" + 35*' ' + "%6.2f%%" 15.55 + 15.56 +ALLOCATED = "Allocated" 15.57 +GOTTEN = "Gotten" 15.58 +BLOCKED = "Blocked" 15.59 +WAITED = "Waited" 15.60 +IOCOUNT = "I/O Count" 15.61 +EXCOUNT = "Exec Count" 15.62 + 15.63 +# globals 15.64 +# our curses screen 15.65 +stdscr = None 15.66 + 15.67 +# parsed options 15.68 +options, args = None, None 15.69 + 15.70 +# the optparse module is quite smart 15.71 +# to see help, just run xenmon -h 15.72 +def setup_cmdline_parser(): 15.73 + parser = _o.OptionParser() 15.74 + parser.add_option("-l", "--live", dest="live", action="store_true", 15.75 + default=True, help = "show the ncurses live monitoring frontend (default)") 15.76 + parser.add_option("-n", "--notlive", dest="live", action="store_false", 15.77 + default="True", help = "write to file instead of live monitoring") 15.78 + parser.add_option("-p", "--prefix", dest="prefix", 15.79 + default = "log", help="prefix to use for output files") 15.80 + parser.add_option("-t", "--time", dest="duration", 15.81 + action="store", type="int", default=10, 15.82 + help="stop logging to file after this much time has elapsed (in seconds). 
set to 0 to keep logging indefinitely") 15.83 + parser.add_option("-i", "--interval", dest="interval", 15.84 + action="store", type="int", default=1000, 15.85 + help="interval for logging (in ms)") 15.86 + parser.add_option("--ms_per_sample", dest="mspersample", 15.87 + action="store", type="int", default=100, 15.88 + help = "determines how many ms worth of data goes in a sample") 15.89 + return parser 15.90 + 15.91 +# encapsulate information about a domain 15.92 +class DomainInfo: 15.93 + def __init__(self): 15.94 + self.allocated_samples = [] 15.95 + self.gotten_samples = [] 15.96 + self.blocked_samples = [] 15.97 + self.waited_samples = [] 15.98 + self.execcount_samples = [] 15.99 + self.iocount_samples = [] 15.100 + self.ffp_samples = [] 15.101 + 15.102 + def gotten_stats(self, passed): 15.103 + total = float(sum(self.gotten_samples)) 15.104 + per = 100*total/passed 15.105 + exs = sum(self.execcount_samples) 15.106 + if exs > 0: 15.107 + avg = total/exs 15.108 + else: 15.109 + avg = 0 15.110 + return [total/(float(passed)/10**9), per, avg] 15.111 + 15.112 + def waited_stats(self, passed): 15.113 + total = float(sum(self.waited_samples)) 15.114 + per = 100*total/passed 15.115 + exs = sum(self.execcount_samples) 15.116 + if exs > 0: 15.117 + avg = total/exs 15.118 + else: 15.119 + avg = 0 15.120 + return [total/(float(passed)/10**9), per, avg] 15.121 + 15.122 + def blocked_stats(self, passed): 15.123 + total = float(sum(self.blocked_samples)) 15.124 + per = 100*total/passed 15.125 + ios = sum(self.iocount_samples) 15.126 + if ios > 0: 15.127 + avg = total/float(ios) 15.128 + else: 15.129 + avg = 0 15.130 + return [total/(float(passed)/10**9), per, avg] 15.131 + 15.132 + def allocated_stats(self, passed): 15.133 + total = sum(self.allocated_samples) 15.134 + exs = sum(self.execcount_samples) 15.135 + if exs > 0: 15.136 + return float(total)/exs 15.137 + else: 15.138 + return 0 15.139 + 15.140 + def ec_stats(self, passed): 15.141 + total = float(sum(self.execcount_samples))/(float(passed)/10**9) 15.142 + return total 15.143 + 15.144 + def io_stats(self, passed): 15.145 + total = float(sum(self.iocount_samples)) 15.146 + exs = sum(self.execcount_samples) 15.147 + if exs > 0: 15.148 + avg = total/exs 15.149 + else: 15.150 + avg = 0 15.151 + return [total/(float(passed)/10**9), avg] 15.152 + 15.153 + def stats(self, passed): 15.154 + return [self.gotten_stats(passed), self.allocated_stats(passed), self.blocked_stats(passed), 15.155 + self.waited_stats(passed), self.ec_stats(passed), self.io_stats(passed)] 15.156 + 15.157 +# report values over desired interval 15.158 +def summarize(startat, endat, duration, samples): 15.159 + dominfos = {} 15.160 + for i in range(0, NDOMAINS): 15.161 + dominfos[i] = DomainInfo() 15.162 + 15.163 + passed = 1 # to prevent zero division 15.164 + curid = startat 15.165 + numbuckets = 0 15.166 + lost_samples = [] 15.167 + ffp_samples = [] 15.168 + 15.169 + while passed < duration: 15.170 + for i in range(0, NDOMAINS): 15.171 + dominfos[i].gotten_samples.append(samples[curid][0*NDOMAINS + i]) 15.172 + dominfos[i].allocated_samples.append(samples[curid][1*NDOMAINS + i]) 15.173 + dominfos[i].waited_samples.append(samples[curid][2*NDOMAINS + i]) 15.174 + dominfos[i].blocked_samples.append(samples[curid][3*NDOMAINS + i]) 15.175 + dominfos[i].execcount_samples.append(samples[curid][4*NDOMAINS + i]) 15.176 + dominfos[i].iocount_samples.append(samples[curid][5*NDOMAINS + i]) 15.177 + 15.178 + passed += samples[curid][6*NDOMAINS] 15.179 + 
lost_samples.append(samples[curid][6*NDOMAINS + 2]) 15.180 + ffp_samples.append(samples[curid][6*NDOMAINS + 3]) 15.181 + 15.182 + numbuckets += 1 15.183 + 15.184 + if curid > 0: 15.185 + curid -= 1 15.186 + else: 15.187 + curid = NSAMPLES - 1 15.188 + if curid == endat: 15.189 + break 15.190 + 15.191 + lostinfo = [min(lost_samples), sum(lost_samples), max(lost_samples)] 15.192 + ffpinfo = [min(ffp_samples), sum(ffp_samples), max(ffp_samples)] 15.193 + ldoms = map(lambda x: dominfos[x].stats(passed), range(0, NDOMAINS)) 15.194 + 15.195 + return [ldoms, lostinfo, ffpinfo] 15.196 + 15.197 +# scale microseconds to milliseconds or seconds as necessary 15.198 +def time_scale(ns): 15.199 + if ns < 1000: 15.200 + return "%4.2f ns" % float(ns) 15.201 + elif ns < 1000*1000: 15.202 + return "%4.2f us" % (float(ns)/10**3) 15.203 + elif ns < 10**9: 15.204 + return "%4.2f ms" % (float(ns)/10**6) 15.205 + else: 15.206 + return "%4.2f s" % (float(ns)/10**9) 15.207 + 15.208 +# paint message on curses screen, but detect screen size errors 15.209 +def display(scr, row, col, str, attr=0): 15.210 + try: 15.211 + scr.addstr(row, col, str, attr) 15.212 + except: 15.213 + scr.erase() 15.214 + _c.nocbreak() 15.215 + scr.keypad(0) 15.216 + _c.echo() 15.217 + _c.endwin() 15.218 + print "Your terminal screen is not big enough; Please resize it." 15.219 + print "row=%d, col=%d, str='%s'" % (row, col, str) 15.220 + sys.exit(1) 15.221 + 15.222 + 15.223 +# the live monitoring code 15.224 +def show_livestats(): 15.225 + cpu = 0 # cpu of interest to display data for 15.226 + ncpu = 1 # number of cpu's on this platform 15.227 + slen = 0 # size of shared data structure, incuding padding 15.228 + 15.229 + # mmap the (the first chunk of the) file 15.230 + shmf = open(SHM_FILE, "r+") 15.231 + shm = mmap.mmap(shmf.fileno(), QOS_DATA_SIZE) 15.232 + 15.233 + samples = [] 15.234 + doms = [] 15.235 + 15.236 + # initialize curses 15.237 + stdscr = _c.initscr() 15.238 + _c.noecho() 15.239 + _c.cbreak() 15.240 + 15.241 + stdscr.keypad(1) 15.242 + stdscr.timeout(1000) 15.243 + [maxy, maxx] = stdscr.getmaxyx() 15.244 + 15.245 + 15.246 + 15.247 + # display in a loop 15.248 + while True: 15.249 + 15.250 + for cpuidx in range(0, ncpu): 15.251 + 15.252 + # calculate offset in mmap file to start from 15.253 + idx = cpuidx * slen 15.254 + 15.255 + 15.256 + samples = [] 15.257 + doms = [] 15.258 + 15.259 + # read in data 15.260 + for i in range(0, NSAMPLES): 15.261 + len = struct.calcsize(ST_QDATA) 15.262 + sample = struct.unpack(ST_QDATA, shm[idx:idx+len]) 15.263 + samples.append(sample) 15.264 + idx += len 15.265 + 15.266 + for i in range(0, NDOMAINS): 15.267 + len = struct.calcsize(ST_DOM_INFO) 15.268 + dom = struct.unpack(ST_DOM_INFO, shm[idx:idx+len]) 15.269 + doms.append(dom) 15.270 + idx += len 15.271 + 15.272 + len = struct.calcsize("4i") 15.273 + oldncpu = ncpu 15.274 + (next, ncpu, slen, freq) = struct.unpack("4i", shm[idx:idx+len]) 15.275 + idx += len 15.276 + 15.277 + # xenbaked tells us how many cpu's it's got, so re-do 15.278 + # the mmap if necessary to get multiple cpu data 15.279 + if oldncpu != ncpu: 15.280 + shm = mmap.mmap(shmf.fileno(), ncpu*slen) 15.281 + 15.282 + # if we've just calculated data for the cpu of interest, then 15.283 + # stop examining mmap data and start displaying stuff 15.284 + if cpuidx == cpu: 15.285 + break 15.286 + 15.287 + # calculate starting and ending datapoints; never look at "next" since 15.288 + # it represents live data that may be in transition. 
15.289 + startat = next - 1
15.290 + if next + 10 < NSAMPLES:
15.291 + endat = next + 10
15.292 + else:
15.293 + endat = 10
15.294 +
15.295 + # get summary over desired interval
15.296 + [h1, l1, f1] = summarize(startat, endat, 10**9, samples)
15.297 + [h2, l2, f2] = summarize(startat, endat, 10 * 10**9, samples)
15.298 +
15.299 + # the actual display code
15.300 + row = 0
15.301 + display(stdscr, row, 1, "CPU = %d" % cpu, _c.A_STANDOUT)
15.302 +
15.303 + display(stdscr, row, 10, "%sLast 10 seconds%sLast 1 second" % (6*' ', 30*' '), _c.A_BOLD)
15.304 + row +=1
15.305 + display(stdscr, row, 1, "%s" % ((maxx-2)*'='))
15.306 +
15.307 + total_h1_cpu = 0
15.308 + total_h2_cpu = 0
15.309 +
15.310 + for dom in range(0, NDOMAINS):
15.311 + if h1[dom][0][1] > 0 or dom == NDOMAINS - 1:
15.312 + # display gotten
15.313 + row += 1
15.314 + col = 2
15.315 + display(stdscr, row, col, "%d" % dom)
15.316 + col += 4
15.317 + display(stdscr, row, col, "%s" % time_scale(h2[dom][0][0]))
15.318 + col += 12
15.319 + display(stdscr, row, col, "%3.2f%%" % h2[dom][0][1])
15.320 + col += 12
15.321 + display(stdscr, row, col, "%s/ex" % time_scale(h2[dom][0][2]))
15.322 + col += 18
15.323 + display(stdscr, row, col, "%s" % time_scale(h1[dom][0][0]))
15.324 + col += 12
15.325 + display(stdscr, row, col, "%3.2f%%" % h1[dom][0][1])
15.326 + col += 12
15.327 + display(stdscr, row, col, "%s/ex" % time_scale(h1[dom][0][2]))
15.328 + col += 18
15.329 + display(stdscr, row, col, "Gotten")
15.330 +
15.331 + # display allocated
15.332 + row += 1
15.333 + col = 2
15.334 + display(stdscr, row, col, "%d" % dom)
15.335 + col += 28
15.336 + display(stdscr, row, col, "%s/ex" % time_scale(h2[dom][1]))
15.337 + col += 42
15.338 + display(stdscr, row, col, "%s/ex" % time_scale(h1[dom][1]))
15.339 + col += 18
15.340 + display(stdscr, row, col, "Allocated")
15.341 +
15.342 + # display blocked
15.343 + row += 1
15.344 + col = 2
15.345 + display(stdscr, row, col, "%d" % dom)
15.346 + col += 4
15.347 + display(stdscr, row, col, "%s" % time_scale(h2[dom][2][0]))
15.348 + col += 12
15.349 + display(stdscr, row, col, "%3.2f%%" % h2[dom][2][1])
15.350 + col += 12
15.351 + display(stdscr, row, col, "%s/io" % time_scale(h2[dom][2][2]))
15.352 + col += 18
15.353 + display(stdscr, row, col, "%s" % time_scale(h1[dom][2][0]))
15.354 + col += 12
15.355 + display(stdscr, row, col, "%3.2f%%" % h1[dom][2][1])
15.356 + col += 12
15.357 + display(stdscr, row, col, "%s/io" % time_scale(h1[dom][2][2]))
15.358 + col += 18
15.359 + display(stdscr, row, col, "Blocked")
15.360 +
15.361 + # display waited
15.362 + row += 1
15.363 + col = 2
15.364 + display(stdscr, row, col, "%d" % dom)
15.365 + col += 4
15.366 + display(stdscr, row, col, "%s" % time_scale(h2[dom][3][0]))
15.367 + col += 12
15.368 + display(stdscr, row, col, "%3.2f%%" % h2[dom][3][1])
15.369 + col += 12
15.370 + display(stdscr, row, col, "%s/ex" % time_scale(h2[dom][3][2]))
15.371 + col += 18
15.372 + display(stdscr, row, col, "%s" % time_scale(h1[dom][3][0]))
15.373 + col += 12
15.374 + display(stdscr, row, col, "%3.2f%%" % h1[dom][3][1])
15.375 + col += 12
15.376 + display(stdscr, row, col, "%s/ex" % time_scale(h1[dom][3][2]))
15.377 + col += 18
15.378 + display(stdscr, row, col, "Waited")
15.379 +
15.380 + # display ex count
15.381 + row += 1
15.382 + col = 2
15.383 + display(stdscr, row, col, "%d" % dom)
15.384 +
15.385 + col += 28
15.386 + display(stdscr, row, col, "%d/s" % h2[dom][4])
15.387 + col += 42
15.388 + display(stdscr, row, col, "%d" % h1[dom][4])
15.389 + col += 18
15.390 + display(stdscr, row, col, "Execution count")
15.391 +
15.392 + # display io count
15.393 + row += 1
15.394 + col = 2
15.395 + display(stdscr, row, col, "%d" % dom)
15.396 + col += 4
15.397 + display(stdscr, row, col, "%d/s" % h2[dom][5][0])
15.398 + col += 24
15.399 + display(stdscr, row, col, "%d/ex" % h2[dom][5][1])
15.400 + col += 18
15.401 + display(stdscr, row, col, "%d" % h1[dom][5][0])
15.402 + col += 24
15.403 + display(stdscr, row, col, "%3.2f/ex" % h1[dom][5][1])
15.404 + col += 18
15.405 + display(stdscr, row, col, "I/O Count")
15.406 +
15.407 + #row += 1
15.408 + #stdscr.hline(row, 1, '-', maxx - 2)
15.409 + total_h1_cpu += h1[dom][0][1]
15.410 + total_h2_cpu += h2[dom][0][1]
15.411 +
15.412 +
15.413 + row += 1
15.414 + display(stdscr, row, 2, TOTALS % (total_h2_cpu, total_h1_cpu))
15.415 + row += 1
15.416 +# display(stdscr, row, 2,
15.417 +# "\tFFP: %d (Min: %d, Max: %d)\t\t\tFFP: %d (Min: %d, Max %d)" %
15.418 +# (math.ceil(f2[1]), f2[0], f2[2], math.ceil(f1[1]), f1[0], f1[2]), _c.A_BOLD)
15.419 +
15.420 + if l1[1] > 1 :
15.421 + row += 1
15.422 + display(stdscr, row, 2,
15.423 + "\tRecords lost: %d (Min: %d, Max: %d)\t\t\tRecords lost: %d (Min: %d, Max %d)" %
15.424 + (math.ceil(l2[1]), l2[0], l2[2], math.ceil(l1[1]), l1[0], l1[2]), _c.A_BOLD)
15.425 +
15.426 + # grab a char from tty input; exit if interrupt hit
15.427 + try:
15.428 + c = stdscr.getch()
15.429 + except:
15.430 + break
15.431 +
15.432 + # q = quit
15.433 + if c == ord('q'):
15.434 + break
15.435 +
15.436 + # c = cycle to a new cpu of interest
15.437 + if c == ord('c'):
15.438 + cpu = (cpu + 1) % ncpu
15.439 +
15.440 + stdscr.erase()
15.441 +
15.442 + _c.nocbreak()
15.443 + stdscr.keypad(0)
15.444 + _c.echo()
15.445 + _c.endwin()
15.446 + shm.close()
15.447 + shmf.close()
15.448 +
15.449 +
15.450 +# simple functions to allow initialization of log files without actually
15.451 +# physically creating files that are never used; only on the first real
15.452 +# write does the file get created
15.453 +class Delayed(file):
15.454 + def __init__(self, filename, mode):
15.455 + self.filename = filename
15.456 + self.saved_mode = mode
15.457 + self.delay_data = ""
15.458 + self.opened = 0
15.459 +
15.460 + def delayed_write(self, str):
15.461 + self.delay_data = str
15.462 +
15.463 + def write(self, str):
15.464 + if not self.opened:
15.465 + self.file = open(self.filename, self.saved_mode)
15.466 + self.opened = 1
15.467 + self.file.write(self.delay_data)
15.468 + self.file.write(str)
15.469 +
15.470 + def flush(self):
15.471 + if self.opened:
15.472 + self.file.flush()
15.473 +
15.474 + def close(self):
15.475 + if self.opened:
15.476 + self.file.close()
15.477 +
15.478 +
15.479 +def writelog():
15.480 + global options
15.481 +
15.482 + ncpu = 1 # number of cpu's
15.483 + slen = 0 # size of shared structure inc. padding
15.484 +
15.485 + shmf = open(SHM_FILE, "r+")
15.486 + shm = mmap.mmap(shmf.fileno(), QOS_DATA_SIZE)
15.487 +
15.488 + interval = 0
15.489 + outfiles = {}
15.490 + for dom in range(0, NDOMAINS):
15.491 + outfiles[dom] = Delayed("%s-dom%d.log" % (options.prefix, dom), 'w')
15.492 + outfiles[dom].delayed_write("# passed cpu dom cpu(tot) cpu(%) cpu/ex allocated/ex blocked(tot) blocked(%) blocked/io waited(tot) waited(%) waited/ex ex/s io(tot) io/ex\n")
15.493 +
15.494 + while options.duration == 0 or interval < (options.duration * 1000):
15.495 + for cpuidx in range(0, ncpu):
15.496 + idx = cpuidx * slen # offset needed in mmap file
15.497 +
15.498 +
15.499 + samples = []
15.500 + doms = []
15.501 +
15.502 + for i in range(0, NSAMPLES):
15.503 + len = struct.calcsize(ST_QDATA)
15.504 + sample = struct.unpack(ST_QDATA, shm[idx:idx+len])
15.505 + samples.append(sample)
15.506 + idx += len
15.507 +
15.508 + for i in range(0, NDOMAINS):
15.509 + len = struct.calcsize(ST_DOM_INFO)
15.510 + dom = struct.unpack(ST_DOM_INFO, shm[idx:idx+len])
15.511 + doms.append(dom)
15.512 + idx += len
15.513 +
15.514 + len = struct.calcsize("4i")
15.515 + oldncpu = ncpu
15.516 + (next, ncpu, slen, freq) = struct.unpack("4i", shm[idx:idx+len])
15.517 + idx += len
15.518 +
15.519 + if oldncpu != ncpu:
15.520 + shm = mmap.mmap(shmf.fileno(), ncpu*slen)
15.521 +
15.522 + startat = next - 1
15.523 + if next + 10 < NSAMPLES:
15.524 + endat = next + 10
15.525 + else:
15.526 + endat = 10
15.527 +
15.528 + [h1,l1, f1] = summarize(startat, endat, options.interval * 10**6, samples)
15.529 + for dom in range(0, NDOMAINS):
15.530 + if h1[dom][0][1] > 0 or dom == NDOMAINS - 1:
15.531 + outfiles[dom].write("%.3f %d %d %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f %.3f\n" %
15.532 + (interval, cpuidx, dom,
15.533 + h1[dom][0][0], h1[dom][0][1], h1[dom][0][2],
15.534 + h1[dom][1],
15.535 + h1[dom][2][0], h1[dom][2][1], h1[dom][2][2],
15.536 + h1[dom][3][0], h1[dom][3][1], h1[dom][3][2],
15.537 + h1[dom][4],
15.538 + h1[dom][5][0], h1[dom][5][1]))
15.539 + outfiles[dom].flush()
15.540 +
15.541 + interval += options.interval
15.542 + time.sleep(1)
15.543 +
15.544 + for dom in range(0, NDOMAINS):
15.545 + outfiles[dom].close()
15.546 +
15.547 +# start xenbaked
15.548 +def start_xenbaked():
15.549 + global options
15.550 + global args
15.551 +
15.552 + os.system("killall -9 xenbaked")
15.553 + # assumes that xenbaked is in your path
15.554 + os.system("xenbaked --ms_per_sample=%d &" %
15.555 + options.mspersample)
15.556 + time.sleep(1)
15.557 +
15.558 +# stop xenbaked
15.559 +def stop_xenbaked():
15.560 + os.system("killall -s INT xenbaked")
15.561 +
15.562 +def main():
15.563 + global options
15.564 + global args
15.565 + global domains
15.566 +
15.567 + parser = setup_cmdline_parser()
15.568 + (options, args) = parser.parse_args()
15.569 +
15.570 + start_xenbaked()
15.571 + if options.live:
15.572 + show_livestats()
15.573 + else:
15.574 + try:
15.575 + writelog()
15.576 + except:
15.577 + print 'Quitting.'
15.578 + stop_xenbaked()
15.579 +
15.580 +if __name__ == "__main__":
15.581 + main()
16.1 --- a/tools/xentrace/setsize.c Tue Nov 15 15:56:47 2005 +0100
16.2 +++ b/tools/xentrace/setsize.c Tue Nov 15 16:24:31 2005 +0100
16.3 @@ -9,9 +9,9 @@ int main(int argc, char * argv[])
16.4 int xc_handle = xc_interface_open();
16.5
16.6 if (xc_tbuf_get_size(xc_handle, &size) != 0) {
16.7 - perror("Failure to get tbuf info from Xen. Guess size is 0.");
16.8 - printf("This may mean that tracing is not compiled into xen.\n");
16.9 - exit(1);
16.10 + perror("Failure to get tbuf info from Xen. Guess size is 0");
16.11 + printf("This may mean that tracing is not enabled in xen.\n");
16.12 + // exit(1);
16.13 }
16.14 else
16.15 printf("Current tbuf size: 0x%x\n", size);
16.16 @@ -25,9 +25,10 @@ int main(int argc, char * argv[])
16.17 perror("set_size Hypercall failure");
16.18 exit(1);
16.19 }
16.20 + printf("set_size succeeded.\n");
16.21
16.22 if (xc_tbuf_get_size(xc_handle, &size) != 0)
16.23 - perror("Failure to get tbuf info from Xen. Guess size is 0.");
16.24 + perror("Failure to get tbuf info from Xen. Tracing must be enabled first");
16.25 else
16.26 printf("New tbuf size: 0x%x\n", size);
16.27
17.1 --- a/tools/xm-test/lib/XmTestLib/Console.py Tue Nov 15 15:56:47 2005 +0100
17.2 +++ b/tools/xm-test/lib/XmTestLib/Console.py Tue Nov 15 16:24:31 2005 +0100
17.3 @@ -62,26 +62,37 @@ class XmConsole:
17.4 self.historySaveCmds = historySaveCmds
17.5 self.debugMe = False
17.6 self.limit = None
17.7 + self.delay = 2
17.8
17.9 consoleCmd = ["/usr/sbin/xm", "xm", "console", domain]
17.10
17.11 - if verbose:
17.12 - print "Console executing: " + str(consoleCmd)
17.13 + start = time.time()
17.14 +
17.15 + while (time.time() - start) < self.TIMEOUT:
17.16 + if verbose:
17.17 + print "Console executing: %s" % str(consoleCmd)
17.18
17.19 - pid, fd = pty.fork()
17.20 + pid, fd = pty.fork()
17.21
17.22 - if pid == 0:
17.23 - os.execvp("/usr/sbin/xm", consoleCmd[1:])
17.24 + if pid == 0:
17.25 + os.execvp("/usr/sbin/xm", consoleCmd[1:])
17.26 +
17.27 + self.consolePid = pid
17.28 + self.consoleFd = fd
17.29
17.30 - self.consolePid = pid
17.31 - self.consoleFd = fd
17.32 -
17.33 - tty.setraw(self.consoleFd, termios.TCSANOW)
17.34 + tty.setraw(self.consoleFd, termios.TCSANOW)
17.35
17.36 - bytes = self.__chewall(self.consoleFd)
17.37 - if bytes < 0:
17.38 - raise ConsoleError("Console didn't respond")
17.39 + bytes = self.__chewall(self.consoleFd)
17.40 +
17.41 + if bytes > 0:
17.42 + return
17.43
17.44 + if verbose:
17.45 + print "Console didn't attach, waiting %i sec..." % self.delay
17.46 + time.sleep(self.delay)
17.47 +
17.48 + raise ConsoleError("Console didn't respond after %i secs" % self.TIMEOUT)
17.49 +
17.50 def __addToHistory(self, line):
17.51 self.historyBuffer.append(line)
17.52 self.historyLines += 1
18.1 --- a/tools/xm-test/lib/XmTestLib/XenDomain.py Tue Nov 15 15:56:47 2005 +0100
18.2 +++ b/tools/xm-test/lib/XmTestLib/XenDomain.py Tue Nov 15 16:24:31 2005 +0100
18.3 @@ -228,7 +228,7 @@ class XmTestDomain(XenDomain):
18.4 # status, output = traceCommand("xm list")
18.5
18.6 XenDomain.start(self)
18.7 - waitForBoot()
18.8 +# waitForBoot()
18.9
18.10 def startNow(self):
18.11 XenDomain.start(self)
19.1 --- a/xen/arch/x86/domain.c Tue Nov 15 15:56:47 2005 +0100
19.2 +++ b/xen/arch/x86/domain.c Tue Nov 15 16:24:31 2005 +0100
19.3 @@ -578,7 +578,7 @@ static void load_segments(struct vcpu *n
19.4 put_user(regs->rcx, rsp-11) )
19.5 {
19.6 DPRINTK("Error while creating failsafe callback frame.\n");
19.7 - domain_crash();
19.8 + domain_crash(n->domain);
19.9 }
19.10
19.11 regs->entry_vector = TRAP_syscall;
20.1 --- a/xen/arch/x86/mm.c Tue Nov 15 15:56:47 2005 +0100
20.2 +++ b/xen/arch/x86/mm.c Tue Nov 15 16:24:31 2005 +0100
20.3 @@ -1461,6 +1461,22 @@ int get_page_type(struct pfn_info *page,
20.4 {
20.5 if ( unlikely((x & PGT_type_mask) != (type & PGT_type_mask) ) )
20.6 {
20.7 + if ( current->domain == page_get_owner(page) )
20.8 + {
20.9 + /*
20.10 + * This ensures functions like set_gdt() see up-to-date
20.11 + * type info without needing to clean up writable p.t.
20.12 + * state on the fast path.
20.13 + */
20.14 + LOCK_BIGLOCK(current->domain);
20.15 + cleanup_writable_pagetable(current->domain);
20.16 + y = page->u.inuse.type_info;
20.17 + UNLOCK_BIGLOCK(current->domain);
20.18 + /* Can we make progress now? */
20.19 + if ( ((y & PGT_type_mask) == (type & PGT_type_mask)) ||
20.20 + ((y & PGT_count_mask) == 0) )
20.21 + goto again;
20.22 + }
20.23 if ( ((x & PGT_type_mask) != PGT_l2_page_table) ||
20.24 ((type & PGT_type_mask) != PGT_l1_page_table) )
20.25 MEM_LOG("Bad type (saw %" PRtype_info
20.26 @@ -2529,7 +2545,7 @@ int do_update_va_mapping(unsigned long v
20.27 * not enough information in just a gpte to figure out how to
20.28 * (re-)shadow this entry.
20.29 */
20.30 - domain_crash();
20.31 + domain_crash(d);
20.32 }
20.33
20.34 rc = shadow_do_update_va_mapping(va, val, v);
20.35 @@ -2918,7 +2934,6 @@ int revalidate_l1(
20.36 {
20.37 l1_pgentry_t ol1e, nl1e;
20.38 int modified = 0, i;
20.39 - struct vcpu *v;
20.40
20.41 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
20.42 {
20.43 @@ -2944,7 +2959,6 @@ int revalidate_l1(
20.44
20.45 if ( unlikely(!get_page_from_l1e(nl1e, d)) )
20.46 {
20.47 - MEM_LOG("ptwr: Could not re-validate l1 page");
20.48 /*
20.49 * Make the remaining p.t's consistent before crashing, so the
20.50 * reference counts are correct.
20.51 */
20.52 (L1_PAGETABLE_ENTRIES - i) * sizeof(l1_pgentry_t));
20.53
20.54 /* Crash the offending domain. */
20.55 - set_bit(_DOMF_ctrl_pause, &d->domain_flags);
20.56 - for_each_vcpu ( d, v )
20.57 - vcpu_sleep_nosync(v);
20.58 + MEM_LOG("ptwr: Could not revalidate l1 page");
20.59 + domain_crash(d);
20.60 break;
20.61 }
20.62
20.63 @@ -3066,7 +3079,7 @@ static int ptwr_emulated_update(
20.64 unsigned int bytes,
20.65 unsigned int do_cmpxchg)
20.66 {
20.67 - unsigned long pfn;
20.68 + unsigned long pfn, l1va;
20.69 struct pfn_info *page;
20.70 l1_pgentry_t pte, ol1e, nl1e, *pl1e;
20.71 struct domain *d = current->domain;
20.72 @@ -3103,6 +3116,17 @@ static int ptwr_emulated_update(
20.73 old |= full;
20.74 }
20.75
20.76 + /*
20.77 + * We must not emulate an update to a PTE that is temporarily marked
20.78 + * writable by the batched ptwr logic, else we can corrupt page refcnts!
20.79 + */
20.80 + if ( ((l1va = d->arch.ptwr[PTWR_PT_ACTIVE].l1va) != 0) &&
20.81 + (l1_linear_offset(l1va) == l1_linear_offset(addr)) )
20.82 + ptwr_flush(d, PTWR_PT_ACTIVE);
20.83 + if ( ((l1va = d->arch.ptwr[PTWR_PT_INACTIVE].l1va) != 0) &&
20.84 + (l1_linear_offset(l1va) == l1_linear_offset(addr)) )
20.85 + ptwr_flush(d, PTWR_PT_INACTIVE);
20.86 +
20.87 /* Read the PTE that maps the page being updated. */
20.88 if (__copy_from_user(&pte, &linear_pg_table[l1_linear_offset(addr)],
20.89 sizeof(pte)))
20.90 @@ -3128,7 +3152,10 @@ static int ptwr_emulated_update(
20.91 /* Check the new PTE. */
20.92 nl1e = l1e_from_intpte(val);
20.93 if ( unlikely(!get_page_from_l1e(nl1e, d)) )
20.94 + {
20.95 + MEM_LOG("ptwr_emulate: could not get_page_from_l1e()");
20.96 return X86EMUL_UNHANDLEABLE;
20.97 + }
20.98
20.99 /* Checked successfully: do the update (write or cmpxchg). */
20.100 pl1e = map_domain_page(page_to_pfn(page));
20.101 @@ -3251,6 +3278,9 @@ int ptwr_do_page_fault(struct domain *d,
20.102 goto emulate;
20.103 #endif
20.104
20.105 + PTWR_PRINTK("ptwr_page_fault on l1 pt at va %lx, pfn %lx, eip %lx\n",
20.106 + addr, pfn, (unsigned long)regs->eip);
20.107 +
20.108 /* Get the L2 index at which this L1 p.t. is always mapped. */
20.109 l2_idx = page->u.inuse.type_info & PGT_va_mask;
20.110 if ( unlikely(l2_idx >= PGT_va_unknown) )
20.111 @@ -3295,10 +3325,6 @@ int ptwr_do_page_fault(struct domain *d,
20.112 goto emulate;
20.113 }
20.114
20.115 - PTWR_PRINTK("[%c] page_fault on l1 pt at va %lx, pt for %08lx, "
20.116 - "pfn %lx\n", PTWR_PRINT_WHICH,
20.117 - addr, l2_idx << L2_PAGETABLE_SHIFT, pfn);
20.118 -
20.119 /*
20.120 * We only allow one ACTIVE and one INACTIVE p.t. to be updated at at
20.121 * time. If there is already one, we must flush it out.
20.122 @@ -3317,6 +3343,10 @@ int ptwr_do_page_fault(struct domain *d,
20.123 goto emulate;
20.124 }
20.125
20.126 + PTWR_PRINTK("[%c] batched ptwr_page_fault at va %lx, pt for %08lx, "
20.127 + "pfn %lx\n", PTWR_PRINT_WHICH, addr,
20.128 + l2_idx << L2_PAGETABLE_SHIFT, pfn);
20.129 +
20.130 d->arch.ptwr[which].l1va = addr | 1;
20.131 d->arch.ptwr[which].l2_idx = l2_idx;
20.132 d->arch.ptwr[which].vcpu = current;
20.133 @@ -3348,7 +3378,7 @@ int ptwr_do_page_fault(struct domain *d,
20.134 /* Toss the writable pagetable state and crash. */
20.135 unmap_domain_page(d->arch.ptwr[which].pl1e);
20.136 d->arch.ptwr[which].l1va = 0;
20.137 - domain_crash();
20.138 + domain_crash(d);
20.139 return 0;
20.140 }
20.141
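
The hunk at lines 20.76-20.85 adds a guard to ptwr_emulated_update(): before an emulated PTE write is applied, any batched-writable page-table slot covering the same address must be flushed. Below is a self-contained toy of just the address test. Only l1_linear_offset() and the two ptwr slots come from the hunk; the toy names, the sample addresses, and the assumption that L1_PAGETABLE_SHIFT equals PAGE_SHIFT are illustrative, not taken from the changeset.

    /* toy_ptwr_check.c - sketch of the "same page?" test used before emulation */
    #include <stdio.h>

    #define TOY_PAGE_SHIFT 12   /* assumed 4K pages, as on x86-32 */

    /* Index of the L1 PTE that maps a given virtual address. */
    static unsigned long toy_l1_offset(unsigned long va)
    {
        return va >> TOY_PAGE_SHIFT;
    }

    /* A ptwr slot is live when its l1va is non-zero; it must be flushed when
     * the address being emulated falls in the same virtual page. */
    static int needs_flush(unsigned long slot_l1va, unsigned long addr)
    {
        return (slot_l1va != 0) &&
               (toy_l1_offset(slot_l1va) == toy_l1_offset(addr));
    }

    int main(void)
    {
        printf("%d\n", needs_flush(0xbffff008UL, 0xbffff010UL)); /* same page: 1 */
        printf("%d\n", needs_flush(0xbfffe008UL, 0xbffff010UL)); /* other page: 0 */
        printf("%d\n", needs_flush(0UL, 0xbffff010UL));          /* no live slot: 0 */
        return 0;
    }
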
21.1 --- a/xen/arch/x86/shadow_public.c Tue Nov 15 15:56:47 2005 +0100
21.2 +++ b/xen/arch/x86/shadow_public.c Tue Nov 15 16:24:31 2005 +0100
21.3 @@ -239,13 +239,13 @@ static pagetable_t page_table_convert(st
21.4
21.5 l4page = alloc_domheap_page(NULL);
21.6 if (l4page == NULL)
21.7 - domain_crash();
21.8 + domain_crash(d);
21.9 l4 = map_domain_page(page_to_pfn(l4page));
21.10 memset(l4, 0, PAGE_SIZE);
21.11
21.12 l3page = alloc_domheap_page(NULL);
21.13 if (l3page == NULL)
21.14 - domain_crash();
21.15 + domain_crash(d);
21.16 l3 = map_domain_page(page_to_pfn(l3page));
21.17 memset(l3, 0, PAGE_SIZE);
21.18
22.1 --- a/xen/arch/x86/vmx.c Tue Nov 15 15:56:47 2005 +0100
22.2 +++ b/xen/arch/x86/vmx.c Tue Nov 15 16:24:31 2005 +0100
22.3 @@ -191,12 +191,12 @@ static inline int long_mode_do_msr_read(
22.4 case MSR_FS_BASE:
22.5 if (!(VMX_LONG_GUEST(vc)))
22.6 /* XXX should it be GP fault */
22.7 - domain_crash();
22.8 + domain_crash(vc->domain);
22.9 __vmread(GUEST_FS_BASE, &msr_content);
22.10 break;
22.11 case MSR_GS_BASE:
22.12 if (!(VMX_LONG_GUEST(vc)))
22.13 - domain_crash();
22.14 + domain_crash(vc->domain);
22.15 __vmread(GUEST_GS_BASE, &msr_content);
22.16 break;
22.17 case MSR_SHADOW_GS_BASE:
22.18 @@ -260,7 +260,7 @@ static inline int long_mode_do_msr_write
22.19 case MSR_FS_BASE:
22.20 case MSR_GS_BASE:
22.21 if (!(VMX_LONG_GUEST(vc)))
22.22 - domain_crash();
22.23 + domain_crash(vc->domain);
22.24 if (!IS_CANO_ADDRESS(msr_content)){
22.25 VMX_DBG_LOG(DBG_LEVEL_1, "Not cano address of msr write\n");
22.26 vmx_inject_exception(vc, TRAP_gp_fault, 0);
22.27 @@ -273,7 +273,7 @@ static inline int long_mode_do_msr_write
22.28
22.29 case MSR_SHADOW_GS_BASE:
22.30 if (!(VMX_LONG_GUEST(vc)))
22.31 - domain_crash();
22.32 + domain_crash(vc->domain);
22.33 vc->arch.arch_vmx.msr_content.shadow_gs = msr_content;
22.34 wrmsrl(MSR_SHADOW_GS_BASE, msr_content);
22.35 break;
23.1 --- a/xen/arch/x86/vmx_vlapic.c Tue Nov 15 15:56:47 2005 +0100
23.2 +++ b/xen/arch/x86/vmx_vlapic.c Tue Nov 15 16:24:31 2005 +0100
23.3 @@ -28,7 +28,7 @@
23.4 #include <asm/vmx.h>
23.5 #include <asm/vmx_platform.h>
23.6 #include <asm/vmx_vlapic.h>
23.7 -
23.8 +#include <asm/vmx_vioapic.h>
23.9 #include <xen/lib.h>
23.10 #include <xen/sched.h>
23.11 #include <asm/current.h>
23.12 @@ -322,10 +322,8 @@ vlapic_EOI_set(struct vlapic *vlapic)
23.13 vlapic_clear_isr(vlapic, vector);
23.14 vlapic_update_ppr(vlapic);
23.15
23.16 - if (test_and_clear_bit(vector, &vlapic->tmr[0])) {
23.17 - extern void ioapic_update_EOI(struct domain *d, int vector);
23.18 + if (test_and_clear_bit(vector, &vlapic->tmr[0]))
23.19 ioapic_update_EOI(vlapic->domain, vector);
23.20 - }
23.21 }
23.22
23.23 int vlapic_check_vector(struct vlapic *vlapic,
24.1 --- a/xen/arch/x86/vmx_vmcs.c Tue Nov 15 15:56:47 2005 +0100
24.2 +++ b/xen/arch/x86/vmx_vmcs.c Tue Nov 15 16:24:31 2005 +0100
24.3 @@ -157,13 +157,13 @@ static void vmx_map_io_shared_page(struc
24.4 mpfn = get_mfn_from_pfn(E820_MAP_PAGE >> PAGE_SHIFT);
24.5 if (mpfn == INVALID_MFN) {
24.6 printk("Can not find E820 memory map page for VMX domain.\n");
24.7 - domain_crash();
24.8 + domain_crash(d);
24.9 }
24.10
24.11 p = map_domain_page(mpfn);
24.12 if (p == NULL) {
24.13 printk("Can not map E820 memory map page for VMX domain.\n");
24.14 - domain_crash();
24.15 + domain_crash(d);
24.16 }
24.17
24.18 e820_map_nr = *(p + E820_MAP_NR_OFFSET);
24.19 @@ -182,7 +182,7 @@ static void vmx_map_io_shared_page(struc
24.20 printk("Can not get io request shared page"
24.21 " from E820 memory map for VMX domain.\n");
24.22 unmap_domain_page(p);
24.23 - domain_crash();
24.24 + domain_crash(d);
24.25 }
24.26 unmap_domain_page(p);
24.27
24.28 @@ -190,13 +190,13 @@ static void vmx_map_io_shared_page(struc
24.29 mpfn = get_mfn_from_pfn(gpfn);
24.30 if (mpfn == INVALID_MFN) {
24.31 printk("Can not find io request shared page for VMX domain.\n");
24.32 - domain_crash();
24.33 + domain_crash(d);
24.34 }
24.35
24.36 p = map_domain_page(mpfn);
24.37 if (p == NULL) {
24.38 printk("Can not map io request shared page for VMX domain.\n");
24.39 - domain_crash();
24.40 + domain_crash(d);
24.41 }
24.42 d->arch.vmx_platform.shared_page_va = (unsigned long)p;
24.43
25.1 --- a/xen/common/domain.c Tue Nov 15 15:56:47 2005 +0100
25.2 +++ b/xen/common/domain.c Tue Nov 15 16:24:31 2005 +0100
25.3 @@ -125,18 +125,27 @@ void domain_kill(struct domain *d)
25.4 }
25.5
25.6
25.7 -void domain_crash(void)
25.8 +void domain_crash(struct domain *d)
25.9 {
25.10 - printk("Domain %d (vcpu#%d) crashed on cpu#%d:\n",
25.11 - current->domain->domain_id, current->vcpu_id, smp_processor_id());
25.12 - show_registers(guest_cpu_user_regs());
25.13 - domain_shutdown(SHUTDOWN_crash);
25.14 + if ( d == current->domain )
25.15 + {
25.16 + printk("Domain %d (vcpu#%d) crashed on cpu#%d:\n",
25.17 + d->domain_id, current->vcpu_id, smp_processor_id());
25.18 + show_registers(guest_cpu_user_regs());
25.19 + }
25.20 + else
25.21 + {
25.22 + printk("Domain %d reported crashed by domain %d on cpu#%d:\n",
25.23 + d->domain_id, current->domain->domain_id, smp_processor_id());
25.24 + }
25.25 +
25.26 + domain_shutdown(d, SHUTDOWN_crash);
25.27 }
25.28
25.29
25.30 void domain_crash_synchronous(void)
25.31 {
25.32 - domain_crash();
25.33 + domain_crash(current->domain);
25.34 for ( ; ; )
25.35 do_softirq();
25.36 }
25.37 @@ -178,10 +187,9 @@ static __init int domain_shutdown_finali
25.38 __initcall(domain_shutdown_finaliser_init);
25.39
25.40
25.41 -void domain_shutdown(u8 reason)
25.42 +void domain_shutdown(struct domain *d, u8 reason)
25.43 {
25.44 - struct domain *d = current->domain;
25.45 - struct vcpu *v;
25.46 + struct vcpu *v;
25.47
25.48 if ( d->domain_id == 0 )
25.49 {
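
The domain.c hunk above changes domain_crash() from "crash the current domain" to "crash the named domain", printing registers only in the self-crash case and then calling the re-signatured domain_shutdown(d, SHUTDOWN_crash). The self-contained toy below mirrors just that branch so the two call paths are easy to see; the struct, the stand-in for current->domain, and the printed strings are illustrative only.

    /* toy_domain_crash.c - sketch of the self-crash vs. foreign-crash paths */
    #include <stdio.h>

    struct toy_domain { int domain_id; };

    static struct toy_domain dom0 = { 0 }, domU = { 3 };
    static struct toy_domain *current_domain = &dom0;  /* stand-in for current->domain */

    static void toy_domain_crash(struct toy_domain *d)
    {
        if (d == current_domain)
            /* Crashing ourselves: the real code also dumps guest registers. */
            printf("Domain %d crashed (self)\n", d->domain_id);
        else
            /* Crashing a domain we are servicing, e.g. from a VMX helper;
             * the caller gets control back and must still bail out cleanly. */
            printf("Domain %d reported crashed by domain %d\n",
                   d->domain_id, current_domain->domain_id);
        /* ...followed by domain_shutdown(d, SHUTDOWN_crash) in the real code. */
    }

    int main(void)
    {
        toy_domain_crash(&dom0);   /* self-crash path */
        toy_domain_crash(&domU);   /* foreign-crash path */
        return 0;
    }
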
26.1 --- a/xen/common/grant_table.c Tue Nov 15 15:56:47 2005 +0100
26.2 +++ b/xen/common/grant_table.c Tue Nov 15 16:24:31 2005 +0100
26.3 @@ -29,6 +29,7 @@
26.4 #include <xen/shadow.h>
26.5 #include <xen/mm.h>
26.6 #include <acm/acm_hooks.h>
26.7 +#include <xen/trace.h>
26.8
26.9 #if defined(CONFIG_X86_64)
26.10 #define GRANT_PTE_FLAGS (_PAGE_PRESENT|_PAGE_ACCESSED|_PAGE_DIRTY|_PAGE_USER)
26.11 @@ -379,6 +380,8 @@ static int
26.12 }
26.13 }
26.14
26.15 + TRACE_1D(TRC_MEM_PAGE_GRANT_MAP, dom);
26.16 +
26.17 ld->grant_table->maptrack[handle].domid = dom;
26.18 ld->grant_table->maptrack[handle].ref_and_flags =
26.19 (ref << MAPTRACK_REF_SHIFT) |
26.20 @@ -463,6 +466,8 @@ static int
26.21 return GNTST_bad_domain;
26.22 }
26.23
26.24 + TRACE_1D(TRC_MEM_PAGE_GRANT_UNMAP, dom);
26.25 +
26.26 act = &rd->grant_table->active[ref];
26.27 sha = &rd->grant_table->shared[ref];
26.28
26.29 @@ -802,6 +807,8 @@ gnttab_transfer(
26.30 page_set_owner(page, e);
26.31
26.32 spin_unlock(&e->page_alloc_lock);
26.33 +
26.34 + TRACE_1D(TRC_MEM_PAGE_GRANT_TRANSFER, e->domain_id);
26.35
26.36 /* Tell the guest about its new page frame. */
26.37 sha = &e->grant_table->shared[gop->ref];
27.1 --- a/xen/common/schedule.c Tue Nov 15 15:56:47 2005 +0100
27.2 +++ b/xen/common/schedule.c Tue Nov 15 16:24:31 2005 +0100
27.3 @@ -13,15 +13,6 @@
27.4 *
27.5 */
27.6
27.7 -/*#define WAKE_HISTO*/
27.8 -/*#define BLOCKTIME_HISTO*/
27.9 -
27.10 -#if defined(WAKE_HISTO)
27.11 -#define BUCKETS 31
27.12 -#elif defined(BLOCKTIME_HISTO)
27.13 -#define BUCKETS 200
27.14 -#endif
27.15 -
27.16 #include <xen/config.h>
27.17 #include <xen/init.h>
27.18 #include <xen/lib.h>
27.19 @@ -45,6 +36,8 @@ extern void arch_getdomaininfo_ctxt(stru
27.20 static char opt_sched[10] = "sedf";
27.21 string_param("sched", opt_sched);
27.22
27.23 +/*#define WAKE_HISTO*/
27.24 +/*#define BLOCKTIME_HISTO*/
27.25 #if defined(WAKE_HISTO)
27.26 #define BUCKETS 31
27.27 #elif defined(BLOCKTIME_HISTO)
27.28 @@ -205,9 +198,7 @@ void vcpu_wake(struct vcpu *v)
27.29 if ( likely(domain_runnable(v)) )
27.30 {
27.31 SCHED_OP(wake, v);
27.32 -#ifdef WAKE_HISTO
27.33 v->wokenup = NOW();
27.34 -#endif
27.35 }
27.36 clear_bit(_VCPUF_cpu_migrated, &v->vcpu_flags);
27.37 spin_unlock_irqrestore(&schedule_data[v->processor].schedule_lock, flags);
27.38 @@ -267,7 +258,7 @@ long do_sched_op(int cmd, unsigned long
27.39 {
27.40 TRACE_3D(TRC_SCHED_SHUTDOWN,
27.41 current->domain->domain_id, current->vcpu_id, arg);
27.42 - domain_shutdown((u8)arg);
27.43 + domain_shutdown(current->domain, (u8)arg);
27.44 break;
27.45 }
27.46
27.47 @@ -416,11 +407,26 @@ static void __enter_scheduler(void)
27.48 return continue_running(prev);
27.49 }
27.50
27.51 + TRACE_2D(TRC_SCHED_SWITCH_INFPREV,
27.52 + prev->domain->domain_id, now - prev->lastschd);
27.53 + TRACE_3D(TRC_SCHED_SWITCH_INFNEXT,
27.54 + next->domain->domain_id, now - next->wokenup, r_time);
27.55 +
27.56 clear_bit(_VCPUF_running, &prev->vcpu_flags);
27.57 set_bit(_VCPUF_running, &next->vcpu_flags);
27.58
27.59 perfc_incrc(sched_ctx);
27.60
27.61 + /*
27.62 + * Logic of wokenup field in domain struct:
27.63 + * Used to calculate "waiting time", which is the time that a domain
27.64 + * spends being "runnable", but not actually running. wokenup is set
27.65 + * set whenever a domain wakes from sleeping. However, if wokenup is not
27.66 + * also set here then a preempted runnable domain will get a screwed up
27.67 + * "waiting time" value next time it is scheduled.
27.68 + */
27.69 + prev->wokenup = NOW();
27.70 +
27.71 #if defined(WAKE_HISTO)
27.72 if ( !is_idle_task(next->domain) && next->wokenup )
27.73 {
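
The comment block added in __enter_scheduler() above explains why prev->wokenup must be refreshed on every context switch: "waiting time" is charged as the gap between wokenup and the next dispatch. The self-contained toy below walks through that arithmetic with made-up timestamps; only the idea of wokenup and NOW() comes from the hunk, the numbers and struct are illustrative.

    /* toy_waiting_time.c - sketch of the waiting-time bookkeeping */
    #include <stdio.h>

    struct toy_vcpu { unsigned long long wokenup; };

    /* Waiting time charged when the vcpu is finally dispatched. */
    static unsigned long long waiting_ns(const struct toy_vcpu *v,
                                         unsigned long long now)
    {
        return now - v->wokenup;
    }

    int main(void)
    {
        struct toy_vcpu v = { .wokenup = 1000 };

        /* Woke at t=1000, dispatched at t=1800: 800 units of genuine waiting. */
        printf("%llu\n", waiting_ns(&v, 1800));

        /* It then runs and is preempted while still runnable at t=2600.
         * Refreshing wokenup at the switch (prev->wokenup = NOW() in the hunk)
         * means a later dispatch at t=3000 is charged only 400 units, not the
         * 2000 a stale timestamp would give. */
        v.wokenup = 2600;
        printf("%llu\n", waiting_ns(&v, 3000));
        return 0;
    }
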
28.1 --- a/xen/include/asm-ia64/vmx_vpd.h Tue Nov 15 15:56:47 2005 +0100
28.2 +++ b/xen/include/asm-ia64/vmx_vpd.h Tue Nov 15 16:24:31 2005 +0100
28.3 @@ -122,7 +122,7 @@ extern unsigned int opt_vmx_debug_level;
28.4 do { \
28.5 printk("__vmx_bug at %s:%d\n", __FILE__, __LINE__); \
28.6 show_registers(regs); \
28.7 - domain_crash(); \
28.8 + domain_crash(current->domain); \
28.9 } while (0)
28.10
28.11 #endif //__ASSEMBLY__
29.1 --- a/xen/include/asm-x86/vmx_vioapic.h Tue Nov 15 15:56:47 2005 +0100
29.2 +++ b/xen/include/asm-x86/vmx_vioapic.h Tue Nov 15 16:24:31 2005 +0100
29.3 @@ -114,6 +114,8 @@ void vmx_vioapic_set_irq(struct domain *
29.4
29.5 int vmx_vioapic_add_lapic(struct vlapic *vlapic, struct vcpu *v);
29.6
29.7 +void ioapic_update_EOI(struct domain *d, int vector);
29.8 +
29.9 #ifdef VMX_DOMAIN_SAVE_RESTORE
29.10 void ioapic_save(QEMUFile* f, void* opaque);
29.11 int ioapic_load(QEMUFile* f, void* opaque, int version_id);
30.1 --- a/xen/include/public/trace.h Tue Nov 15 15:56:47 2005 +0100
30.2 +++ b/xen/include/public/trace.h Tue Nov 15 16:24:31 2005 +0100
30.3 @@ -14,6 +14,7 @@
30.4 #define TRC_SCHED 0x0002f000 /* Xen Scheduler trace */
30.5 #define TRC_DOM0OP 0x0004f000 /* Xen DOM0 operation trace */
30.6 #define TRC_VMX 0x0008f000 /* Xen VMX trace */
30.7 +#define TRC_MEM 0x000af000 /* Xen memory trace */
30.8 #define TRC_ALL 0xfffff000
30.9
30.10 /* Trace subclasses */
30.11 @@ -40,6 +41,12 @@
30.12 #define TRC_SCHED_S_TIMER_FN (TRC_SCHED + 11)
30.13 #define TRC_SCHED_T_TIMER_FN (TRC_SCHED + 12)
30.14 #define TRC_SCHED_DOM_TIMER_FN (TRC_SCHED + 13)
30.15 +#define TRC_SCHED_SWITCH_INFPREV (TRC_SCHED + 14)
30.16 +#define TRC_SCHED_SWITCH_INFNEXT (TRC_SCHED + 15)
30.17 +
30.18 +#define TRC_MEM_PAGE_GRANT_MAP (TRC_MEM + 1)
30.19 +#define TRC_MEM_PAGE_GRANT_UNMAP (TRC_MEM + 2)
30.20 +#define TRC_MEM_PAGE_GRANT_TRANSFER (TRC_MEM + 3)
30.21
30.22 /* trace events per subclass */
30.23 #define TRC_VMX_VMEXIT (TRC_VMXEXIT + 1)
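
The trace.h hunk above adds a TRC_MEM class and three grant-table events inside it, emitted by the TRACE_1D() calls in the grant_table.c hunk. The self-contained sketch below shows how the class/event encoding composes: masking an event id with TRC_ALL recovers its class, and the low 12 bits identify the event within the class. The TRC_* values are copied from the hunk; the rest is illustrative.

    /* toy_trace_ids.c - sketch of the trace class/event encoding */
    #include <stdio.h>

    #define TRC_MEM                     0x000af000
    #define TRC_ALL                     0xfffff000
    #define TRC_MEM_PAGE_GRANT_MAP      (TRC_MEM + 1)
    #define TRC_MEM_PAGE_GRANT_UNMAP    (TRC_MEM + 2)
    #define TRC_MEM_PAGE_GRANT_TRANSFER (TRC_MEM + 3)

    int main(void)
    {
        unsigned int events[] = {
            TRC_MEM_PAGE_GRANT_MAP,
            TRC_MEM_PAGE_GRANT_UNMAP,
            TRC_MEM_PAGE_GRANT_TRANSFER,
        };

        for (int i = 0; i < 3; i++) {
            /* Class lives in the high bits, event number in the low 12 bits. */
            printf("event %#x: class %#x, id %u\n",
                   events[i], events[i] & TRC_ALL, events[i] & 0xfff);
        }
        return 0;
    }
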
31.1 --- a/xen/include/xen/sched.h Tue Nov 15 15:56:47 2005 +0100
31.2 +++ b/xen/include/xen/sched.h Tue Nov 15 16:24:31 2005 +0100
31.3 @@ -220,14 +220,15 @@ extern int set_info_guest(struct domain
31.4 struct domain *find_domain_by_id(domid_t dom);
31.5 extern void domain_destruct(struct domain *d);
31.6 extern void domain_kill(struct domain *d);
31.7 -extern void domain_shutdown(u8 reason);
31.8 +extern void domain_shutdown(struct domain *d, u8 reason);
31.9 extern void domain_pause_for_debugger(void);
31.10
31.11 /*
31.12 - * Mark current domain as crashed. This function returns: the domain is not
31.13 - * synchronously descheduled from any processor.
31.14 + * Mark specified domain as crashed. This function always returns, even if the
31.15 + * caller is the specified domain. The domain is not synchronously descheduled
31.16 + * from any processor.
31.17 */
31.18 -extern void domain_crash(void);
31.19 +extern void domain_crash(struct domain *d);
31.20
31.21 /*
31.22 * Mark current domain as crashed and synchronously deschedule from the local