Jan Beulich [Mon, 20 Jan 2014 08:50:20 +0000 (09:50 +0100)]
compat/memory: fix build with old gcc
struct xen_add_to_physmap_batch's size field being uint16_t causes old
compiler versions to warn about the pointless range check done inside
compat_handle_okay().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
By defining xenmem_add_to_physmap_one() with a uint16_t 'space' parameter, the
now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
unsigned int to uint16_t, which changes the value switch()'d upon.
This wouldn't be noticed with any upstream code (of which I am aware), but was
discovered because of the XenServer support for legacy Windows PV drivers,
which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
The current Windows PV drivers don't do this any more, but we 'fix' Xen to
support running VMs with out-of-date tools.
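A minimal C illustration of that truncation (the value is hypothetical, standing in for a legacy space with the top bit set; assumes <stdint.h>):

    unsigned int space = 0x80000000;   /* legacy driver: top bit set */
    uint16_t narrowed = space;         /* implicit conversion yields 0 */
    /* A switch on 'narrowed' now takes a different (or the default)
     * case than a switch on 'space' would have. */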
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-Ack: Ian Campbell <Ian.Campbell@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
Yang Zhang [Fri, 17 Jan 2014 15:00:21 +0000 (16:00 +0100)]
nested EPT: fixing wrong handling for L2 guest's direct mmio access
An L2 guest may access a physical device directly (nested VT-d). For such
accesses, the shadow EPT table should point at the device's MMIO. But in the
current logic, L0 does not distinguish MMIO belonging to qemu from MMIO
belonging to a physical device when building the shadow EPT table. This is
wrong. This patch sets up correct shadow EPT table entries for such MMIO ranges.
Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com> Acked-by: Tim Deegan <tim@xen.org>
Frediano Ziglio [Fri, 17 Jan 2014 14:58:27 +0000 (15:58 +0100)]
mce: fix race condition in mctelem_xchg_head
The function (mctelem_xchg_head()) used to exchange mce telemetry
list heads is racy. It may write to the head twice, with the second
write linking to an element in the wrong state.
Consider two threads: T1 inserting an element onto the committed list, and
T2 trying to consume it.
1. T1 starts inserting an element (A) and sets the prev pointer (mcte_prev).
2. T1 is interrupted after the cmpxchg succeeded.
3. T2 gets the list, changes element A and updates the committed list
head.
4. T1 resumes, reads the prev pointer from memory again and compares it
with the result of the cmpxchg, which succeeded - but in the meantime
prev has changed in memory.
5. T1 thinks the cmpxchg failed and goes around the loop again,
linking the head to A again.
To solve the race, use a temporary variable for the prev pointer.
*linkp (which points to a field in the element) must be updated before
the cmpxchg(), as after a successful cmpxchg the element might be
immediately removed and reinitialized.
The wmb() prior to the cmpxchgptr() call is not necessary, since
cmpxchgptr() is already a full memory barrier. This wmb() is thus removed.
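A minimal C sketch of the fixed pattern, assuming a cmpxchgptr() that returns the value previously in memory (the structure layout is simplified, not the actual mctelem code):

    struct mctelem_ent {
        struct mctelem_ent *mcte_prev;   /* the field *linkp points at */
    };

    static void xchg_head(struct mctelem_ent **headp,
                          struct mctelem_ent *new)
    {
        for ( ; ; )
        {
            /* Snapshot the head into a local; never re-read it for the
             * comparison, or a concurrent consumer can make a successful
             * cmpxchg look like a failure (the race described above). */
            struct mctelem_ent *prev = *headp;

            /* Must be written before the cmpxchg: once published, the
             * element may be consumed and reinitialized immediately. */
            new->mcte_prev = prev;

            if ( cmpxchgptr(headp, prev, new) == prev )
                break;   /* published; no wmb() needed */
        }
    }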
Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com> Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
Ian Jackson [Mon, 13 Jan 2014 18:15:37 +0000 (18:15 +0000)]
xl: Always use "fast" migration resume protocol
As Ian Campbell writes in http://bugs.xenproject.org/xen/bug/30:
There are two mechanisms by which a suspend can be aborted and the
original domain resumed.
The older method is that the toolstack resets a bunch of state (see
tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
restarts the domain. The domain will see HYPERVISOR_suspend return 0
and will continue without any realisation that it is actually
running in the original domain and not in a new one. This method is
supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
but it is not.
The other method is newer and in this case the toolstack arranges
that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
domain will observe this and realise that it has been restarted in
the same domain and will behave accordingly. This method is
implemented, correctly AFAIK, by
libxl_domain_resume(suspend_cancel=1).
Attempting to use the old method without doing all of that work simply
causes the guest to crash. Implementing the work required for the old
method, or checking that domains actually support the new method,
is not feasible at this stage of the 4.4 release.
So, always use the new method, without regard to the declarations of
support by the guest. This is a strict improvement: guests which do
in fact support the new method will work, whereas ones which don't are
no worse off.
There are two call sites of libxl_domain_resume that need fixing, both
in the migration error path.
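Illustratively, the change at each of those sites amounts to the following (a hedged sketch; the exact argument list of libxl_domain_resume() is an assumption here):

    /* Always request the "fast" (SUSPEND_CANCEL) resume protocol,
     * regardless of what the guest declared it supports. */
    rc = libxl_domain_resume(ctx, domid, 1 /* suspend_cancel */, NULL);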
With this change I observe a correct and successful resumption of a
Debian wheezy guest with a Linux 3.4.70 kernel after a migration
attempt which I arranged to fail by nobbling the block hotplug script.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Acked-by: Ian Campbell <Ian.Campbell@citrix.com> CC: konrad.wilk@oracle.com CC: David Vrabel <david.vrabel@citrix.com> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Wei Liu [Mon, 13 Jan 2014 11:52:28 +0000 (11:52 +0000)]
libxl: disallow PCI device assignment for HVM guest when PoD is enabled
This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").
This change is restricted to HVM guests, as only HVM was covered by the
Xend counterpart. We're late in the release cycle, so the change should
only do what's necessary. We can revisit it if we need to do
the same thing for PV guests in the future.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Julien Grall [Tue, 14 Jan 2014 13:36:55 +0000 (13:36 +0000)]
xen/arm: p2m: Correctly flush TLB in create_p2m_entries
The p2m is shared between the VCPUs of each domain. Currently Xen only
flushes the TLB on the local PCPU, which can result in a mismatch between
the mappings in the p2m and the TLBs.
Flush TLB entries used by this domain on every PCPU. The flush can also be
moved out of the loop because:
- ALLOCATE: only called for dom0 RAM allocation, so the flush is never
needed here.
- INSERT: if valid == 1, we have replaced a page that already belongs to
the domain, and a VCPU could write to the wrong page. This can happen for
dom0 with the 1:1 mapping, because the mapping is not removed from the p2m.
- REMOVE: except for grant tables (replace_grant_host_mapping), each
call to guest_physmap_remove_page is protected by its callers via a
get_page -> ... -> guest_physmap_remove_page -> ... -> put_page
sequence, so the page can't be allocated to another domain until the
last put_page.
- RELINQUISH: the domain is not running anymore, so we don't care.
Also avoid leaking a foreign page if INSERT places a new mapping
on top of an existing foreign mapping.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Julien Grall [Thu, 9 Jan 2014 16:58:03 +0000 (16:58 +0000)]
xen/arm: correct flush_tlb_mask behaviour
On ARM, flush_tlb_mask is used in the common code:
- alloc_heap_pages: the flush is only called if the newly allocated
page was previously used by a domain, so we only need to flush non-secure,
non-hypervisor, inner-shareable TLB entries.
- common/grant_table.c: every call to flush_tlb_mask is for the current
domain, so a TLB flush by current VMID, inner-shareable, is enough.
The current code only flushes the hypervisor TLB on the current PCPU. For
now, flush non-secure, non-hypervisor TLBs on every PCPU.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
libxl: ocaml: guard x86-specific functions behind an ifdef
The various cpuid functions are not available on ARM, so this
makes them raise an OCaml exception. Omitting the functions
completely results in a link failure in oxenstored due to the
missing symbols, so this is preferable to the much bigger patch
that would result from adding conditional compilation into the
OCaml interfaces.
With this patch, oxenstored can successfully start a domain on
Xen/ARM.
Signed-off-by: Anil Madhavapeddy <anil@recoil.org> Acked-by: David Scott <dave.scott@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Even with this fix there is a period between the flush and the unmap during
which the processor may speculatively bring data into the cache. The solution
is to map this region uncached, or to use the HCR.DC bit to mark all guest
accesses as cacheable; commit 89eb02c2204a ("xen: arm: force guest memory
accesses to cacheable when MMU is disabled") arranges to do the latter.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Ian Campbell [Wed, 8 Jan 2014 14:09:01 +0000 (14:09 +0000)]
xen: arm: force guest memory accesses to cacheable when MMU is disabled
On ARM, guest OSes are started with MMU and caches disabled (as they are on
native hardware); however, caching is enabled in the domain running the
builder, and therefore we must ensure cache consistency.
The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
cache after loading images into guest memory") is to flush the caches after
loading the various blobs into guest RAM. However this approach has two
shortcomings:
- The cache flush primitives available to userspace on arm32 are not
sufficient for our needs.
- There is a race between the cache flush and the unmap of the guest page,
during which the processor might speculatively dirty the cache line again.
(Of these, the second is the more fundamental.)
This patch makes use of the hardware functionality to force all accesses
made from guest mode to be cached (the HCR.DC, "default cacheable", bit).
This means that we don't need to worry about the domain builder's writes
being cached, because the guest's "uncached" accesses will actually be cached.
Unfortunately the use of HCR.DC is incompatible with the guest enabling its
MMU (the SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that
we can detect when this happens and disable HCR.DC. This is done with the
HCR.TVM (trap virtual memory controls) bit, which also causes various other
registers to be trapped, all of which can be passed straight through to the
underlying register. Once the guest has enabled its MMU we no longer need to
trap, so there is no ongoing overhead. In my tests, Linux makes about half a
dozen accesses to these registers before the MMU is enabled; I would expect
other OSes to behave similarly (the sequence of writes needed to set up the
MMU is pretty obvious).
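A hedged C sketch of the trap handler idea (the macro and constant names follow Xen's asm-arm conventions but are assumptions here, not quotes from the patch):

    /* Invoked for a trapped SCTLR write (taken because HCR.TVM is set):
     * pass the value through to the real register, and once the guest
     * turns on SCTLR.M drop both HCR.DC (stop forcing cacheable) and
     * HCR.TVM (stop trapping), so there is no further overhead. */
    static void vtrap_sctlr_write(register_t val)
    {
        WRITE_SYSREG(val, SCTLR_EL1);   /* pass-through to hardware */

        if ( val & SCTLR_M )            /* guest just enabled its MMU */
        {
            register_t hcr = READ_SYSREG(HCR_EL2);
            WRITE_SYSREG(hcr & ~(HCR_DC | HCR_TVM), HCR_EL2);
        }
    }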
Apart from this unfortunate need to trap these accesses, the approach is also
incompatible with guests which attempt to do DMA operations with their MMU
disabled. In practice this means guests with passthrough, which we do not yet
support. Since a typical guest (including dom0) does not access devices which
require DMA until it is fully up and running with paging enabled, the
main risk is to in-guest firmware which does DMA, i.e. running EFI in a guest
with a disk passed through and booting from that disk. Since we know that dom0
is not using any such firmware, and we do not support device passthrough to
guests yet, we can live with this restriction. Once passthrough is implemented
this will need to be revisited.
The patch includes a couple of seemingly unrelated but necessary changes:
- HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
with the existing set of system registers we handled, but broke with the new
ones introduced here.
- The defines used to decode the HSR system register fields were named the
same as the registers themselves, which breaks the accessor macros. This had
gone unnoticed because handling of the existing trapped registers did not
require accessing the underlying hardware register. Rename those constants
with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
This patch has survived thousands of boot loops on a Midway system.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Julien Grall <julien.grall@linaro.org>
Rob Hoes [Fri, 10 Jan 2014 13:52:04 +0000 (13:52 +0000)]
libxl: ocaml: use 'for_app_registration' in osevent callbacks
This allows the application to pass a token to libxl in the fd/timeout
registration callbacks, which it receives back in modification or
deregistration callbacks.
It turns out that this is essential for timeout handling, in order to
identify which timeout to change on a modify event.
Signed-off-by: Rob Hoes <rob.hoes@citrix.com> Acked-by: David Scott <dave.scott@eu.citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
David Vrabel [Fri, 10 Jan 2014 16:46:33 +0000 (17:46 +0100)]
x86: map portion of kexec crash area that is within the direct map area
Commit 7113a45451a9f656deeff070e47672043ed83664 ("kexec/x86: do not map
crash kernel area") causes fatal page faults when loading a crash
image. The attempt to zero the first control page allocated from the
crash region faults, because the VA returned by map_domain_page() has no
mapping.
The fault occurs on non-debug builds of Xen when the crash area is
below 5 TiB (which covers most systems).
The assumption that the crash area mapping was not used is incorrect.
map_domain_page() is used to temporarily map the crash area when loading an
image and building the image's page tables, and for addresses within the
direct map area it relies on the direct mapping; thus the mapping is required
if the crash area is in the direct map area.
Reintroduce the mapping, but only for the portions of the crash area that
are within the direct map area.
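A hedged sketch of the clamping idea (the limit constant and the kexec_crash_area field names are illustrative assumptions):

    /* Map only the part of the crash area that lies inside the direct
     * map; anything above the limit stays unmapped and must be reached
     * via map_domain_page() instead. */
    paddr_t start = kexec_crash_area.start;
    paddr_t end   = kexec_crash_area.start + kexec_crash_area.size;

    if ( end > DIRECT_MAP_LIMIT )
        end = DIRECT_MAP_LIMIT;
    if ( start < end )
        map_pages_to_xen((unsigned long)__va(start), start >> PAGE_SHIFT,
                         (end - start) >> PAGE_SHIFT, PAGE_HYPERVISOR);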
Reported-by: Don Slutz <dslutz@verizon.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Tested-by: Don Slutz <dslutz@verizon.com> Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com> Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
This is really just a band-aid - kexec shouldn't rely on the crash area
always being mapped when it is in the direct mapping range (and it didn't use
to in its previous form). That's primarily because map_domain_page()
(needed when the area is outside the direct mapping range) may be
unusable when wanting to kexec due to a crash, but also because, in the
case of PFN compression, the kexec range (if specified on the command
line) could fall into a hole between used memory ranges. (While we're
currently only ignoring memory at the top of the physical address
space, it's pretty clear that sooner or later we will want that
selection to become more sophisticated, in order to maximize the memory
made use of.)
Andrew Cooper [Fri, 10 Jan 2014 16:45:01 +0000 (17:45 +0100)]
dbg_rw_guest_mem: need to call put_gfn in error path
Using a 1G hvm domU (in grub) and gdbsx:
(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB. Attempting to continue with the default i8086 settings.
The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b
Will reproduce this bug.
With a debug=y build you will get:
Assertion '!preempt_count()' failed at preempt.c:37
For a debug=n build you will get a dom0 VCPU hung (at some point) in:
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Don Slutz <dslutz@verizon.com> Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Stefan Bader [Wed, 8 Jan 2014 17:26:59 +0000 (18:26 +0100)]
libxl: Auto-assign NIC devids in initiate_domain_create
This changes initiate_domain_create to walk through the NIC definitions
and automatically assign devids to those which do not have one assigned.
The devids are needed later in domcreate_launch_dm (for HVM domains
using emulated NICs). The command string for starting the device-model
has those ids as part of its arguments.
Assignment of devids in the hotplug case is handled by libxl_device_nic_add
but that would be called too late in the startup case.
I also moved the call to libxl__device_nic_setdefault here as this seems
to be the only path leading there and avoids doing the loop a third time.
The two loops are trying to handle a case where the caller sets some devids
(not sure that should be valid) but leaves some unset.
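A hedged sketch of that two-loop assignment (illustrative, not the actual libxl code):

    /* First pass: find the highest devid the caller already assigned.
     * Second pass: give every NIC still at the unset value (assumed to
     * be negative) the next unused devid, so the device-model command
     * line can be built. */
    static void assign_nic_devids(libxl_domain_config *d_config)
    {
        int i, max_devid = -1;

        for (i = 0; i < d_config->num_nics; i++)
            if (d_config->nics[i].devid > max_devid)
                max_devid = d_config->nics[i].devid;

        for (i = 0; i < d_config->num_nics; i++)
            if (d_config->nics[i].devid < 0)
                d_config->nics[i].devid = ++max_devid;
    }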
Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Wei Liu [Tue, 17 Dec 2013 22:53:45 +0000 (22:53 +0000)]
xl: create VFB for PV guest when VNC is specified
This replicates a Xend behavior. When you specify 'vnc=1' and there's no
'vfb=[]' in a PV guest's config file, xl parses all top level VNC options and
creates a VFB for you.
Reported-by: Konrad Wilk <konrad.wilk@oracle.com> Signed-off-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Anthony PERARD [Wed, 8 Jan 2014 08:17:55 +0000 (09:17 +0100)]
firmware: change level-triggered GPE event to an edge one for qemu-xen
This should help to reduce a CPU hotplug race window in which a CPU hotplug
event will not be seen by the OS.
When hotplugging more than one vcpu, some of those vcpus might not be
seen as plugged by the guest.
This is what currently happens:
1. hw adds a cpu, sets the GPE.2 bit and sends an SCI
2. OSPM gets the SCI, reads GPE00.sts and masks the GPE.2 bit in GPE00.en
3. OSPM executes _L02 (the level-triggered event associated with cpu hotplug)
4. hw adds a second cpu and sets the GPE.2 bit, but no SCI is asserted
since GPE00.en masks the event
5. OSPM resets the GPE.2 bit in GPE00.sts and unmasks it in GPE00.en
As a result, the event from step 4 is lost: step 5 clears it, and the OS
never sees the second added cpu.
For each General-Purpose Event, OSPM:
1. Disables the interrupt source (GPEx_BLK EN bit).
2. If an edge event, clears the status bit.
3. Performs one of the following:
* Dispatches to an ACPI-aware device driver.
* Queues the matching control method for execution.
* Manages a wake event using device _PRW objects.
4. If a level event, clears the status bit.
5. Enables the interrupt source.
So, by using an edge-triggered General-Purpose Event instead of a
level-triggered GPE, OSPM is far less likely to clear the status bit set by
the addition of the second CPU, and at step 5 QEMU will resend an interrupt
if the status bit is set.
This description also applies to PCI hotplug, since the same steps are
followed by QEMU, so we change the GPE event type for PCI hotplug as well.
This does not apply to qemu-xen-traditional, because it does not resend
an interrupt as a result of step 5 when one is needed.
Patch and description inspired by SeaBIOS commit
9c6635bd48d39a1d17d0a73df6e577ef6bd0037c ("Replace level gpe event with edge
gpe event for hot-plug handlers") from Igor Mammedov <imammedo@redhat.com>.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Keir Fraser <keir@xen.org>
Jan Beulich [Wed, 8 Jan 2014 08:04:48 +0000 (09:04 +0100)]
rename XENMEM_add_to_physmap_{range => batch}
The use of "range" here wasn't really correct - there are no ranges
involved. As the comment in the public header already correctly said,
all this is about is batching of XENMEM_add_to_physmap calls (with
the addition of having a way to specify a foreign domain for
XENMAPSPACE_gmfn_foreign).
Suggested-by: Ian Campbell <Ian.Campbell@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
Bob Liu [Thu, 12 Dec 2013 11:05:15 +0000 (19:05 +0800)]
tmem: check the return value of copy to guest
Use copy_to_guest_offset()/copy_to_guest() directly and check their return
values.
This also fixes CID 1132754 and 1132755:
"Unchecked return value
If the function returns an error value, the error value may be mistaken for a
normal value. In tmem_copy_to_client_buf_offset: Value returned from a function
is not checked for errors before being used (CWE-252)"
And CID 1055125, 1055126, 1055127, 1055128, 1055129, 1055130
"Unchecked return value
If the function returns an error value, the error value may be mistaken for a
normal value. In <functions changed>: Value returned from a function is not
checked for errors before being used (CWE-252)"
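The resulting pattern, sketched in C (hedged: copy_to_guest() returns the number of bytes not copied; the buffer names here are illustrative):

    /* Fail the tmem operation with -EFAULT instead of silently
     * ignoring a partial copy into the guest's buffer. */
    if ( copy_to_guest(clibuf, tmem_va, PAGE_SIZE) )
        return -EFAULT;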
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Bob Liu [Thu, 12 Dec 2013 11:05:11 +0000 (19:05 +0800)]
tmem: cleanup: drop useless functions from header file
There are several one-line functions in tmem_xen.h which are useless; this
patch embeds them into tmem.c directly.
Also change 'void *tmem' in struct domain to 'struct client *tmem_client',
in order to make things more straightforward.
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Keir Fraser <keir@xen.org>
Bob Liu [Thu, 12 Dec 2013 11:05:08 +0000 (19:05 +0800)]
tmem: cleanup: drop tmem_lock_all
tmem_lock_all is used for debugging only; remove it from upstream to make
the tmem source code more readable and easier to maintain.
And since no_evict is meaningless without tmem_lock_all, this patch removes
it as well.
[ Coverity: Two threads will be stuck waiting forever if each holds a lock
the other needs to acquire. In alloc_heap_pages: threads may try to acquire
two locks in different orders, potentially causing deadlock (CWE-833). ]
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Bob Liu [Thu, 12 Dec 2013 11:05:04 +0000 (19:05 +0800)]
tmem: cleanup: drop useless parameters from put/get page
Tmem only operates on whole pages, so the tmem_offset, pfn_offset and len
parameters of do_tmem_put/get() are meaningless: all callers use the same
values (tmem_offset=0, pfn_offset=0, len=PAGE_SIZE).
This patch simplifies tmem by dropping those useless parameters and using
the default values directly.
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
David Vrabel [Wed, 8 Jan 2014 07:44:23 +0000 (08:44 +0100)]
evtchn/fifo: don't corrupt queues if an old tail is linked
An event may still be the tail of a queue even if the queue is now
empty (an 'old tail' event). There is logic to handle the case when
this old tail event needs to be added to the now empty queue (by
checking for q->tail == port).
However, this does not cover all cases.
1. An old tail may be re-added simultaneously with another event.
LINKED is set on the old tail, and the other CPU may misinterpret
this as the old tail still being valid and set LINK instead of
HEAD. All events on this queue will then be lost.
2. If the old tail event on queue A is moved to a different queue B
(by changing its VCPU or priority), the event may then be linked
onto queue B. When another event is linked onto queue A it will
check the old tail, see that it is linked (but on queue B) and
overwrite the LINK field, corrupting both queues.
When an event is linked, save the VCPU id and priority of the queue it
is being linked onto. Use these when linking an event to check whether it
is an unlinked old tail event. If it is an old tail event, the old
queue is empty, and old_q->tail is invalidated to ensure that adding another
event to old_q will update HEAD. The tail is invalidated by setting
it to 0, since event 0 is never linked.
The old_q->lock is held while setting LINKED, to avoid the race with
the test of LINKED in evtchn_fifo_set_link().
Since an event channel may move queues after old_q->lock is acquired,
we must check that we hold the correct lock, and retry if not. Since
changes of VCPU or priority are expected to be rare events that are
serialized in the guest, we try at most 3 times before dropping the
event. This prevents a malicious guest from repeatedly adjusting
priority to prevent another domain from acquiring old_q->lock.
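A hedged C sketch of the lock-and-retry idea (simplified: IRQ-flag handling is omitted, and evtchn_old_queue() is a hypothetical helper mapping the saved (vcpu, priority) pair to its queue):

    /* Take the lock of the queue an event was last linked onto. The
     * event may move to a different queue between reading the queue
     * and acquiring its lock, so re-check under the lock and retry,
     * giving up after 3 attempts. */
    static struct evtchn_fifo_queue *lock_old_queue(struct evtchn *evtchn)
    {
        struct evtchn_fifo_queue *q, *old_q;
        unsigned int attempt;

        for ( attempt = 0; attempt < 3; attempt++ )
        {
            old_q = evtchn_old_queue(evtchn);
            spin_lock(&old_q->lock);

            q = evtchn_old_queue(evtchn);   /* re-read under the lock */
            if ( old_q == q )
                return old_q;               /* correct lock is held */

            spin_unlock(&old_q->lock);      /* event moved: retry */
        }
        return NULL;                        /* caller drops the event */
    }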
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Keir Fraser <keir@xen.org>
David Vrabel [Wed, 8 Jan 2014 07:43:36 +0000 (08:43 +0100)]
evtchn/fifo: initialize priority when events are bound
Event channel ports that are reused, or that were not in the initial
bucket, would have a non-default priority.
Add an init evtchn_port_op hook, and use it to set the priority when
an event channel is bound.
Within this new evtchn_fifo_init() call, also check whether the event is
already on a queue and print a warning, as such an event may have its
first event delivered on a queue with the wrong VCPU or priority.
The guest is expected to prevent this (if it cares) by not unbinding
events that are still linked.
Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Keir Fraser <keir@xen.org>
Jan Beulich [Tue, 7 Jan 2014 15:01:14 +0000 (16:01 +0100)]
IOMMU: make page table deallocation preemptible
This too can take an arbitrary amount of time.
In fact, the bulk of the work is being moved to a tasklet, as handling
the necessary preemption logic inline seems close to impossible, given
that the teardown may also be invoked on error paths.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Ian Campbell [Fri, 20 Dec 2013 15:08:08 +0000 (15:08 +0000)]
xen: arm: context switch the aux memory attribute registers
We appear to have somehow missed these. Linux doesn't actually use them, and
none of the processors I've looked at actually defines any bits in them (so
they are UNK/SBZP), but it is good form to context switch them anyway.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Julien Grall <julien.grall@linaro.org>
AMD/IOMMU: fix infinite loop due to ivrs_bdf_entries larger than 16-bit value
Certain AMD systems can have up to 0x10000 ivrs_bdf_entries.
However, the loop variable (bdf) was declared as u16, which causes an
infinite loop when parsing an IOMMU event log with an IO_PAGE_FAULT event.
This patch changes the variable to u32 instead.
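A minimal C illustration of the wraparound (not the actual Xen loop):

    #include <stdint.h>

    static void scan_bdfs(uint32_t ivrs_bdf_entries /* may be 0x10000 */)
    {
        /* Broken: a 16-bit counter wraps from 0xffff back to 0 and can
         * never reach 0x10000, so this loop never terminates. */
        for ( uint16_t bdf = 0; bdf < ivrs_bdf_entries; bdf++ )
            ;

        /* Fixed: widen the counter so it can actually reach the bound. */
        for ( uint32_t bdf = 0; bdf < ivrs_bdf_entries; bdf++ )
            ;
    }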
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Tue, 7 Jan 2014 13:59:31 +0000 (14:59 +0100)]
VTD/DMAR: free() correct pointer on error from acpi_parse_one_atsr()
Free the allocated structure rather than the ACPI table ATS entry.
On further analysis, there is another memory leak: acpi_parse_dev_scope()
can allocate scope->devices and then return -ENOMEM. All callers of
acpi_parse_dev_scope() would then free the underlying structure, losing the
pointer.
These errors can only actually be reached through acpi_parse_dev_scope()
(which passes type = DMAR_TYPE), but I am quite surprised Coverity didn't
spot it.
Coverity-ID: 1146949 Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Tue, 7 Jan 2014 13:57:15 +0000 (14:57 +0100)]
AMD/iommu_detect: don't leak iommu structure on error paths
Tweak the logic slightly to return the real errors from
get_iommu_{,msi_}capabilities(), which at the moment is no functional change.
Coverity-ID: 1146950 Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tsahee Zidenberg [Sun, 22 Dec 2013 10:59:57 +0000 (12:59 +0200)]
ns16550: support ns16550a
Ns16550a devices are ns16550 devices with additional capabilities.
Declare Xen compatible with this device, to be able to use unmodified
device trees.
Ian Jackson [Tue, 17 Dec 2013 18:35:18 +0000 (18:35 +0000)]
libxc: Document xenctrl.h event channel calls
Provide semantic documentation for how the libxc calls relate to the
hypervisor interface, and how they are to be used.
Also document the bug (present at least in Linux 3.12) that setting
the evtchn fd to nonblocking doesn't in fact make xc_evtchn_pending
nonblocking, and describe the appropriate workaround.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> CC: Jan Beulich <JBeulich@suse.com> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Document the event channel protocol in xenstore-paths.markdown, in the
section for ~/device/suspend/event-channel.
Protocol reverse-engineered from the commentary and commit messages of
  4539594d46f9 ("Add facility to get notification of domain suspend ...")
  17636f47a474 ("Teach xc_save to use event-channel-based ...")
and from the implementations in
  xc_save (current version)
  libxl (current version)
  linux-2.6.18-xen (mercurial 1241:2993033a77ca)
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> CC: Keir Fraser <keir@xen.org> CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Ian Jackson [Tue, 17 Dec 2013 18:35:16 +0000 (18:35 +0000)]
xen: Document that EVTCHNOP_bind_interdomain signals
EVTCHNOP_bind_interdomain signals the event channel. Document this.
Also explain the usual use pattern.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> CC: Keir Fraser <keir@xen.org> CC: Jan Beulich <JBeulich@suse.com>
Ian Jackson [Tue, 17 Dec 2013 18:35:15 +0000 (18:35 +0000)]
xen: Document XEN_DOMCTL_subscribe
Arguably this domctl is misnamed. But, for now, document its actual
behaviour (reverse-engineered from the code and found in the commit
message for 4539594d46f9) under its actual name.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> CC: Shriram Rajagopalan <rshriram@cs.ubc.ca> CC: Jan Beulich <JBeulich@suse.com>
Julien Grall [Tue, 17 Dec 2013 14:28:19 +0000 (14:28 +0000)]
xen/arm: Allow ballooning working with 1:1 memory mapping
In the absence of an IOMMU, dom0 must have a 1:1 memory mapping for all
of its guest physical addresses. When the balloon driver decides to give a
page back to the kernel, this page must have the same address as before;
otherwise we will lose the 1:1 mapping and break DMA-capable
devices.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Reviewed-by: Jan Beulich <jbeulich@suse.com> Cc: Keir Fraser <keir@xen.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Yang Zhang [Tue, 7 Jan 2014 13:30:47 +0000 (14:30 +0100)]
VMX: Eliminate cr3 save/loading exiting when UG enabled
With the unrestricted guest feature, no VM exit should be triggered when
the guest accesses cr3 in non-paging mode. This patch clears the cr3
save/load exiting bits in the VMCS control fields, to eliminate cr3-access
VM exits on UG-capable hardware.
Jan Beulich [Tue, 7 Jan 2014 13:21:48 +0000 (14:21 +0100)]
IOMMU: make page table population preemptible
Since this can take an arbitrary amount of time, the domctl initiating it,
as well as all involved code, must become aware that a continuation may be
required.
The subject domain's rel_mem_list is being (ab)used for this, in a way
similar to and compatible with broken page offlining.
Further, operations get slightly re-ordered in assign_device(): IOMMU
page tables now get set up _before_ the first device gets assigned, at
once closing a small timing window in which the guest may already see
the device but wouldn't be able to access it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Just like for all other hypercalls we shouldn't be modifying the input
structure - all of the fields are, even if not explicitly documented,
just inputs (the one OUT one really refers to the memory pointed to by
that handle rather than the handle itself).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Keir Fraser <keir@xen.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Jan Beulich [Fri, 20 Dec 2013 11:01:09 +0000 (12:01 +0100)]
fix XENMEM_add_to_physmap preemption handling
Just like for all other hypercalls we shouldn't be modifying the input
structure - all of the fields are, even if not explicitly documented,
just inputs.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Keir Fraser <keir@xen.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Matthew Daley [Sat, 30 Nov 2013 00:20:04 +0000 (13:20 +1300)]
xenstore: sanity check incoming message body lengths
This is for the client-side receiving messages from xenstored, so there
is no security impact, unlike XSA-72.
Coverity-ID: 1055449
Coverity-ID: 1056028 Signed-off-by: Matthew Daley <mattd@bugfuzz.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Julien Grall [Wed, 18 Dec 2013 16:54:08 +0000 (16:54 +0000)]
xen/arm: p2m: Fix hypercall preemption when a domain relinquishes its memory mappings
Commit 84f29a9 ("xen/arm: Add relinquish_p2m_mapping to remove reference on
every mapped page") doesn't correctly save the next gfn when the hypercall
is preempted: instead of storing the next gfn, it stores the next mfn.
Fix this by using 'addr' instead of 'maddr'.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Julien Grall [Tue, 17 Dec 2013 16:27:57 +0000 (16:27 +0000)]
xen/arm: Set foreign page type to p2m_map_foreign
Xen needs to know that the current page belongs to another domain. Also take
a reference to this page.
The current process to add a foreign page is:
1) get the page from the foreign p2m
2) take a reference on the page, with the foreign domain as parameter
3) add the page to the current domain's p2m
If the foreign domain drops the page:
- before 2), get_page will return NULL because the page no longer
belongs to that domain;
- after 2), the current domain already has a reference, so writes will
go to a stale page which has not yet been released. This can corrupt the
foreign domain.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Julien Grall [Tue, 17 Dec 2013 16:27:52 +0000 (16:27 +0000)]
xen/arm: Store p2m type in each page of the guest
Use the field 'avail' to store the type of the page. Rename it to 'type' for
convenience.
The information stored in this field will be retrieved in a future patch to
change the behaviour when the page is removed.
Also introduce guest_physmap_add_entry to map and set a specific p2m type for
a page.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Julien Grall [Tue, 17 Dec 2013 16:27:51 +0000 (16:27 +0000)]
xen/arm: Implement p2m_type_t as an enum
Until now, Xen didn't know the type of a page (RAM, foreign page, MMIO, ...).
Introduce p2m_type_t with basic types:
- p2m_invalid: Nothing is mapped here
- p2m_ram_rw: Normal read/write guest RAM
- p2m_ram_ro: Read-only guest RAM
- p2m_mmio_direct: Read/write mapping of device memory
- p2m_map_foreign: RAM page from foreign guest
- p2m_grant_map_rw: Read/write grant mapping
- p2m_grant_map_ro: Read-only grant mapping
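In C this amounts to roughly the following (a sketch built from the list above; the exact definition in Xen's headers may differ slightly):

    typedef enum {
        p2m_invalid = 0,    /* Nothing is mapped here */
        p2m_ram_rw,         /* Normal read/write guest RAM */
        p2m_ram_ro,         /* Read-only guest RAM */
        p2m_mmio_direct,    /* Read/write mapping of device memory */
        p2m_map_foreign,    /* RAM page from a foreign guest */
        p2m_grant_map_rw,   /* Read/write grant mapping */
        p2m_grant_map_ro,   /* Read-only grant mapping */
    } p2m_type_t;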
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Julien Grall [Tue, 17 Dec 2013 16:27:50 +0000 (16:27 +0000)]
xen/arm: move mfn_to_p2m_entry in arch/arm/p2m.c
The function mfn_to_p2m_entry will be extended in a following patch to handle
p2m_type_t, which would break compilation because p2m_type_t is not defined
there (interdependence between includes).
It's easier to move the function into arch/arm/p2m.c, and this is not harmful,
as the function is only used in that file.
Signed-off-by: Julien Grall <julien.grall@linaro.org> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Ian Campbell [Wed, 18 Dec 2013 13:39:14 +0000 (13:39 +0000)]
xen: arm: process XENMEM_add_to_physmap_range forwards not backwards.
Jan points out that processing the list backwards is rather counter intuitive
and that the effect of the hypercall can differ between forwards and backwards
processing (e.g. in the presence of duplicate idx or gpfn, which would be
unusual but as Jan says, users are a creative bunch)
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
Ian Campbell [Wed, 18 Dec 2013 11:54:46 +0000 (11:54 +0000)]
xen: arm: clarify cacheability requirements of hypercall arguments.
Accepting hypercall arguments which may be in either cached or uncached
memory is tricky and/or potentially slow, requiring a guest mapping lookup
to determine whether/when to do a cache clean or invalidate.
There are very few reasons, and no current use cases in practice, for a guest
to use uncached memory for its hypercall arguments. Therefore mandate that
all hypercall arguments must be mapped inner-cacheable.
Do not place any restriction on the outer-cacheability or on the cache
fill/flush strategy used.
If use cases arise then we can consider specific exemptions to this rule.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Ian Jackson [Tue, 26 Nov 2013 12:08:09 +0000 (12:08 +0000)]
libxl: Fix error handling in libxl__device_nic_from_xs_be
Previously, this function would leak the temporary returns from xs_read for
the handle and MAC address. Fix both of these, and the rest of the error
handling. This requires changing its return type and fixing the callers.
A READ_BACKEND macro is introduced here to make the code less repetitive.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
[ ijc -- spell out what the leaks were in the commit message ]
Andrew Cooper [Wed, 11 Dec 2013 15:47:42 +0000 (15:47 +0000)]
tools/libxc: Fix error checking for xc_get_{cpu,node}map_size() callers
c/s 2e82c18cd850592ae9a1f682eb93965a868b5f2f changed the error returns of
xc_get_{cpu,node}map_size() to include returning -1. This invalidated the
error checks in callers, which expected 0 to be the only error case.
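A hedged sketch of a corrected caller-side check (the surrounding code is illustrative):

    /* Both 0 and -1 now indicate failure, so treat anything
     * non-positive as an error. */
    int sz = xc_get_cpumap_size(xch);
    if ( sz <= 0 )
        goto out;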
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Campbell <Ian.Campbell@citrix.com> CC: Ian Jackson <Ian.Jackson@eu.citrix.com> CC: George Dunlap <george.dunlap@eu.citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> CC: Ian Campbell <Ian.Campbell@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
Jan Beulich [Tue, 17 Dec 2013 15:39:39 +0000 (16:39 +0100)]
x86/memshr: fix preemption in relinquish_shared_pages()
For one, should hypercall_preempt_check() return false the first time
it gets called, it would never have been called again (because count,
being checked for equality, never got reset to zero).
And then, if there were a huge range of unshared pages, count would not
get incremented at all, so in that case there would also never be any
preemption.
Fix this by using a biased increment (ratio 1:16 for unshared vs shared
pages), and by flushing the count to zero in case of a "false" return from
hypercall_preempt_check().
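A hedged C sketch of the pattern (the threshold, helper name and error code are illustrative, not the actual Xen code):

    unsigned long count = 0;

    for ( gfn = start; gfn <= end; gfn++ )
    {
        /* Biased 1:16 - skipping an unshared gfn is cheap, actually
         * unsharing a page is expensive, so the latter counts more. */
        count += gfn_is_shared(gfn) ? 16 : 1;

        if ( count >= PREEMPT_CHECK_THRESHOLD )
        {
            if ( hypercall_preempt_check() )
                return -EAGAIN;   /* arrange for a continuation */
            count = 0;            /* flush even on a "false" return */
        }

        /* ... unshare the page here if it was shared ... */
    }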
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>