Juergen Gross [Fri, 2 Oct 2020 11:20:41 +0000 (13:20 +0200)]
tools/libs: move official headers to common directory
Instead of each library having its own include directory, move the
official headers to tools/include. This drops the need to link those
headers into tools/include, and there is no longer any need for
library-specific include paths when building Xen.
While at it remove setting of the unused variable
PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
Jan Beulich [Fri, 23 Oct 2020 08:13:53 +0000 (10:13 +0200)]
x86emul: increase FPU save area in test harness/fuzzer
Running them on a system (or emulator) with AMX support requires this
to be quite a bit larger than 8k, to avoid triggering the assert() in
emul_test_init(). Bump all the way up to 16k right away.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 23 Oct 2020 08:13:14 +0000 (10:13 +0200)]
pci: cleanup MSI interrupts before removing device from IOMMU
Doing the MSI cleanup after removing the device from the IOMMU leads
to the following panic on AMD hardware:
Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
----[ Xen-4.13.1-10.0.3-d x86_64 debug=y Not tainted ]----
CPU: 3
RIP: e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[...]
Xen call trace:
[<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
[<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
[<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
[<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
[<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
[<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
[<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
[<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
[<ffff82d08038a432>] F lstar_enter+0x112/0x120
That's because the call to iommu_remove_device on AMD hardware will
remove the per-device interrupt remapping table, and hence the call to
pci_cleanup_msi done afterwards will find a null intremap table and
crash.
Reorder the calls so that MSI interrupts are torn down before removing
the device from the IOMMU.
Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables") Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
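The fixed ordering can be modelled with a toy example; everything below (types, function names, fields) is invented for illustration, not the real Xen API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the ordering issue: MSI cleanup consults the per-device
 * interrupt remapping table, so it must run before IOMMU removal frees
 * that table. */
struct dev {
    bool intremap_present;   /* freed by iommu_remove() on AMD */
    bool msi_active;
};

void cleanup_msi(struct dev *d)
{
    /* Corresponds to the assert() that fired in iommu_intr.c. */
    assert(d->intremap_present);
    d->msi_active = false;
}

void iommu_remove(struct dev *d)
{
    d->intremap_present = false;
}

/* Fixed ordering: tear down MSI before removing the device from the
 * IOMMU; the reverse order would trip the assert above. */
void remove_device(struct dev *d)
{
    cleanup_msi(d);
    iommu_remove(d);
}
```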
Jan Beulich [Fri, 23 Oct 2020 08:12:31 +0000 (10:12 +0200)]
evtchn: let evtchn_set_priority() acquire the per-channel lock
Some lock wants to be held to make sure the port doesn't change state,
but there's no point holding the per-domain event lock here. Switch to
using the finer grained per-channel lock instead (albeit as a downside
for the time being this requires disabling interrupts for a short
period of time).
FAOD this doesn't guarantee anything towards, in particular,
evtchn_fifo_set_pending(), as for interdomain channels that function
would be called with the remote side's per-channel lock held.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:11:46 +0000 (10:11 +0200)]
evtchn: rename and adjust guest_enabled_event()
The function isn't about an "event" in general, but about a vIRQ. The
function also failed to honor global vIRQ-s, instead assuming the caller
would pass vCPU 0 in such a case.
While at it also adjust the
- types the function uses,
- single user to make use of domain_vcpu().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:09:55 +0000 (10:09 +0200)]
evtchn: replace FIFO-specific header by generic private one
Having a FIFO specific header is not (or at least no longer) warranted
with just three function declarations left there. Introduce a private
header instead, moving there some further items from xen/event.h.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:07:56 +0000 (10:07 +0200)]
evtchn: avoid race in get_xen_consumer()
There's no global lock around the updating of this global piece of data.
Make use of cmpxchgptr() to avoid two entities racing with their
updates.
While touching the functionality, mark xen_consumers[] read-mostly (or
else the if() condition could use the result of cmpxchgptr(), writing to
the slot unconditionally).
The use of cmpxchgptr() here points out (by way of clang warning about
it) that its original use of const was slightly wrong. Adjust the
placement, or else undefined behavior of const qualifying a function
type will result.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
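A minimal sketch of the lock-free slot-claiming pattern, using C11 atomics in place of Xen's cmpxchgptr(); all names here are invented for the example, not the actual xen_consumers[] code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* A global, read-mostly table of registered consumer callbacks. */
typedef void (*consumer_fn)(unsigned int port);

#define NR_SLOTS 8
static _Atomic(consumer_fn) slots[NR_SLOTS];

static void dummy_a(unsigned int port) { (void)port; }
static void dummy_b(unsigned int port) { (void)port; }

/* Returns the claimed slot index, or -1 if the table is full. */
int claim_slot(consumer_fn fn)
{
    for (size_t i = 0; i < NR_SLOTS; i++) {
        consumer_fn expected = NULL;

        if (atomic_load(&slots[i]) == fn)
            return (int)i;   /* already registered */
        /* Only one of two racing claimants can swap NULL -> fn;
         * the loser simply moves on to the next slot. */
        if (atomic_compare_exchange_strong(&slots[i], &expected, fn))
            return (int)i;
    }
    return -1;
}
```

A plain "read slot, then write slot" sequence would let two CPUs both see NULL and both claim the same slot; the compare-and-swap makes the claim atomic.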
Jan Beulich [Fri, 23 Oct 2020 08:06:53 +0000 (10:06 +0200)]
IOMMU/EPT: avoid double flushing in shared page table case
While the flush coalescing optimization has been helping the non-shared
case, it has actually led to double flushes in the shared case (which
ought to be the more common one nowadays, at least): once from
*_set_entry() and a second time up the call tree from wherever the
overriding flag gets played with. In alignment with XSA-346, suppress
flushing in this case.
Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
flushes: no new mapping has been added for "idx".
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Fri, 23 Oct 2020 08:06:20 +0000 (10:06 +0200)]
x86/mm: avoid playing with directmap when self-snoop can be relied upon
The set of systems affected by XSA-345 would have been smaller if we had
this in place already: When the processor is capable of dealing with
mismatched cacheability, there's no extra work we need to carry out.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:05:29 +0000 (10:05 +0200)]
x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case idx == gpfn
In this case up to now we've been freeing the page (through
guest_remove_page(), with the actual free typically happening at the
put_page() later in the function), but then failing the call on the
subsequent GFN consistency check. However, in my opinion such a request
should complete as an "expensive" no-op (leaving aside the potential
unsharing of the page).
This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
xenmem_add_to_physmap_one()") would really have needed an XSA, despite
its description claiming otherwise, as in release builds we then put in
place a P2M entry referencing the about to be freed page.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Bertrand Marquis [Fri, 16 Oct 2020 13:58:47 +0000 (14:58 +0100)]
xen/arm: Print message if reset did not work
If for some reason the hardware reset is not working, print a message to
the user every 5 seconds to warn them that the system did not reset
properly and Xen is still looping.
The message is printed indefinitely, so that someone connecting to a
serial console with no history would still see it within 5 seconds.
arm: optee: don't print warning about "wrong" RPC buffer
The OP-TEE mediator tracks the cookie value of the last buffer which was
requested by OP-TEE. This tracked value serves one important purpose: if
OP-TEE wants to request another buffer, we know that it finished
importing the previous one and we can free the page list associated with
it.
Also, we had a false premise that OP-TEE frees requested buffers in
reverse order. So we checked rpc_data_cookie during handling of the
OPTEE_RPC_CMD_SHM_FREE call and printed a warning if the cookie of the
buffer which was requested to be freed differed from the last allocated
one.
During testing of RPMB FS services I discovered that the RPMB code frees
request and response buffers in the same order as it allocated them. And
this is perfectly fine, actually.
So, we are removing mentioned warning message in Xen, as it is perfectly
normal to free buffers in arbitrary order.
Jan Beulich [Tue, 20 Oct 2020 12:23:12 +0000 (14:23 +0200)]
AMD/IOMMU: ensure suitable ordering of DTE modifications
DMA and interrupt translation should be enabled only after other
applicable DTE fields have been written. Similarly when disabling
translation or when moving a device between domains, translation should
first be disabled, before other entry fields get modified. Note however
that the "moving" aspect doesn't apply to the interrupt remapping side,
as domain specifics are maintained in the IRTEs here, not the DTE. We
also never disable interrupt remapping once it got enabled for a device
(the respective argument passed is always the immutable iommu_intremap).
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:22:52 +0000 (14:22 +0200)]
AMD/IOMMU: update live PTEs atomically
Updating a live PTE bitfield by bitfield risks the compiler re-ordering
the individual updates as well as splitting individual updates into
multiple memory writes. Construct the new entry fully in a local
variable, do the check to determine the flushing needs on the thus
established new entry, and then write the new entry by a single insn.
Similarly using memset() to clear a PTE is unsafe, as the order of
writes the function does is, at least in principle, undefined.
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
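The single-store pattern can be sketched as follows; the field layout is illustrative, not the real amd_iommu_pte, with the union giving the bitfield view a "raw" counterpart:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative PTE: a bitfield view plus a raw 64-bit view. */
union pte {
    uint64_t raw;
    struct {
        uint64_t present:1;
        uint64_t ign:11;
        uint64_t mfn:40;
        uint64_t read:1;
        uint64_t write:1;
    };
};

/* Construct the new entry fully in a local variable... */
union pte make_pte(uint64_t mfn, bool r, bool w)
{
    union pte pte = { .raw = 0 };   /* no memset() of a live entry */

    pte.present = 1;
    pte.mfn = mfn;
    pte.read = r;
    pte.write = w;
    return pte;
}

/* ...then publish it with a single 64-bit store. Updating the live
 * entry field by field would let the compiler reorder or split the
 * writes, exposing transiently inconsistent state to the IOMMU. */
void write_pte(volatile union pte *live, union pte e)
{
    live->raw = e.raw;
}
```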
Jan Beulich [Tue, 20 Oct 2020 12:22:26 +0000 (14:22 +0200)]
AMD/IOMMU: convert amd_iommu_pte from struct to union
This is to add a "raw" counterpart to the bitfield equivalent. Take the
opportunity and
- convert fields to bool / unsigned int,
- drop the naming of the reserved field,
- shorten the names of the ignored ones.
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:21:32 +0000 (14:21 +0200)]
IOMMU: hold page ref until after deferred TLB flush
When moving around a page via XENMAPSPACE_gmfn_range, deferring the TLB
flush for the "from" GFN range requires that the page remains allocated
to the guest until the TLB flush has actually occurred. Otherwise a
parallel hypercall to remove the page would only flush the TLB for the
GFN it has been moved to, but not the one it was mapped at originally.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Reported-by: Julien Grall <jgrall@amazon.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Tue, 20 Oct 2020 12:21:09 +0000 (14:21 +0200)]
IOMMU: suppress "iommu_dont_flush_iotlb" when about to free a page
Deferring flushes to a single, wide range one - as is done when
handling XENMAPSPACE_gmfn_range - is okay only as long as
pages don't get freed ahead of the eventual flush. While the only
function setting the flag (xenmem_add_to_physmap()) suggests by its name
that it's only mapping new entries, in reality the way
xenmem_add_to_physmap_one() works means an unmap would happen not only
for the page being moved (but not freed) but, if the destination GFN is
populated, also for the page being displaced from that GFN. Collapsing
the two flushes for this GFN into just one (and even more so deferring
it to a batched invocation) is not correct.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Julien Grall <jgrall@amazon.com>
Hongyan Xia [Sat, 11 Jan 2020 21:57:43 +0000 (21:57 +0000)]
x86/mm: Prevent some races in hypervisor mapping updates
map_pages_to_xen will attempt to coalesce mappings into 2MiB and 1GiB
superpages if possible, to maximize TLB efficiency. This means both
replacing superpage entries with smaller entries, and replacing
smaller entries with superpages.
Unfortunately, while some potential races are handled correctly,
others are not. These include:
1. When one processor modifies a sub-superpage mapping while another
processor replaces the entire range with a superpage.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and
B.
* A walks the pagetables, gets a pointer to L2.
* B replaces L3[N] with a 1GiB mapping.
* B frees L2
* A writes L2[M] #
This race is exacerbated by the fact that virt_to_xen_l[21]e doesn't
handle higher-level superpages properly: If you call virt_to_xen_l2e
on a virtual address within an L3 superpage, you'll either hit a BUG()
(most likely), or get a pointer into the middle of a data page; same
with virt_to_xen_l1e on a virtual address within either an L3 or L2
superpage.
So take the following example:
* A reads pl3e and discovers it to point to an L2.
* B replaces L3[N] with a 1GiB mapping
* A calls virt_to_xen_l2e() and hits the BUG_ON() #
2. When two processors simultaneously try to replace a sub-superpage
mapping with a superpage mapping.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and B,
both trying to replace L3[N] with a superpage.
* A walks the pagetables, gets a pointer to pl3e, and takes a copy ol3e pointing to L2.
* B walks the pagetables, gets a pointer to pl3e, and takes a copy ol3e pointing to L2.
* A writes the new value into L3[N]
* B writes the new value into L3[N]
* A recursively frees all the L1's under L2, then frees L2
* B recursively double-frees all the L1's under L2, then double-frees L2 #
Fix this by grabbing a lock for the entirety of the mapping update
operation.
Rather than grabbing map_pgdir_lock for the entire operation, however,
repurpose the PGT_locked bit from L3's page->type_info as a lock.
This means that rather than locking the entire address space, we
"only" lock a single 512GiB chunk of hypervisor address space at a
time.
There was a proposal for a lock-and-reverify approach, where we walk
the pagetables to the point where we decide what to do; then grab the
map_pgdir_lock, re-verify the information we collected without the
lock, and finally make the change (starting over again if anything had
changed). Without being able to guarantee that the L2 table wasn't
freed, however, that means every read would need to be considered
potentially unsafe. Thinking carefully about that is probably
something that wants to be done in public, not under time pressure.
This is part of XSA-345.
Reported-by: Hongyan Xia <hongyxia@amazon.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
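A minimal single-slot model of the locking fix; the names and the spinlock are invented for illustration (the real code repurposes the PGT_locked bit in the L3 page's type_info, not a separate lock object):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define L2_ENTRIES 512

struct l3_slot {
    atomic_flag lock;     /* stands in for PGT_locked */
    uint64_t *l2;         /* NULL once replaced by a superpage */
    uint64_t superpage;
};

static void slot_lock(struct l3_slot *s)
{
    while (atomic_flag_test_and_set(&s->lock))
        ;                 /* spin */
}

static void slot_unlock(struct l3_slot *s)
{
    atomic_flag_clear(&s->lock);
}

/* Replace the slot's L2 table with a 1GiB superpage mapping. */
void replace_with_superpage(struct l3_slot *s, uint64_t map)
{
    slot_lock(s);
    uint64_t *old = s->l2;
    s->l2 = NULL;
    s->superpage = map;
    slot_unlock(s);
    free(old);            /* safe: no walker can still hold it */
}

/* Update one sub-entry; fails if a superpage has taken over, instead
 * of writing through a stale pointer into freed memory (the race in
 * example 1 above). */
int update_l2_entry(struct l3_slot *s, unsigned int idx, uint64_t val)
{
    int rc = -1;

    slot_lock(s);
    if (s->l2) {          /* re-check under the lock */
        s->l2[idx] = val;
        rc = 0;
    }
    slot_unlock(s);
    return rc;
}
```

Because walk, modify and free all happen under the per-slot lock, processor A can never keep using an L2 table that processor B is concurrently replacing and freeing.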
Jan Beulich [Tue, 20 Oct 2020 06:54:59 +0000 (08:54 +0200)]
SVM: avoid VMSAVE in ctxt-switch-to
Of the state saved by the insn and reloaded by the corresponding VMLOAD
- TR and syscall state are invariant while having Xen's state loaded,
- sysenter is unused altogether by Xen,
- FS, GS, and LDTR are not used by Xen and get suitably set in PV
context switch code.
Note that state is suitably populated in _svm_cpu_up(); a minimal
respective assertion gets added.
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Igor Druzhinin [Tue, 20 Oct 2020 06:54:23 +0000 (08:54 +0200)]
hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
ACPI specification contains statements describing memory marked with regular
"ACPI data" type as reclaimable by the guest. Although the guest shouldn't
really do it if it wants kexec or similar functionality to work, there
could still be ambiguities in treating these regions as potentially regular
RAM.
One such example is SeaBIOS, which currently reports "ACPI data" regions
as RAM to the guest in its e801 call, which it might have the right to do
as any user of this call is expected to be ACPI-unaware. But a QEMU
bootloader later seems to ignore that fact and instead uses e801 to find
a place for the initrd, which causes the tables to be erased. While
arguably the QEMU bootloader or SeaBIOS need to be fixed / improved here,
that is just one example of the potential problems from using a
reclaimable memory type.
Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is
described by the spec as non-reclaimable (so cannot ever be treated like RAM).
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 20 Oct 2020 06:53:53 +0000 (08:53 +0200)]
xen-detect: make CPUID fallback CPUID-faulting aware
Relying on presence / absence of hypervisor leaves in raw / escaped
CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
capable hardware. Utilize a PV-only feature flag to avoid false positive
HVM detection.
While at it also short circuit the main detection loop: For PV, only
the base group of leaves can possibly hold hypervisor information.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Tue, 20 Oct 2020 06:52:53 +0000 (08:52 +0200)]
EFI: free unused boot mem in at least some cases
Address at least the primary reason why 52bba67f8b87 ("efi/boot: Don't
free ebmalloc area at all") was put in place: Make xen_in_range() aware
of the freed range. This is in particular relevant for EFI-enabled
builds not actually running on EFI, as the entire range will be unused
in this case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Bertrand Marquis [Thu, 15 Oct 2020 09:16:09 +0000 (10:16 +0100)]
tools/xenpmd: Fix gcc10 snprintf warning
Add a check for snprintf return code and ignore the entry if we get an
error. This should in fact never happen and is more a trick to make gcc
happy and prevent compilation errors.
This is solving the following gcc warning when compiling for arm32 host
platforms with optimization activated:
xenpmd.c:92:37: error: '%s' directive output may be truncated writing
between 4 and 2147483645 bytes into a region of size 271
[-Werror=format-truncation=]
This is also solving the following Debian bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
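The snprintf return-code check can be sketched like this (a hypothetical helper, not the actual xenpmd.c code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Check snprintf()'s return value and treat a negative result or
 * truncation as an error, so gcc's -Wformat-truncation worst-case
 * analysis of '%s' has nothing left to warn about. */
int build_path(char *buf, size_t size, const char *dir, const char *file)
{
    int n = snprintf(buf, size, "%s/%s", dir, file);

    if (n < 0 || (size_t)n >= size)
        return -1;   /* error or truncated output: ignore this entry */
    return 0;
}
```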
Elliott Mitchell [Mon, 12 Oct 2020 01:11:39 +0000 (18:11 -0700)]
tools/python: Pass linker to Python build process
Unexpectedly the environment variable which needs to be passed is
$LDSHARED and not $LD. Otherwise Python may find the build `ld` instead
of the host `ld`.
Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
it can load at runtime, not executables.
This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
to $LDFLAGS which breaks many linkers.
Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com> Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
tools/libs/stat: use memcpy instead of strncpy in getBridge
Use memcpy in getBridge to prevent gcc warnings about truncated
strings. We know that we might truncate it, so the gcc warning
here is wrong.
Revert previous change changing buffer sizes as bigger buffers
are not needed.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
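A sketch of the memcpy pattern, with a hypothetical helper rather than the actual getBridge() code:

```c
#include <assert.h>
#include <string.h>

/* Deliberate, bounded truncation via memcpy with explicit NUL
 * termination; gcc's -Wstringop-truncation does not flag this the
 * way it flags strncpy(), and unlike strncpy() the result is always
 * terminated. */
void copy_bridge_name(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);

    if (len >= dst_size)
        len = dst_size - 1;   /* we know we might truncate */
    memcpy(dst, src, len);
    dst[len] = '\0';
}
```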
Andrew Cooper [Mon, 5 Oct 2020 11:46:30 +0000 (12:46 +0100)]
x86/smpboot: Don't unconditionally call memguard_guard_stack() in cpu_smpboot_alloc()
cpu_smpboot_alloc() is designed to be idempotent with respect to partially
initialised state. This occurs for S3 and CPU parking, where enough state to
handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
when we otherwise want to offline the CPU.
For simplicity between various configurations, Xen always uses shadow
stack mappings (Read-only + Dirty) for the guard page, irrespective of
whether CET-SS is enabled.
Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
by first writing out the supervisor shadow stack tokens with plain writes,
then changing the mapping to being read-only.
This ordering is strictly necessary to configure the BSP, which cannot have
the supervisor tokens be written with WRSS.
Instead of calling memguard_guard_stack() unconditionally, call it only when
actually allocating a new stack. Xenheap allocates are guaranteed to be
writeable, and the net result is idempotency WRT configuring stack_base[].
Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks") Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 12 Oct 2020 13:58:45 +0000 (14:58 +0100)]
x86/ucode/intel: Improve description for gathering the microcode revision
Obtaining the microcode revision on Intel CPUs is complicated for backwards
compatibility reasons. Update apply_microcode() to use a slightly more
efficient CPUID invocation, now that the documentation has been updated to
confirm that any CPUID instruction is fine, not just CPUID.1.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 12 Oct 2020 12:24:31 +0000 (13:24 +0100)]
x86/traps: 'Fix' safety of read_registers() in #DF path
All interrupts and exceptions pass a struct cpu_user_regs up into C. This
contains the legacy vm86 fields from 32bit days, which are beyond the
hardware-pushed frame.
Accessing these fields is generally illegal, as they are logically out of
bounds for anything other than an interrupt/exception hitting ring1/3 code.
show_registers() unconditionally reads these fields, but the content is
discarded before use. This is benign right now, as all parts of the stack are
readable, including the guard pages.
However, read_registers() in the #DF handler writes to these fields as part of
preparing the state dump, and being IST, hits the adjacent stack frame.
This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
layout to support shadow stacks" repositioned the #DF stack to be adjacent to
the guard page, which turns this OoB write into a fatal pagefault:
(XEN) *** DOUBLE FAULT ***
(XEN) ----[ Xen-4.15-unstable x86_64 debug=y Tainted: C ]----
(XEN) ----[ Xen-4.15-unstable x86_64 debug=y Tainted: C ]----
(XEN) CPU: 4
(XEN) RIP: e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
(XEN) RFLAGS: 0000000000050086 CONTEXT: hypervisor (d1v0)
...
(XEN) Xen call trace:
(XEN) [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
(XEN) [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
(XEN) [<ffff82d04039acd7>] F double_fault+0x107/0x110
(XEN)
(XEN) Pagetable walk from ffff830236f6d008:
(XEN) L4[0x106] = 80000000bfa9b063 ffffffffffffffff
(XEN) L3[0x008] = 0000000236ffd063 ffffffffffffffff
(XEN) L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
(XEN) L1[0x16d] = 8000000236f6d161 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 4:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0003]
(XEN) Faulting linear address: ffff830236f6d008
(XEN) ****************************************
(XEN)
and rendering the main #DF analysis broken.
The proper fix is to delete cpu_user_regs.es and later fields, so no
interrupt/exception path can access OoB, but this needs disentangling
from the PV ABI first.
Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Thu, 15 Oct 2020 15:18:25 +0000 (17:18 +0200)]
tools/gdbsx: drop stray recursion into tools/include/
Doing so isn't appropriate here - this gets done very early in the build
process. If the directory is meant to be buildable on its own,
different arrangements would be needed.
The issue has become more pronounced by 47654a0d7320 ("tools/include:
fix (drop) dependencies of when to populate xen/"), but was there before
afaict.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
Jan Beulich [Thu, 15 Oct 2020 10:30:01 +0000 (12:30 +0200)]
EFI: further "need_to_free" adjustments
When processing "chain" directives, the previously loaded config file
gets freed. This needs to be recorded accordingly such that no error
path would try to free the same block of memory a 2nd time.
Furthermore, neither .addr nor .size being zero has any meaning towards
the need to free an allocated chunk anymore. Drop (from read_file()) and
replace (in Arm's efi_arch_use_config_file(), to sensibly retain the
comment) respective assignments.
Chen Yu [Thu, 15 Oct 2020 10:29:11 +0000 (12:29 +0200)]
x86/mwait-idle: customize IceLake server support
On the ICX platform, C1E auto-promotion is enabled by default.
As a result, the CPU might fall into C1E more often than on previous
platforms. So disable C1E auto-promotion and expose C1E as a separate
idle state.
Besides C1 and C1E, the exit latency of C6 was measured by a dedicated
tool. However, the exit latency (41us) exposed by _CST is much smaller
than the one we measured (128us). This is probably because _CST uses the
exit latency when woken up from PC0+C6, whereas PC6+C6 is what was
measured. Choose the latter as we need the longest latency in theory.
Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[Linux commit a472ad2bcea479ba068880125d7273fc95c14b70] Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Michal Orzel [Wed, 14 Oct 2020 10:05:41 +0000 (12:05 +0200)]
xen/arm: Document the erratum #853709 related to Cortex A72
The Cortex-A72 erratum #853709 is the same as the Cortex-A57
erratum #852523. As the latter is already worked around, we only
need to update the documentation.
Signed-off-by: Michal Orzel <michal.orzel@arm.com>
[julieng: Reworded the commit message] Reviewed-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Jan Beulich [Wed, 14 Oct 2020 12:13:16 +0000 (14:13 +0200)]
EFI/Arm64: don't clobber DTB pointer
read_section() needs to be more careful: efi_arch_use_config_file()
may have found a DTB file (but without modules), and there may be no DTB
specified in the EFI config file. In this case the pointer to the blob
must not be overwritten with NULL when no ".dtb" section is present
either.
Fixes: 8a71d50ed40b ("efi: Enable booting unified hypervisor/kernel/initrd images") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:11:49 +0000 (14:11 +0200)]
kexec: some #include adjustments
In the context of working on x86's elf_core_save_regs() I noticed there
were far more source files getting rebuilt than I would have expected.
While the main offender looks to have been fixmap.h including kexec.h,
also drop use of elfcore.h from kexec.h.
While adjusting machine_kexec.c also replace use of guest_access.h by
domain_page.h.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Note that this is done in the generic MSR handler, so PV guests will
also get 0 back when trying to read the MSR. There doesn't seem to be
much value in handling the MSR for HVM guests only.
Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 14 Oct 2020 12:05:10 +0000 (14:05 +0200)]
x86/vLAPIC: vlapic_init() runs only once for a vCPU
Hence there's no need to guard allocation / mapping by checks whether
the same action has been done before. I assume this was a transient
change which should have been undone before 509529e99148 ("x86 hvm: Xen
interface and implementation for virtual S3") got committed.
While touching this code, drop the pretty useless dprintk()-s.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:03:38 +0000 (14:03 +0200)]
x86: fix resource leaks on arch_vcpu_create() error path
{hvm,pv}_vcpu_initialise() have always kind of been meant to be the
final possible source of errors in arch_vcpu_create(), hence not
requiring any unrolling of what they've done on the error path. (Of
course this may change once the various involved paths all have become
idempotent.)
But even beyond this aspect I think it is more logical to do policy
initialization ahead of the calling of these two functions, as they may
in principle want to access it.
Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:02:20 +0000 (14:02 +0200)]
tools/include: adjust x86-specific population of xen/
There's no need to use a shell loop construct here - ln's destination
can be specified as just the intended directory, as we don't mean to
change the names of any of the files. Also drop XEN_LIB_X86_INCLUDES for
having a single use only, and don't pass -f to ln, to allow noticing
name collisions right away.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:43 +0000 (14:01 +0200)]
tools/include: adjust population of public headers into xen/
Use a wildcard also for the subdirectories, drop XEN_PUBLIC_INCLUDES for
having a single use only, don't pass -f to ln (to allow noticing name
collisions right away), and add trailing slashes to ln's destination.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:25 +0000 (14:01 +0200)]
tools/include: fix (drop) dependencies of when to populate xen/
Making the population of xen/ depend on the time stamps of a subset of
the headers put there is error prone. The creation of a few dozen
symlinks doesn't take a meaningful amount of time (compared to the
overall building of tools/), and hence - to be on the safe side - should
simply be done always. Convert the goal to a phony one and drop its
dependencies, effectively taking further what 8d8d7d6b3dc1 ("tools: fix
linking hypervisor includes to tools include directory") had already
attempted.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:00 +0000 (14:01 +0200)]
tools/include: adjust population of acpi/
Limit what gets exposed - in particular cpufreq/ and apei.h are
hypervisor private headers which the tool stack building shouldn't see
or use. Also don't pass -f to ln, to allow noticing name collisions
right away.
Additionally acpi/ also has been in need of deleting at the start of
the rule, or alternatively the respective ln would have needed to also
be passed -n.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
This removes the dependency on the xen subdirectory, preventing use of
a wrong configuration file when the xen subdirectory is duplicated for
compilation tests.
BASEDIR is set in xen/lib/x86/Makefile, as this Makefile is directly
called from the tools build and install process, where BASEDIR is not
otherwise set.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
There is a standard format for generated Go code header comments, as set
by [1]. Modify gengotypes.py to follow this standard, and use the
additional
// source: <IDL file basename>
convention used by protoc-gen-go.
This change is motivated by the fact that since 41aea82de2, the comment
would include the absolute path to libxl_types.idl, therefore creating
unintended diffs when generating code across different machines. This
approach fixes that problem.
[1] https://github.com/golang/go/issues/13560
Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Nick Rosbrook [Sun, 11 Oct 2020 23:31:24 +0000 (19:31 -0400)]
golang/xenlight: do not hard code libxl dir in gengotypes.py
Currently, in order to 'import idl' in gengotypes.py, we derive the path
of the libxl source directory from the XEN_ROOT environment variable, and
append that to sys.path so Python can see idl.py. Since the recent move of
libxl to tools/libs/light, this hard coding breaks the build.
Instead, check for the environment variable LIBXL_SRC_DIR, but move this
check to a try-except block (with empty except). This simply makes the
real error more visible, and does not strictly require that
LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
rather than XEN_ROOT when calling gengotypes.py.
Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Jason Andryuk [Thu, 1 Oct 2020 23:53:37 +0000 (19:53 -0400)]
libxl: only query VNC when enabled
QEMU without VNC support (configure --disable-vnc) will return an error
when VNC is queried over QMP since it does not recognize the QMP
command. This will cause libxl to fail starting the domain even if VNC
is not enabled. Therefore only query QEMU for VNC support when using
VNC, so a VNC-less QEMU will function in this configuration.
'goto out' jumps to the call to device_model_postconfig_done(), the
final callback after the chain of vnc queries. This bypasses all the
QMP VNC queries.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Fri, 2 Oct 2020 10:30:34 +0000 (12:30 +0200)]
x86/vLAPIC: don't leak regs page from vlapic_init() upon error
Fixes: 8a981e0bf25e ("Make map_domain_page_global fail") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
configuration file, the Linux kernel and initrd, as well as the XSM
and architecture-specific files, into a single "unified" EFI executable.
This allows an administrator to update the components independently
without requiring rebuilding xen, as well as to replace the components
in an existing image.
The resulting EFI executable can be invoked directly from the UEFI Boot
Manager, removing the need to use a separate loader like grub as well
as removing dependencies on local filesystem access. And since it is
a single file, it can be signed and validated by UEFI Secure Boot without
requiring the shim protocol.
It is inspired by systemd-boot's unified kernel technique and borrows the
function to locate PE sections from systemd's LGPL'ed code. During EFI
boot, Xen looks at its own loaded image to locate the PE sections for
the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
(`.ramdisk`), and XSM config (`.xsm`), which are included after building
xen.efi using objcopy to add named sections for each input file.
For x86, the CPU ucode can be included in a section named `.ucode`,
which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
On ARM systems the Device Tree can be included in a section named
`.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
the boot process.
Note that the system will fall back to loading files from disk if
the named sections do not exist. This allows distributions to continue
with the status quo if they want a signed kernel + config, while still
allowing a user provided initrd (which is how the shim protocol currently
works as well).
This patch also adds constness to the section parameter of
efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
changes pe_find_section() to use a const CHAR16 section name,
and adds pe_name_compare() to match section names.
Signed-off-by: Trammell Hudson <hudson@trmm.net> Reviewed-by: Jan Beulich <jbeulich@suse.com>
[Fix ARM build by including pe.init.o] Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Trammell Hudson [Fri, 2 Oct 2020 11:18:19 +0000 (07:18 -0400)]
efi/boot.c: add file.need_to_free
The config file, kernel, initrd, etc should only be freed if they
are allocated with the UEFI allocator. On x86 the ucode, and on
ARM the dtb, are also marked as need_to_free when allocated or
expanded.
This also fixes a memory leak in ARM fdt_increase_size() if there
is an error in building the new device tree.
Signed-off-by: Trammell Hudson <hudson@trmm.net> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 1 Apr 2020 14:51:08 +0000 (15:51 +0100)]
x86/ucode: Trivial further cleanup
* Drop unused include in private.h.
* Use explicit-width integers for Intel header fields.
* Adjust comment to better describe the extended header.
* Drop unnecessary __packed attribute for AMD header.
* Fix types and style.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Andrew Cooper [Fri, 2 Oct 2020 11:20:44 +0000 (12:20 +0100)]
x86/hvm: Correct error message in check_segment()
The error message is wrong (given AMD's older interpretation of what a NUL
segment should contain, attribute-wise), and actively unhelpful because you
only get it in response to a hypercall where the one piece of information you
cannot provide is the segment selector.
Fix the message to talk about segment attributes, rather than the selector.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Juergen Gross [Fri, 2 Oct 2020 15:41:39 +0000 (17:41 +0200)]
tools/libs/store: drop read-only functionality
Today it is possible to open the connection in read-only mode via
xs_daemon_open_readonly(). This is working only with Xenstore running
as a daemon in the same domain as the user. Additionally it doesn't
add any security as accessing the socket used for that functionality
requires the same privileges as the socket used for full Xenstore
access.
So just drop the read-only semantics in all cases, leaving the
interface in place only for compatibility reasons. This in turn means
the XS_OPEN_READONLY flag is simply ignored.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Fri, 2 Oct 2020 15:41:38 +0000 (17:41 +0200)]
tools/libs/store: ignore XS_OPEN_SOCKETONLY flag
When opening the connection to Xenstore via xs_open() it makes no
sense to limit the connection to the socket based one. So just ignore
the XS_OPEN_SOCKETONLY flag.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Fri, 2 Oct 2020 15:41:37 +0000 (17:41 +0200)]
tools/xenstore: remove socket-only option from xenstore client
The Xenstore access commands (xenstore-*) have the possibility to limit
connection to Xenstore to a local socket (option "-s"). This is an
option making no sense at all, as either there is only a socket, so
the option would be a nop, or there is no socket at all (in case
Xenstore is running in a stubdom or the client is called in a domU),
so specifying the option would just lead to failure.
So drop that option completely.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
This removes the dependency on the xen subdirectory, preventing a
wrong configuration file from being used when the xen subdirectory is
duplicated for compilation tests.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Roger Pau Monne [Thu, 20 Aug 2020 15:16:27 +0000 (17:16 +0200)]
x86/vpic: also execute dpci callback for non-specific EOI
Currently the dpci EOI callback is only executed for specific EOIs.
This is wrong as non-specific EOIs will also clear the ISR bit and
thus end the interrupt. Re-arrange the code a bit so that the common
EOI handling path can be shared between all EOI modes.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Andrew Cooper [Fri, 2 Oct 2020 17:49:32 +0000 (18:49 +0100)]
x86/S3: Restore CR4 earlier during resume
c/s 4304ff420e5 "x86/S3: Drop {save,restore}_rest_processor_state()
completely" moved CR4 restoration up into C, to account for the fact that MCE
was explicitly handled later.
However, time_resume() ends up making an EFI Runtime Service call, and EFI
explodes without OSFXSR, presumably when trying to spill %xmm registers onto
the stack.
Given this codepath, and the potential for other issues of a similar kind (TLB
flushing vs INVPCID, HVM logic vs VMXE, etc), restore CR4 in asm before
entering C.
Ignore the previous MCE special case, because it's not actually necessary. The
handler is already suitably configured from before suspend.
Fixes: 4304ff420e5 ("x86/S3: Drop {save,restore}_rest_processor_state() completely") Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
As of 2d0557c5cbeb ("x86: Fold page_info lock into type_info") we
haven't been updating guest page table entries through linear page
tables anymore. All updates have been using domain mappings since then.
Drop the use of guest/user access helpers there, and hence also the
boolean return values of the involved functions.
update_intpte(), otoh, gets its boolean return type retained for now,
as we may want to bound the CMPXCHG retry loop, indicating failure to
the caller instead when the retry threshold got exceeded.
With this {,__}cmpxchg_user() become unused, so they too get dropped.
(In fact, dropping them was the motivation of making the change.)
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
Andrew Cooper [Wed, 30 Sep 2020 09:17:33 +0000 (10:17 +0100)]
x86/cpuid: Move VMX/SVM out of the default policy
Nested virt is still experimental, and requires explicitly opting in to at
domain create time. The VMX/SVM features should not be visible by default.
Also restrict them from all HVM guests to just HAP-enabled guests. This has
been the restriction for SVM right from the outset (c/s e006a0e0aaa), while
VMX was first introduced supporting shadow mode (c/s 9122c69c8d3) but later
adjusted to HAP-only (c/s 77751ed79e3).
There is deliberately no adjustment to xc_cpuid_apply_policy() for pre-4.14
migration compatibility. The migration stream doesn't contain the required
architectural state for either VMX/SVM, and a nested virt VM which migrates
will explode in weird and wonderful ways.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Andrew Cooper [Tue, 29 Sep 2020 15:51:07 +0000 (16:51 +0100)]
x86/hvm: Drop restore boolean from hvm_cr4_guest_valid_bits()
Previously, migration was reordered so the CPUID data was available before
register state. nestedhvm_enabled() has recently been made accurate for the
entire lifetime of the domain.
Therefore, we can drop the bodge in hvm_cr4_guest_valid_bits() which existed
previously to tolerate a guest's CR4 being set/restored before
HVM_PARAM_NESTEDHVM.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Andrew Cooper [Tue, 29 Sep 2020 15:51:07 +0000 (16:51 +0100)]
x86/hvm: Obsolete the use of HVM_PARAM_NESTEDHVM
With XEN_DOMCTL_CDF_nested_virt now passed properly to domain_create(),
reimplement nestedhvm_enabled() to use the property which is fixed for the
lifetime of the domain.
This makes the call to nestedhvm_vcpu_initialise() from hvm_vcpu_initialise()
no longer dead. It became logically dead with the Xend => XL transition, as
they initialise HVM_PARAM_NESTEDHVM in opposite orders with respect to
XEN_DOMCTL_max_vcpus.
There is one opencoded user of nestedhvm_enabled() in HVM_PARAM_ALTP2M's
safety check.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Wei Liu <wl@xen.org>
Andrew Cooper [Tue, 28 Aug 2018 14:30:14 +0000 (14:30 +0000)]
xen/domctl: Introduce and use XEN_DOMCTL_CDF_nested_virt
Like other major areas of functionality, nested virt (or not) needs to be
known at domain creation time for sensible CPUID handling, and wants to be
known this early for sensible infrastructure handling in Xen.
Introduce XEN_DOMCTL_CDF_nested_virt and modify libxl to set it appropriately
when creating domains. There is no need to adjust the ARM logic to reject the
use of this new flag.
No functional change yet.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Christian Lindig <christian.lindig@citrix.com>
Andrew Cooper [Tue, 29 Sep 2020 15:56:35 +0000 (16:56 +0100)]
xen/domctl: Simplify DOMCTL_CDF_ checking logic
Introduce some local variables to make the resulting logic easier to follow.
Join the two IOMMU checks in sanitise_domain_config(). Tweak some of the
terminology for better accuracy.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Wei Liu <wl@xen.org>
Andrew Cooper [Tue, 29 Sep 2020 17:39:08 +0000 (18:39 +0100)]
tools/libxl: Simplify DOMCTL_CDF_ flags handling in libxl__domain_make()
The use of the ternary operator serves only to obfuscate the code. Rewrite it
in simpler terms, avoiding the need to conditionally OR zero into the
flags.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wl@xen.org>
Andrew Cooper [Fri, 2 Oct 2020 17:23:42 +0000 (18:23 +0100)]
tools/libxl: Work around libvirt breakage in libxl__cpuid_legacy()
OSSTest reports that libvirt is reliably regressed.
The only possible option is a side effect of using libxl_defbool_val(), which
can only be the assert() within. Unfortunately, libvirt actually crashes in
__vfscanf_internal() while presumably trying to render some form of error.
Open code the check without the assert() to unblock staging, while we
investigate what is going on with libvirt. This will want reverting at some
point in the future.
Not-really-fixes: bfcc97c08c ("tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()"), which reliably breaks libvirt. Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Ian Jackson <iwj@xenproject.org>
Laurentiu Tudor [Fri, 2 Oct 2020 10:33:44 +0000 (13:33 +0300)]
arm,smmu: match start level of page table walk with P2M
Don't hardcode the lookup start level of the page table walk to 1
and instead match the one used in P2M. This should fix scenarios
involving SMMU where the start level is different from 1.
In order for the SMMU driver to also compile on arm32, move
P2M_ROOT_LEVEL into the p2m header file (and, while at it,
P2M_ROOT_ORDER for consistency), and use the macro in the SMMU
driver.
Jan Beulich [Fri, 2 Oct 2020 06:37:35 +0000 (08:37 +0200)]
evtchn/fifo: use stable fields when recording "last queue" information
Both evtchn->priority and evtchn->notify_vcpu_id could change behind the
back of evtchn_fifo_set_pending(), as for it - in the case of
interdomain channels - only the remote side's per-channel lock is held.
Neither the queue's priority nor the vCPU's vcpu_id fields have similar
properties, so they seem better suited for the purpose. In particular
they reflect the respective evtchn fields' values at the time they were
used to determine queue and vCPU.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Fri, 2 Oct 2020 06:37:04 +0000 (08:37 +0200)]
evtchn: cut short evtchn_reset()'s loop in the common case
The general expectation is that there are only a few open ports left
when a domain asks its event channel configuration to be reset.
Similarly on average half a bucket worth of event channels can be
expected to be inactive. Try to avoid iterating over all channels, by
utilizing usage data we're maintaining anyway.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Fri, 2 Oct 2020 06:36:21 +0000 (08:36 +0200)]
evtchn/Flask: pre-allocate node on send path
xmalloc() & Co may not be called with IRQs off, or else check_lock()
will have its assertion trigger about locks getting acquired
inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
very reasonable, especially since the per-channel lock was introduced to
avoid acquiring the per-domain event lock on the send paths. Issue a
second call to xsm_evtchn_send() instead, before acquiring the lock, to
give XSM / Flask a chance to pre-allocate whatever it may need.
As these nodes are used merely for caching earlier decisions' results,
allocate just one node in AVC code despite two potentially being needed.
Things will merely be less performant if a second allocation was
wanted, just like when the pre-allocation fails.
Fixes: c0ddc8634845 ("evtchn: convert per-channel lock to be IRQ-safe") Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Jan Beulich [Fri, 2 Oct 2020 06:34:28 +0000 (08:34 +0200)]
x86/shim: fix build with PV_SHIM_EXCLUSIVE and SHADOW_PAGING
While there's little point in enabling both, the combination ought to at
least build correctly. Drop the direct PV_SHIM_EXCLUSIVE conditionals
and instead zap PG_log_dirty to zero under the right conditions, and key
other #ifdef-s off of that.
While there, also expand on ded576ce07e9 ("x86/shadow: dirty VRAM
tracking is needed for HVM only"): there was yet another is_hvm_domain()
missing, and code touching the struct fields needs to be guarded by
suitable #ifdef-s as well. Also guard shadow-mode-only fields
accordingly.
Fixes: 8b5b49ceb3d9 ("x86: don't include domctl and alike in shim-exclusive builds") Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Juergen Gross [Thu, 1 Oct 2020 10:57:43 +0000 (12:57 +0200)]
tools/libxenguest: hide struct elf_dom_parms layout from users
Don't include struct elf_dom_parms in struct xc_dom_image, but rather
use a pointer to reference it. Together with adding accessor functions
for the externally needed elements, this allows dropping the inclusion
of the Xen-private header xen/libelf/libelf.h from xenguest.h.
Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest") Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Thu, 1 Oct 2020 10:57:43 +0000 (12:57 +0200)]
tools/libxenguest: make xc_dom_loader interface private to libxenguest
The pluggable kernel loader interface is used only internally by
libxenguest, so make it private. This removes a dependency on the Xen-
internal header xen/libelf/libelf.h from xenguest.h.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Thu, 1 Oct 2020 10:57:43 +0000 (12:57 +0200)]
tools/libs: merge xenctrl_dom.h into xenguest.h
Today xenctrl_dom.h is part of libxenctrl as it is included by
xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
xenguest.h where its contents really should be.
Replace all #includes of xenctrl_dom.h by xenguest.h ones or drop them
if xenguest.h is already included.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Andrew Cooper [Mon, 21 Sep 2020 12:17:30 +0000 (13:17 +0100)]
x86: Use LOCK ADD instead of MFENCE for smp_mb()
MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
orders weaker cached writes, and flushes the WC buffers.
This technique was used as an optimisation in Java[1], and later adopted by
Linux[2] where it was measured to have a 60% performance improvement in VirtIO
benchmarks.
The stack is used because it is hot in the L1 cache, and a -4 offset is used
to avoid creating a false data dependency on live data.
For 64bit userspace, the Red Zone needs to be considered. Use -32 to allow
for a reasonable quantity of Red Zone data, but still have a 50% chance of
hitting the same cache line as %rsp.
Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
dependency.
Paul Durrant [Tue, 15 Sep 2020 14:10:07 +0000 (15:10 +0100)]
xl: implement documented '--force' option for block-detach
The manpage for 'xl' documents an option to force a block device to be
released even if the domain to which it is attached does not co-operate.
The documentation also states that, if the force flag is not specified, the
block-detach operation should fail.
Currently the force option is not implemented and a non-forced block-detach
will auto-force after a time-out of 10s. This patch implements the force
option and also stops auto-forcing a non-forced block-detach by calling
libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
allowing the operation to fail cleanly as per the documented behaviour.
NOTE: The documentation is also adjusted since the normal positioning of
options is before compulsory parameters. It is also noted that use of
the --force option may lead to a guest crash.
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Acked-by: Wei Liu <wl@xen.org>