Juergen Gross [Wed, 4 Nov 2020 08:26:42 +0000 (09:26 +0100)]
xen/spinlocks: spin_trylock with interrupts off is always fine
Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.
Add a bool parameter "try" to check_lock() for handling this case.
Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with the wrong interrupt flags, this will be caught when taking
the lock.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
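A minimal sketch of the resulting check (simplified; the real check_lock() carries additional debug bookkeeping):

    static void check_lock(union lock_debug *debug, bool try)
    {
        bool irq_safe = !local_irq_is_enabled();

        /* A trylock with interrupts off can never block, so skip the
         * usual IRQ-safety consistency check in that case. */
        if ( try && irq_safe )
            return;

        /* ... existing check that the lock is consistently taken with
         * interrupts either always enabled or always disabled ... */
    }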
Ian Jackson [Wed, 19 Aug 2020 17:31:45 +0000 (18:31 +0100)]
SUPPORT.md: Desupport qemu trad except stub dm
While investigating XSA-335 we discovered that many upstream security
fixes were missing. It is not practical to backport them. There is
no good reason to be running this very ancient version of qemu, except
that it is the only way to run a stub dm which is currently supported
by upstream.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
of the MADT. In the ACPI 5.1 version of the spec, the struct for the
GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
ACPI 6.0, the struct is 80 bytes long. But, there is only one definition
in ACPICA for this struct -- and that is the 6.0 version. Hence, when
BAD_MADT_ENTRY() compares the struct size to the length in the GICC
subtable, it fails if 5.1 structs are in use, and there are systems in
the wild that have them.
This patch adds the BAD_MADT_GICC_ENTRY() macro that checks the GICC subtable
only, accounting for the difference in specification versions that are
possible. The BAD_MADT_ENTRY() will continue to work as is for all other
MADT subtables.
This code is being added to an arm64 header file since that is currently
the only architecture using the GICC subtable of the MADT. As a GIC is
specific to ARM, it is also unlikely the subtable will be used elsewhere.
Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.") Signed-off-by: Al Stone <al.stone@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
[catalin.marinas@arm.com: extra brackets around macro arguments] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Julien Grall <jgrall@amazon.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
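A sketch of the shape such a check can take, keyed off the FADT revision and the 76/80-byte sizes cited above (illustrative; the exact macro body in the tree may differ):

    /* GICC subtable length: 76 bytes in ACPI 5.1, 80 bytes in ACPI 6.0. */
    #define ACPI_MADT_GICC_LENGTH \
        (acpi_gbl_FADT.header.revision < 6 ? 76 : 80)

    #define BAD_MADT_GICC_ENTRY(entry, end)                                 \
        (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||   \
         (entry)->header.length != ACPI_MADT_GICC_LENGTH)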
xen/arm: Check if the platform is not using ACPI before initializing Dom0less
Dom0less requires a device-tree. However, since commit 6e3e77120378
"xen/arm: setup: Relocate the Device-Tree later on in the boot", the
device-tree will not get unflattened when using ACPI.
This will lead to a crash during boot.
Given the complexity of setting up dom0less with ACPI (for instance, how to
assign devices?), we should skip any code related to Dom0less when using
ACPI.
xen/arm: acpi: The fixmap area should always be cleared during failure/unmap
Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()" enforced that each set_fixmap() should be
paired with a clear_fixmap(). Any failure to follow the model would
result in a platform crash.
Unfortunately, the use of fixmap in the ACPI code was overlooked as it
is calling set_fixmap() but not clear_fixmap().
The function __acpi_os_map_table() is reworked so:
- We know before the mapping whether the fixmap region is big
enough for the mapping.
- It will fail if the fixmap is already in use. This is not a
change of behavior but clarifying the current expectation to avoid
hitting a BUG().
The function __acpi_os_unmap_table() will now call clear_fixmap().
xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
Currently, the former are still containing x86 specific code.
To avoid this rather strange split, the generic helpers are reworked so
they are arch-agnostic. This requires the introduction of a new helper
__acpi_os_unmap_memory() that will undo any mapping done by
__acpi_os_map_memory().
Currently, the arch-helper for unmap is basically a no-op so it only
returns whether the mapping was arch specific. But this will change
in the future.
Note that the x86 version of acpi_os_map_memory() was already able to
map the 1MB region. Hence there is no addition of new code.
Jan Beulich [Fri, 30 Oct 2020 13:30:35 +0000 (14:30 +0100)]
x86: fix build of PV shim when !GRANT_TABLE
To do its compat translation, shim code needs to include the compat
header. For this to work, this header first of all needs to be
generated.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 30 Oct 2020 13:28:03 +0000 (14:28 +0100)]
x86/hvm: process softirq while saving/loading entries
On slow systems with sync_console saving or loading the context of big
guests can cause the watchdog to trigger. Fix this by adding a couple
of process_pending_softirqs() calls.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
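The pattern is simply to give softirqs (and hence the watchdog) a chance to run once per iteration of the long-running loop; a sketch with an illustrative loop bound:

    for ( i = 0; i < d->max_vcpus; i++ )
    {
        /* Keep the watchdog and other softirqs serviced. */
        process_pending_softirqs();

        /* ... save or load the context of vCPU i ... */
    }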
Jan Beulich [Fri, 30 Oct 2020 13:27:23 +0000 (14:27 +0100)]
x86/shadow: correct GFN use by sh_unshadow_for_p2m_change()
Luckily sh_remove_all_mappings()'s use of the parameter is limited to
generation of log messages. Nevertheless we'd better pass correct GFNs
around:
- the incoming GFN, when replacing a large page, may not be large page
aligned,
- incrementing by page-size-scaled values can't be right.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 30 Oct 2020 13:26:46 +0000 (14:26 +0100)]
x86/shadow: sh_{make,destroy}_monitor_table() are "even more" HVM-only
With them depending on just the number of shadow levels, there's no need
for more than one instance of them, and hence no need for any hook (IOW 452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
far enough). Move the functions to hvm.c while dropping the dead
is_pv_32bit_domain() code paths.
While moving the code, replace a stale comment reference to
sh_install_xen_entries_in_l4(). Doing so made me notice the function
also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
gets done here as well.
Also make their first parameters const.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
Bertrand Marquis [Mon, 26 Oct 2020 16:21:33 +0000 (16:21 +0000)]
xen/arm: Warn user on cpu errata 832075
When a Cortex A57 processor is affected by CPU errata 832075, a guest
not implementing the workaround for it could deadlock the system.
Add a warning during boot informing the user that only trusted guests
should be executed on the system.
An equivalent warning is already given to the user by KVM on cores
affected by this errata.
Also taint the hypervisor as insecure when this erratum applies, and
mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.
Andrew Cooper [Thu, 29 Oct 2020 12:03:43 +0000 (12:03 +0000)]
x86/pv: Drop stale comment in dom0_construct_pv()
This comment was introduced by c/s 22a857bde9b8 in 2003, and became stale with
c/s 99db02d50976 also in 2003. Both of these predate the introduction of
struct vcpu, when the processor field moved to a different object.
17 years is long enough for this comment to be mis-informing people reading
the code.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 19 Oct 2020 14:51:22 +0000 (15:51 +0100)]
x86/pv: Flush TLB in response to paging structure changes
With MMU_UPDATE, a PV guest can make changes to higher level pagetables. This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush (if necessary) after making updates.
However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.
Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings. Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.
For all L2 and higher modifications, flush the TLB. PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global. (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
Express the necessary flushes as a set of booleans which accumulate across the
operation. Comment the flushing logic extensively.
This is XSA-286.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
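A simplified sketch of the boolean accumulation described above (variable names and the final local-only flush are illustrative; the actual patch flushes all CPUs that may hold stale entries):

    bool flush_linear_pt = false;

    for ( i = 0; i < count; i++ )
    {
        /* ... apply the i-th MMU_UPDATE request ... */

        /* Any successful L2-or-higher modification may leave stale
         * entries in the linear pagetable range. */
        if ( !rc && level >= 2 )
            flush_linear_pt = true;
    }

    if ( flush_linear_pt )
        flush_tlb_local();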
Andrew Cooper [Thu, 22 Oct 2020 10:28:58 +0000 (11:28 +0100)]
x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
to the L4 path in do_mmu_update().
However, this was unnecessary.
It is the guest's responsibility to perform appropriate TLB flushing if the L4
modification altered an established mapping in a flush-relevant way. In this
case, an MMUEXT_OP hypercall will follow. The case which Xen needs to cover
is when new mappings are created, and the resync on the exit-to-guest path
covers this correctly.
There is a corner case with multiple vCPUs in hypercalls at the same time,
which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
behaviour.
Architecturally, established TLB entries can continue to be used until the
broadcast flush has completed. Therefore, even with concurrent hypercalls,
the guest cannot depend on older mappings not being used until an MMUEXT_OP
hypercall completes. Xen's implementation of guest-initiated flushes will
take correct effect on top of an in-progress hypercall, picking up new mappings
set before the other vCPU's MMUEXT_OP completes.
Note: The correctness of this change is not impacted by whether XPTI uses
global mappings or not. Correctness there depends on the behaviour of Xen on
the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
This is (not really) XSA-286 (but necessary to simplify the logic).
Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Mon, 26 Oct 2020 13:38:35 +0000 (14:38 +0100)]
AMD/IOMMU: correct shattering of super pages
Fill the new page table _before_ installing into a live page table
hierarchy, as installing a blank page first risks I/O faults on
sub-ranges of the original super page which aren't part of the range
for which mappings are being updated.
While at it also do away with mapping and unmapping the same fresh
intermediate page table page once per entry to be written.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Fri, 23 Oct 2020 16:03:18 +0000 (18:03 +0200)]
x86emul: fix PINSRW and adjust other {,V}PINSR*
The use of simd_packed_int together with no further update to op_bytes
has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
aligned memory operand. Use simd_none instead and override it after
general decoding with simd_other, as is done for the B/D/Q siblings.
While benign, for consistency also use DstImplicit instead of DstReg
in x86_decode_twobyte().
PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
gets dropped.
For further consistency also
- use src.bytes instead of op_bytes in relevant memcpy() invocations,
- avoid the pointless updating of op_bytes (all we care about later is
that the value be less than 16).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Juergen Gross [Mon, 19 Oct 2020 15:27:54 +0000 (17:27 +0200)]
tools/libs: move official headers to common directory
Instead of each library having its own include directory, move the
official headers to tools/include. This drops the need to link those
headers to tools/include, and there is no longer any need for
library-specific include paths when building Xen.
While at it remove setting of the unused variable
PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Christian Lindig <christian.lindig@citrix.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Ian Jackson <iwj@xenproject.org>
Juergen Gross [Fri, 23 Oct 2020 13:53:10 +0000 (15:53 +0200)]
tools/init-xenstore-domain: support xenstore pvh stubdom
Instead of creating the xenstore-stubdom domain first and parsing the
kernel later, do it the other way round. This makes it possible to probe
for the domain type supported by the xenstore-stubdom and to support
both PV and PVH type stubdoms.
Try to parse the stubdom image for PV support first; if this fails, use
HVM. Then create the domain with the appropriate type selected.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Fri, 23 Oct 2020 08:13:53 +0000 (10:13 +0200)]
x86emul: increase FPU save area in test harness/fuzzer
Running them on a system (or emulator) with AMX support requires this
to be quite a bit larger than 8k, to avoid triggering the assert() in
emul_test_init(). Bump all the way up to 16k right away.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 23 Oct 2020 08:13:14 +0000 (10:13 +0200)]
pci: cleanup MSI interrupts before removing device from IOMMU
Doing the MSI cleanup after removing the device from the IOMMU leads
to the following panic on AMD hardware:
Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
----[ Xen-4.13.1-10.0.3-d x86_64 debug=y Not tainted ]----
CPU: 3
RIP: e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[...]
Xen call trace:
[<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
[<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
[<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
[<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
[<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
[<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
[<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
[<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
[<ffff82d08038a432>] F lstar_enter+0x112/0x120
That's because the call to iommu_remove_device on AMD hardware will
remove the per-device interrupt remapping table, and hence the call to
pci_cleanup_msi done afterwards will find a null intremap table and
crash.
Reorder the calls so that MSI interrupts are torn down before removing
the device from the IOMMU.
Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables") Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 23 Oct 2020 08:12:31 +0000 (10:12 +0200)]
evtchn: let evtchn_set_priority() acquire the per-channel lock
Some lock wants to be held to make sure the port doesn't change state,
but there's no point holding the per-domain event lock here. Switch to
using the finer grained per-channel lock instead (albeit as a downside
for the time being this requires disabling interrupts for a short
period of time).
FAOD this doesn't guarantee anything in particular towards
evtchn_fifo_set_pending(), as for interdomain channels that function
would be called with the remote side's per-channel lock held.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
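A simplified sketch of the resulting locking (close to, but not exactly, the patched function):

    struct evtchn *chn = evtchn_from_port(d, port);
    unsigned long flags;
    int ret;

    spin_lock_irqsave(&chn->lock, flags);
    ret = evtchn_port_set_priority(d, chn, priority);
    spin_unlock_irqrestore(&chn->lock, flags);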
Jan Beulich [Fri, 23 Oct 2020 08:11:46 +0000 (10:11 +0200)]
evtchn: rename and adjust guest_enabled_event()
The function isn't about an "event" in general, but about a vIRQ. The
function also failed to honor global vIRQ-s, instead assuming the caller
would pass vCPU 0 in such a case.
While at it also adjust the
- types the function uses,
- single user to make use of domain_vcpu().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:09:55 +0000 (10:09 +0200)]
evtchn: replace FIFO-specific header by generic private one
Having a FIFO specific header is not (or at least no longer) warranted
with just three function declarations left there. Introduce a private
header instead, moving there some further items from xen/event.h.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:07:56 +0000 (10:07 +0200)]
evtchn: avoid race in get_xen_consumer()
There's no global lock around the updating of this global piece of data.
Make use of cmpxchgptr() to avoid two entities racing with their
updates.
While touching the functionality, mark xen_consumers[] read-mostly (or
else the if() condition could use the result of cmpxchgptr(), writing to
the slot unconditionally).
The use of cmpxchgptr() here points out (by way of clang warning about
it) that its original use of const was slightly wrong. Adjust the
placement, or else undefined behavior of const qualifying a function
type will result.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
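A sketch of the resulting lock-free slot claiming (simplified from the actual function):

    unsigned int i;

    for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
    {
        /* Claim an empty slot without a global lock; losing the race is
         * fine if the winner installed the same function. */
        if ( !xen_consumers[i] )
            cmpxchgptr(&xen_consumers[i], NULL, fn);
        if ( xen_consumers[i] == fn )
            break;
    }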
Jan Beulich [Fri, 23 Oct 2020 08:06:53 +0000 (10:06 +0200)]
IOMMU/EPT: avoid double flushing in shared page table case
While the flush coalescing optimization has been helping the non-shared
case, it has actually led to double flushes in the shared case (which
ought to be the more common one nowadays at least): Once from
*_set_entry() and a second time up the call tree from wherever the
overriding flag gets played with. In alignment with XSA-346 suppress
flushing in this case.
Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
flushes: "idx" hasn't been added a new mapping for.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Fri, 23 Oct 2020 08:06:20 +0000 (10:06 +0200)]
x86/mm: avoid playing with directmap when self-snoop can be relied upon
The set of systems affected by XSA-345 would have been smaller if we had
this in place already: When the processor is capable of dealing with
mismatched cacheability, there's no extra work we need to carry out.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:05:29 +0000 (10:05 +0200)]
x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case idx == gpfn
In this case up to now we've been freeing the page (through
guest_remove_page(), with the actual free typically happening at the
put_page() later in the function), but then failing the call on the
subsequent GFN consistency check. However, in my opinion such a request
should complete as an "expensive" no-op (leaving aside the potential
unsharing of the page).
This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
xenmem_add_to_physmap_one()" would really have needed an XSA, despite
its description claiming otherwise, as in release builds we then put in
place a P2M entry referencing the about to be freed page.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Bertrand Marquis [Fri, 16 Oct 2020 13:58:47 +0000 (14:58 +0100)]
xen/arm: Print message if reset did not work
If for some reason the hardware reset is not working, print a message
every 5 seconds to warn the user that the system did not reset
properly and Xen is still looping.
The message is printed infinitely so that someone connecting to a serial
console with no history would see the message coming after 5 seconds.
arm: optee: don't print warning about "wrong" RPC buffer
The OP-TEE mediator tracks the cookie value of the last buffer which was
requested by OP-TEE. This tracked value serves one important purpose: if
OP-TEE wants to request another buffer, we know that it finished
importing the previous one and we can free the page list associated with
it.
Also, we had a false premise that OP-TEE frees requested buffers in
reverse order. So we checked rpc_data_cookie during handling of the
OPTEE_RPC_CMD_SHM_FREE call and printed a warning if the cookie of the
buffer which is requested to be freed differs from the last allocated
one.
During testing of RPMB FS services I discovered that RPMB code frees
request and response buffers in the same order as it allocated them. And
this is perfectly fine, actually.
So, we are removing the mentioned warning message in Xen, as it is perfectly
normal to free buffers in arbitrary order.
Jan Beulich [Tue, 20 Oct 2020 12:23:12 +0000 (14:23 +0200)]
AMD/IOMMU: ensure suitable ordering of DTE modifications
DMA and interrupt translation should be enabled only after other
applicable DTE fields have been written. Similarly when disabling
translation or when moving a device between domains, translation should
first be disabled, before other entry fields get modified. Note however
that the "moving" aspect doesn't apply to the interrupt remapping side,
as domain specifics are maintained in the IRTEs here, not the DTE. We
also never disable interrupt remapping once it got enabled for a device
(the respective argument passed is always the immutable iommu_intremap).
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:22:52 +0000 (14:22 +0200)]
AMD/IOMMU: update live PTEs atomically
Updating a live PTE bitfield by bitfield risks the compiler re-ordering
the individual updates as well as splitting individual updates into
multiple memory writes. Construct the new entry fully in a local
variable, do the check to determine the flushing needs on the thus
established new entry, and then write the new entry by a single insn.
Similarly using memset() to clear a PTE is unsafe, as the order of
writes the function does is, at least in principle, undefined.
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
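A sketch of the resulting update pattern, using the union form of the PTE introduced by the companion patch below (index and flag names are illustrative):

    union amd_iommu_pte old, new = {};

    new.iw = iw;
    new.ir = ir;
    new.mfn = next_mfn;
    new.next_level = next_level;
    new.pr = true;

    old.raw = read_atomic(&table[idx].raw);

    /* Work out flushing needs from the fully-formed new entry ... */

    /* ... then publish it with a single write. */
    write_atomic(&table[idx].raw, new.raw);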
Jan Beulich [Tue, 20 Oct 2020 12:22:26 +0000 (14:22 +0200)]
AMD/IOMMU: convert amd_iommu_pte from struct to union
This is to add a "raw" counterpart to the bitfield equivalent. Take the
opportunity and
- convert fields to bool / unsigned int,
- drop the naming of the reserved field,
- shorten the names of the ignored ones.
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:21:32 +0000 (14:21 +0200)]
IOMMU: hold page ref until after deferred TLB flush
When moving around a page via XENMAPSPACE_gmfn_range, deferring the TLB
flush for the "from" GFN range requires that the page remains allocated
to the guest until the TLB flush has actually occurred. Otherwise a
parallel hypercall to remove the page would only flush the TLB for the
GFN it has been moved to, but not the one it was mapped at originally.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Reported-by: Julien Grall <jgrall@amazon.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Tue, 20 Oct 2020 12:21:09 +0000 (14:21 +0200)]
IOMMU: suppress "iommu_dont_flush_iotlb" when about to free a page
Deferring flushes to a single, wide range one - as is done when
handling XENMAPSPACE_gmfn_range - is okay only as long as
pages don't get freed ahead of the eventual flush. While the only
function setting the flag (xenmem_add_to_physmap()) suggests by its name
that it's only mapping new entries, in reality the way
xenmem_add_to_physmap_one() works means an unmap would happen not only
for the page being moved (but not freed) but, if the destination GFN is
populated, also for the page being displaced from that GFN. Collapsing
the two flushes for this GFN into just one (and even more so deferring
it to a batched invocation) is not correct.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Julien Grall <jgrall@amazon.com>
Hongyan Xia [Sat, 11 Jan 2020 21:57:43 +0000 (21:57 +0000)]
x86/mm: Prevent some races in hypervisor mapping updates
map_pages_to_xen will attempt to coalesce mappings into 2MiB and 1GiB
superpages if possible, to maximize TLB efficiency. This means both
replacing superpage entries with smaller entries, and replacing
smaller entries with superpages.
Unfortunately, while some potential races are handled correctly,
others are not. These include:
1. When one processor modifies a sub-superpage mapping while another
processor replaces the entire range with a superpage.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and
B.
* A walks the pagetables, gets a pointer to L2.
* B replaces L3[N] with a 1GiB mapping.
* B frees L2
* A writes L2[M] #
This race is exacerbated by the fact that virt_to_xen_l[21]e doesn't
handle higher-level superpages properly: If you call virt_to_xen_l2e
on a virtual address within an L3 superpage, you'll either hit a BUG()
(most likely), or get a pointer into the middle of a data page; the same
goes for virt_to_xen_l1e on a virtual address within either an L3 or L2
superpage.
So take the following example:
* A reads pl3e and discovers it to point to an L2.
* B replaces L3[N] with a 1GiB mapping
* A calls virt_to_xen_l2e() and hits the BUG_ON() #
2. When two processors simultaneously try to replace a sub-superpage
mapping with a superpage mapping.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and B,
both trying to replace L3[N] with a superpage.
* A walks the pagetables, gets a pointer to pl3e, and takes a copy ol3e pointing to L2.
* B walks the pagetables, gets a pointer to pl3e, and takes a copy ol3e pointing to L2.
* A writes the new value into L3[N]
* B writes the new value into L3[N]
* A recursively frees all the L1's under L2, then frees L2
* B recursively double-frees all the L1's under L2, then double-frees L2 #
Fix this by grabbing a lock for the entirety of the mapping update
operation.
Rather than grabbing map_pgdir_lock for the entire operation, however,
repurpose the PGT_locked bit from L3's page->type_info as a lock.
This means that rather than locking the entire address space, we
"only" lock a single 512GiB chunk of hypervisor address space at a
time.
There was a proposal for a lock-and-reverify approach, where we walk
the pagetables to the point where we decide what to do; then grab the
map_pgdir_lock, re-verify the information we collected without the
lock, and finally make the change (starting over again if anything had
changed). Without being able to guarantee that the L2 table wasn't
freed, however, that means every read would need to be considered
potentially unsafe. Thinking carefully about that is probably
something that wants to be done in public, not under time pressure.
This is part of XSA-345.
Reported-by: Hongyan Xia <hongyxia@amazon.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
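A sketch of the PGT_locked-based lock on the L3 table's page (close to the shape of the actual helpers; the unlock side clears the bit symmetrically):

    static void l3t_lock(struct page_info *page)
    {
        unsigned long x, nx;

        do {
            while ( (x = page->type_info) & PGT_locked )
                cpu_relax();
            nx = x | PGT_locked;
        } while ( cmpxchg(&page->type_info, x, nx) != x );
    }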
Jan Beulich [Tue, 20 Oct 2020 06:54:59 +0000 (08:54 +0200)]
SVM: avoid VMSAVE in ctxt-switch-to
Of the state saved by the insn and reloaded by the corresponding VMLOAD
- TR and syscall state are invariant while having Xen's state loaded,
- sysenter is unused altogether by Xen,
- FS, GS, and LDTR are not used by Xen and get suitably set in PV
context switch code.
Note that state is suitably populated in _svm_cpu_up(); a minimal
respective assertion gets added.
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Igor Druzhinin [Tue, 20 Oct 2020 06:54:23 +0000 (08:54 +0200)]
hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
ACPI specification contains statements describing memory marked with regular
"ACPI data" type as reclaimable by the guest. Although the guest shouldn't
really do it if it wants kexec or similar functionality to work, there
could still be ambiguities in treating these regions as potentially regular
RAM.
One such example is SeaBIOS, which currently reports "ACPI data" regions as
RAM to the guest in its e801 call. It might have the right to do so, as any
user of this call is expected to be ACPI-unaware. But a QEMU bootloader later seems
to ignore that fact and is instead using e801 to find a place for initrd which
causes the tables to be erased. While arguably QEMU bootloader or SeaBIOS need
to be fixed / improved here, that is just one example of the potential problems
from using a reclaimable memory type.
Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is
described by the spec as non-reclaimable (so cannot ever be treated like RAM).
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 20 Oct 2020 06:53:53 +0000 (08:53 +0200)]
xen-detect: make CPUID fallback CPUID-faulting aware
Relying on presence / absence of hypervisor leaves in raw / escaped
CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
capable hardware. Utilize a PV-only feature flag to avoid false positive
HVM detection.
While at it also short circuit the main detection loop: For PV, only
the base group of leaves can possibly hold hypervisor information.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Tue, 20 Oct 2020 06:52:53 +0000 (08:52 +0200)]
EFI: free unused boot mem in at least some cases
Address at least the primary reason why 52bba67f8b87 ("efi/boot: Don't
free ebmalloc area at all") was put in place: Make xen_in_range() aware
of the freed range. This is in particular relevant for EFI-enabled
builds not actually running on EFI, as the entire range will be unused
in this case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Bertrand Marquis [Thu, 15 Oct 2020 09:16:09 +0000 (10:16 +0100)]
tools/xenpmd: Fix gcc10 snprintf warning
Add a check for snprintf return code and ignore the entry if we get an
error. This should in fact never happen and is more a trick to make gcc
happy and prevent compilation errors.
This is solving the following gcc warning when compiling for arm32 host
platforms with optimization activated:
xenpmd.c:92:37: error: '%s' directive output may be truncated writing
between 4 and 2147483645 bytes into a region of size 271
[-Werror=format-truncation=]
This is also solving the following Debian bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
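The check amounts to treating both an encoding error and (theoretically impossible) truncation as "skip this entry"; a sketch with illustrative buffer and format names:

    int ret = snprintf(file_name, sizeof(file_name), "%s/%s",
                       battery_dir, dir_entry->d_name);

    /* Negative return: encoding error; >= buffer size: output would
     * have been truncated.  Ignore the entry either way. */
    if (ret < 0 || ret >= (int)sizeof(file_name))
        continue;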
Elliott Mitchell [Mon, 12 Oct 2020 01:11:39 +0000 (18:11 -0700)]
tools/python: Pass linker to Python build process
Unexpectedly the environment variable which needs to be passed is
$LDSHARED and not $LD. Otherwise Python may find the build `ld` instead
of the host `ld`.
Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
it can load at runtime, not executables.
This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
to $LDFLAGS which breaks many linkers.
Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com> Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
tools/libs/stat: use memcpy instead of strncpy in getBridge
Use memcpy in getBridge to prevent gcc warnings about truncated
strings. We know that we might truncate it, so the gcc warning
here is wrong.
Revert the previous change modifying buffer sizes, as bigger buffers
are not needed.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
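A sketch of the intent, with illustrative names: copy exactly the number of bytes that fit and terminate explicitly, so the known-acceptable truncation no longer looks like a strncpy() misuse to the compiler:

    size_t len = strlen(de->d_name);

    if (len >= resultLen)
        len = resultLen - 1;        /* intentional truncation */
    memcpy(result, de->d_name, len);
    result[len] = '\0';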
Andrew Cooper [Mon, 5 Oct 2020 11:46:30 +0000 (12:46 +0100)]
x86/smpboot: Don't unconditionally call memguard_guard_stack() in cpu_smpboot_alloc()
cpu_smpboot_alloc() is designed to be idempotent with respect to partially
initialised state. This occurs for S3 and CPU parking, where enough state to
handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
when we otherwise want to offline the CPU.
For simplicity between various configurations, Xen always uses shadow stack
mappings (Read-only + Dirty) for the guard page, irrespective of whether
CET-SS is enabled.
Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
by first writing out the supervisor shadow stack tokens with plain writes,
then changing the mapping to being read-only.
This ordering is strictly necessary to configure the BSP, which cannot have
the supervisor tokens be written with WRSS.
Instead of calling memguard_guard_stack() unconditionally, call it only when
actually allocating a new stack. Xenheap allocations are guaranteed to be
writeable, and the net result is idempotency WRT configuring stack_base[].
Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks") Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 12 Oct 2020 13:58:45 +0000 (14:58 +0100)]
x86/ucode/intel: Improve description for gathering the microcode revision
Obtaining the microcode revision on Intel CPUs is complicated for backwards
compatibility reasons. Update apply_microcode() to use a slightly more
efficient CPUID invocation, now that the documentation has been updated to
confirm that any CPUID instruction is fine, not just CPUID.1.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 12 Oct 2020 12:24:31 +0000 (13:24 +0100)]
x86/traps: 'Fix' safety of read_registers() in #DF path
All interrupts and exceptions pass a struct cpu_user_regs up into C. This
contains the legacy vm86 fields from 32bit days, which are beyond the
hardware-pushed frame.
Accessing these fields is generally illegal, as they are logically out of
bounds for anything other than an interrupt/exception hitting ring1/3 code.
show_registers() unconditionally reads these fields, but the content is
discarded before use. This is benign right now, as all parts of the stack are
readable, including the guard pages.
However, read_registers() in the #DF handler writes to these fields as part of
preparing the state dump, and being IST, hits the adjacent stack frame.
This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
layout to support shadow stacks" repositioned the #DF stack to be adjacent to
the guard page, which turns this OoB write into a fatal pagefault:
(XEN) *** DOUBLE FAULT ***
(XEN) ----[ Xen-4.15-unstable x86_64 debug=y Tainted: C ]----
(XEN) ----[ Xen-4.15-unstable x86_64 debug=y Tainted: C ]----
(XEN) CPU: 4
(XEN) RIP: e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
(XEN) RFLAGS: 0000000000050086 CONTEXT: hypervisor (d1v0)
...
(XEN) Xen call trace:
(XEN) [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
(XEN) [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
(XEN) [<ffff82d04039acd7>] F double_fault+0x107/0x110
(XEN)
(XEN) Pagetable walk from ffff830236f6d008:
(XEN) L4[0x106] = 80000000bfa9b063 ffffffffffffffff
(XEN) L3[0x008] = 0000000236ffd063 ffffffffffffffff
(XEN) L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
(XEN) L1[0x16d] = 8000000236f6d161 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 4:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0003]
(XEN) Faulting linear address: ffff830236f6d008
(XEN) ****************************************
(XEN)
and rendering the main #DF analysis broken.
The proper fix is to delete cpu_user_regs.es and later, so no
interrupt/exception path can access OoB, but this needs disentangling from the
PV ABI first.
Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Thu, 15 Oct 2020 15:18:25 +0000 (17:18 +0200)]
tools/gdbsx: drop stray recursion into tools/include/
Doing so isn't appropriate here - this gets done very early in the build
process. If the directory is meant to be buildable on its own,
different arrangements would be needed.
The issue has become more pronounced by 47654a0d7320 ("tools/include:
fix (drop) dependencies of when to populate xen/"), but was there before
afaict.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
Jan Beulich [Thu, 15 Oct 2020 10:30:01 +0000 (12:30 +0200)]
EFI: further "need_to_free" adjustments
When processing "chain" directives, the previously loaded config file
gets freed. This needs to be recorded accordingly such that no error
path would try to free the same block of memory a 2nd time.
Furthermore, neither .addr nor .size being zero has any meaning towards
the need to free an allocated chunk anymore. Drop (from read_file()) and
replace (in Arm's efi_arch_use_config_file(), to sensibly retain the
comment) respective assignments.
Chen Yu [Thu, 15 Oct 2020 10:29:11 +0000 (12:29 +0200)]
x86/mwait-idle: customize IceLake server support
On the ICX platform, C1E auto-promotion is enabled by default.
As a result, the CPU might fall into C1E more often than on previous
platforms. So disable C1E auto-promotion and expose C1E as a separate
idle state.
Besides C1 and C1E, the exit latency of C6 was measured
by a dedicated tool. However, the exit latency (41us) exposed
by _CST is much smaller than the one we measured (128us). This
is probably because _CST uses the exit latency when woken
up from PC0+C6, rather than from PC6+C6, which is what was measured. Choose
the latter as we need the longest latency in theory.
Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[Linux commit a472ad2bcea479ba068880125d7273fc95c14b70] Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Michal Orzel [Wed, 14 Oct 2020 10:05:41 +0000 (12:05 +0200)]
xen/arm: Document the erratum #853709 related to Cortex A72
The Cortex-A72 erratum #853709 is the same as the Cortex-A57
erratum #852523. As the latter already has a workaround, we only
need to update the documentation.
Signed-off-by: Michal Orzel <michal.orzel@arm.com>
[julieng: Reworded the commit message] Reviewed-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Jan Beulich [Wed, 14 Oct 2020 12:13:16 +0000 (14:13 +0200)]
EFI/Arm64: don't clobber DTB pointer
read_section() needs to be more careful: efi_arch_use_config_file()
may have found a DTB file (but without modules), and there may be no DTB
specified in the EFI config file. In this case the pointer to the blob
must not be overwritten with NULL when no ".dtb" section is present
either.
Fixes: 8a71d50ed40b ("efi: Enable booting unified hypervisor/kernel/initrd images") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:11:49 +0000 (14:11 +0200)]
kexec: some #include adjustments
In the context of working on x86's elf_core_save_regs() I noticed there
were far more source files getting rebuilt than I would have expected.
While the main offender looks to have been fixmap.h including kexec.h,
also drop use of elfcore.h from kexec.h.
While adjusting machine_kexec.c also replace use of guest_access.h by
domain_page.h.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Note that this is done on the generic MSR handler, and PV guest will
also get 0 back when trying to read the MSR. There doesn't seem to be
much value in handling the MSR for HVM guests only.
Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 14 Oct 2020 12:05:10 +0000 (14:05 +0200)]
x86/vLAPIC: vlapic_init() runs only once for a vCPU
Hence there's no need to guard allocation / mapping by checks whether
the same action has been done before. I assume this was a transient
change which should have been undone before 509529e99148 ("x86 hvm: Xen
interface and implementation for virtual S3") got committed.
While touching this code, drop the pretty useless dprintk()-s.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:03:38 +0000 (14:03 +0200)]
x86: fix resource leaks on arch_vcpu_create() error path
{hvm,pv}_vcpu_initialise() have always kind of been meant to be the
final possible source of errors in arch_vcpu_create(), hence not
requiring any unrolling of what they've done on the error path. (Of
course this may change once the various involved paths all have become
idempotent.)
But even beyond this aspect I think it is more logical to do policy
initialization ahead of the calling of these two functions, as they may
in principle want to access it.
Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 14 Oct 2020 12:02:20 +0000 (14:02 +0200)]
tools/include: adjust x86-specific population of xen/
There's no need to use a shell loop construct here - ln's destination
can be specified as just the intended directory, as we don't mean to
change the names of any of the files. Also drop XEN_LIB_X86_INCLUDES for
having a single use only, and don't pass -f to ln, to allow noticing
name collisions right away.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:43 +0000 (14:01 +0200)]
tools/include: adjust population of public headers into xen/
Use a wildcard also for the subdirectories, drop XEN_PUBLIC_INCLUDES for
having a single use only, don't pass -f to ln so that name collisions are
noticed right away, and add trailing slashes to ln's destination.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:25 +0000 (14:01 +0200)]
tools/include: fix (drop) dependencies of when to populate xen/
Making the population of xen/ depend on the time stamps of a subset of
the headers put there is error prone. The creation of a few dozen
symlinks doesn't take a meaningful amount of time (compared to the
overall building of tools/), and hence - to be on the safe side - should
simply be done always. Convert the goal to a phony one and drop its
dependencies, effectively taking further what 8d8d7d6b3dc1 ("tools: fix
linking hypervisor includes to tools include directory") had already
attempted.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 14 Oct 2020 12:01:00 +0000 (14:01 +0200)]
tools/include: adjust population of acpi/
Limit what gets exposed - in particular cpufreq/ and apei.h are
hypervisor private headers which the tool stack building shouldn't see
or use. Also don't pass -f to ln, to allow noticing name collisions
right away.
Additionally acpi/ also has been in need of deleting at the start of
the rule, or alternatively the respective ln would have needed to also
be passed -n.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Wei Liu <wl@xen.org>
Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
This removes the dependency on the xen subdirectory, preventing use of a
wrong configuration file when the xen subdirectory is duplicated for
compilation tests.
BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
called from the tools build and install process and BASEDIR is not set
there.
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
There is a standard format for generated Go code header comments, as set
by [1]. Modify gengotypes.py to follow this standard, and use the
additional
// source: <IDL file basename>
convention used by protoc-gen-go.
This change is motivated by the fact that since 41aea82de2, the comment
would include the absolute path to libxl_types.idl, therefore creating
unintended diffs when generating code across different machines. This
approach fixes that problem.
[1] https://github.com/golang/go/issues/13560
Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Nick Rosbrook [Sun, 11 Oct 2020 23:31:24 +0000 (19:31 -0400)]
golang/xenlight: do not hard code libxl dir in gengotypes.py
Currently, in order to 'import idl' in gengotypes.py, we derive the path
of the libxl source directory from the XEN_ROOT environment variable, and
append that to sys.path so python can see idl.py. Since the the recent move of
libxl to tools/libs/light, this hard coding breaks the build.
Instead, check for the environment variable LIBXL_SRC_DIR, but move this
check to a try-except block (with empty except). This simply makes the
real error more visible, and does not strictly require that
LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
rather than XEN_ROOT when calling gengotypes.py.
Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Jason Andryuk [Thu, 1 Oct 2020 23:53:37 +0000 (19:53 -0400)]
libxl: only query VNC when enabled
QEMU without VNC support (configure --disable-vnc) will return an error
when VNC is queried over QMP since it does not recognize the QMP
command. This will cause libxl to fail starting the domain even if VNC
is not enabled. Therefore only query QEMU for VNC support when using
VNC, so a VNC-less QEMU will function in this configuration.
'goto out' jumps to the call to device_model_postconfig_done(), the
final callback after the chain of vnc queries. This bypasses all the
QMP VNC queries.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Fri, 2 Oct 2020 10:30:34 +0000 (12:30 +0200)]
x86/vLAPIC: don't leak regs page from vlapic_init() upon error
Fixes: 8a981e0bf25e ("Make map_domain_page_global fail") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
configuration file, the Linux kernel and initrd, as well as the XSM,
and architectural specific files into a single "unified" EFI executable.
This allows an administrator to update the components independently
without needing to rebuild xen, as well as to replace the components
in an existing image.
The resulting EFI executable can be invoked directly from the UEFI Boot
Manager, removing the need to use a separate loader like grub as well
as removing dependencies on local filesystem access. And since it is
a single file, it can be signed and validated by UEFI Secure Boot without
requiring the shim protocol.
It is inspired by systemd-boot's unified kernel technique and borrows the
function to locate PE sections from systemd's LGPL'ed code. During EFI
boot, Xen looks at its own loaded image to locate the PE sections for
the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
(`.ramdisk`), and XSM config (`.xsm`), which are included after building
xen.efi using objcopy to add named sections for each input file.
For x86, the CPU ucode can be included in a section named `.ucode`,
which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
On ARM systems the Device Tree can be included in a section named
`.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
the boot process.
Note that the system will fall back to loading files from disk if
the named sections do not exist. This allows distributions to continue
with the status quo if they want a signed kernel + config, while still
allowing a user provided initrd (which is how the shim protocol currently
works as well).
This patch also adds constness to the section parameter of
efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
changes pe_find_section() to use a const CHAR16 section name,
and adds pe_name_compare() to match section names.
Signed-off-by: Trammell Hudson <hudson@trmm.net> Reviewed-by: Jan Beulich <jbeulich@suse.com>
[Fix ARM build by including pe.init.o] Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Trammell Hudson [Fri, 2 Oct 2020 11:18:19 +0000 (07:18 -0400)]
efi/boot.c: add file.need_to_free
The config file, kernel, initrd, etc should only be freed if they
are allocated with the UEFI allocator. On x86 the ucode, and on
ARM the dtb, are also marked as need_to_free when allocated or
expanded.
This also fixes a memory leak in ARM fdt_increase_size() if there
is an error in building the new device tree.
Signed-off-by: Trammell Hudson <hudson@trmm.net> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 1 Apr 2020 14:51:08 +0000 (15:51 +0100)]
x86/ucode: Trivial further cleanup
* Drop unused include in private.h.
* Use explicit-width integers for Intel header fields.
* Adjust comment to better describe the extended header.
* Drop unnecessary __packed attribute for AMD header.
* Fix types and style.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Andrew Cooper [Fri, 2 Oct 2020 11:20:44 +0000 (12:20 +0100)]
x86/hvm: Correct error message in check_segment()
The error message is wrong (given AMD's older interpretation of what a NUL
segment should contain, attribute wise), and actively unhelpful because you
only get it in response to a hypercall where the one piece of information you
cannot provide is the segment selector.
Fix the message to talk about segment attributes, rather than the selector.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>