Paul Durrant [Tue, 4 Aug 2020 08:59:03 +0000 (09:59 +0100)]
vtd: use a bit field for dma_pte
As with a prior patch for context_entry, this removes the need for much
shifting, masking and several magic numbers.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>
v10:
- Remove macros in favour of direct field access
- Adjust field types
- Use write_atomic() to update the live PTE
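In outline, the pattern is roughly the following (an abbreviated,
illustrative sketch only; the real VT-d layout has more fields, and
write_atomic() is Xen's primitive for the single store):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: abbreviated PTE layout, not the full VT-d definition. */
    typedef union {
        uint64_t val;             /* raw view, written in one insn */
        struct {
            uint64_t r:1;         /* read permission */
            uint64_t w:1;         /* write permission */
            uint64_t reserved:10;
            uint64_t addr:40;     /* page frame number */
            uint64_t ignored:12;
        };
    } dma_pte_t;                  /* hypothetical name */

    static void set_pte(volatile uint64_t *live, uint64_t pfn,
                        bool writable)
    {
        dma_pte_t new = { .val = 0 };

        new.r = 1;
        new.w = writable;
        new.addr = pfn;

        /* Stand-in for write_atomic(): one store, no torn update. */
        *live = new.val;
    }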
Paul Durrant [Mon, 3 Aug 2020 16:00:49 +0000 (17:00 +0100)]
vtd: use a bit field for context_entry
This removes the need for much shifting, masking and several magic numbers.
On the whole it makes the code quite a bit more readable.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>
v10:
- Remove macros in favour of direct field access
- Adjust field types
- Add missing barriers
Paul Durrant [Mon, 3 Aug 2020 15:30:04 +0000 (16:30 +0100)]
vtd: use a bit field for root_entry
This makes the code a little easier to read and also makes it more consistent
with iremap_entry.
Also take the opportunity to tidy up the implementation of device_in_domain().
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Kevin Tian <kevin.tian@intel.com>
v10:
- Small tweaks requested by Jan
- Remove macros in favour of direct field access
- Add missing barrier
It's confusing and not consistent with the terminology introduced with 'dfn_t'.
Just call them IOMMU page tables.
Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.
NOTE: All calls to printk() have also been removed from
iommu_dump_page_tables(); the implementation specific code is now
responsible for all output.
The check for the global 'iommu_enabled' has also been replaced by an
ASSERT since iommu_dump_page_tables() is not registered as a key handler
unless IOMMU mappings are enabled.
Error messages are now prefixed with the name of the function.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
v6:
- Cosmetic adjustment
- Drop use of __func__
v5:
- Make sure domain id is in the output
- Use VTDPREFIX in output for consistency
v2:
- Moved all output into implementation specific code
Paul Durrant [Thu, 16 Jul 2020 15:22:41 +0000 (16:22 +0100)]
iommu: remove the share_p2m operation
Sharing of HAP tables is now VT-d specific so the operation is never defined
for AMD IOMMU any more. There's also no need to pro-actively set vtd.pgd_maddr
when using shared EPT as it is straightforward to simply define a helper
function to return the appropriate value in the shared and non-shared cases.
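The shape of such a helper, sketched with assumed names (not quoted
from the actual patch):

    /* Sketch: return the IOMMU page directory machine address. */
    static uint64_t domain_pgd_maddr(struct domain *d,
                                     unsigned int nr_pt_levels)
    {
        const struct domain_iommu *hd = dom_iommu(d);

        if ( iommu_use_hap_pt(d) )
            /* Shared EPT: the P2M root doubles as the IOMMU table. */
            return pagetable_get_paddr(
                       p2m_get_pagetable(p2m_get_hostp2m(d)));

        /* Non-shared case: the IOMMU-private page directory, adjusted
         * (adjustment omitted here) per nr_pt_levels. */
        return hd->arch.vtd.pgd_maddr;
    }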
NOTE: This patch also modifies unmap_vtd_domain_page() to take a const
pointer since the only thing it calls, unmap_domain_page(), also takes
a const pointer.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v6:
- Adjust code to return P2M paddr
- Add removed comment back in
v5:
- Pass 'nr_pt_levels' into domain_pgd_maddr() directly
v2:
- Put the PGD level adjust into the helper function too, since it is
irrelevant in the shared EPT case
Paul Durrant [Mon, 3 Aug 2020 09:39:24 +0000 (10:39 +0100)]
common/grant_table: batch flush I/O TLB
This patch avoids calling iommu_iotlb_flush() for each individual GNTTABOP and
instead calls iommu_iotlb_flush_all() at the end of a batch. This should mean
non-singleton map/unmap operations perform better.
NOTE: A batch is the number of operations done before a pre-emption check and,
in the case of unmap, a TLB flush.
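Roughly, with a hypothetical per-op helper standing in for the real
code:

    /* Sketch: accumulate flush flags across a batch, flush once. */
    static void unmap_batch(struct domain *d,
                            struct gnttab_unmap_grant_ref *ops,
                            unsigned int count)
    {
        unsigned int i, flush_flags = 0;

        for ( i = 0; i < count; i++ )
            flush_flags |= unmap_one(d, &ops[i]); /* hypothetical
                                                     helper; no per-op
                                                     flush inside */

        if ( flush_flags )              /* single batched flush */
            iommu_iotlb_flush_all(d, flush_flags);
    }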
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <julien@xen.org>
Reviewed-by: Wei Liu <wl@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
v6:
- Fix spelling of 'preemption'
- Drop unneeded 'currd' stack variable
v5:
- Add batching to gnttab_map_grant_ref() to handle flushing before the
  preemption check
- Maintain per-op flushing in the case of singletons
Paul Durrant [Fri, 17 Jul 2020 14:45:13 +0000 (15:45 +0100)]
remove remaining uses of iommu_legacy_map/unmap
The 'legacy' functions do implicit flushing so amend the callers to do the
appropriate flushing.
Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. Checking of this flag is now done only in relevant callers of
iommu_iotlb_flush(). Also, 'iommu_dont_flush_iotlb' is now declared
as bool (rather than bool_t) and setting/clearing it are no longer pointlessly
gated on is_iommu_enabled() returning true. (Arguably it is also pointless to
gate the call to iommu_iotlb_flush() on that condition - since it is a no-op
in that case - but the if clause allows the scope of a stack variable to be
restricted).
NOTE: The code in memory_add() now sets 'ret' if iommu_map() or
iommu_iotlb_flush() fails.
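The resulting caller pattern looks roughly like this (signatures
approximated from the description above):

    unsigned int flush_flags = 0;
    int rc = iommu_map(d, dfn, mfn, 1 /* page count */,
                       IOMMUF_readable | IOMMUF_writable, &flush_flags);

    /* The flush the 'legacy' wrapper did implicitly is now explicit,
     * and the optimization flag is checked in the caller: */
    if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
        rc = iommu_iotlb_flush(d, dfn, 1, flush_flags);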
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <jgrall@amazon.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
v10:
- Re-base
v9:
- Moved check of 'iommu_dont_flush_iotlb' out of iommu_iotlb_flush() and
into callers to avoid re-introducing a problem on Arm
- Dropped Jan's R-b due to change
v6:
- Fix formatting problem in memory_add()
v5:
- Re-base
- Removed failure case on overflow of unsigned int as it is no longer
necessary
v3:
- Same as v2; elected to implement batch flushing in the grant table code as
a subsequent patch
v2:
- Shorten the diff (mainly because of a prior patch introducing automatic
flush-on-fail into iommu_map() and iommu_unmap())
Jan Beulich [Fri, 20 Nov 2020 07:28:58 +0000 (08:28 +0100)]
AMD/IOMMU: avoid UB in guest CR3 retrieval
Found by looking for patterns similar to the one Julien did spot in
pci_vtd_quirks(). (Not that it matters much here, considering the code
is dead right now.)
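The class of bug, in miniature (a standalone example, not the Xen code
itself):

    #include <stdint.h>

    uint64_t combine(uint32_t hi, uint32_t lo)
    {
        /* UB: "hi << 32" shifts a 32-bit value by its full width
         * (and through the sign bit of the promoted int):
         *     return (hi << 32) | lo;
         * Widening to the destination type first is well-defined: */
        return ((uint64_t)hi << 32) | lo;
    }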
Fixes: 3a7947b69011 ("amd-iommu: use a bitfield for DTE")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 20 Nov 2020 07:25:17 +0000 (08:25 +0100)]
lib: split _ctype[] into its own object, under lib/
This is, besides for tidying, in preparation of then starting to use an
archive rather than an object file for generic library code which
arch-es (or even specific configurations within a single arch) may or
may not need.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Julien Grall [Thu, 19 Nov 2020 17:08:27 +0000 (17:08 +0000)]
xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI
gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.
While they build fine today, this will change in a follow-up patch.
Rather than trying to fix the build on ACPI, it is best to avoid
compiling the helpers and the associated callbacks when CONFIG_ACPI=n.
On CentOS 8 with SELinux, containerize doesn't work at all:
Make sure that the source code and SSH agent directories are passed on
with SELinux relabeling enabled.
(`--security-opt label=disable` would be another option)
Signed-off-by: Edwin Török <edvin.torok@citrix.com>
Acked-by: Doug Goldstein <cardoe@cardoe.com>
Michal Orzel [Mon, 16 Nov 2020 12:11:40 +0000 (13:11 +0100)]
xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807
On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
if a virtual address for a cacheable mapping of a location is being
accessed by a core while another core is remapping the virtual
address to a new physical page using the recommended break-before-make
sequence, then under very rare circumstances TLBI+DSB completes before
a read using the translation being invalidated has been observed by
other observers. The workaround repeats the TLBI+DSB operation for all
the TLB flush operations. While this is strictly not necessary, we don't
want to take any risk.
Juergen Gross [Wed, 18 Nov 2020 11:38:29 +0000 (12:38 +0100)]
xen/x86: add nmi continuation framework
Actions in NMI context are rather limited as e.g. locking is rather
fragile.
Add a framework to continue processing in normal interrupt context
after leaving NMI processing.
This is done by a high priority interrupt vector triggered via a
self IPI from NMI context, which will then call the continuation
function specified during NMI handling.
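In outline (simplified; the real framework keeps per-CPU state, and
the vector name below is assumed):

    static void (*nmi_cont_fn)(void *);
    static void *nmi_cont_arg;

    /* Called from NMI context: just record the work and self-IPI. */
    bool set_nmi_continuation(void (*fn)(void *), void *arg)
    {
        nmi_cont_fn = fn;
        nmi_cont_arg = arg;
        send_IPI_self(NMI_CONT_VECTOR);   /* assumed vector name */
        return true;
    }

    /* Handler of that high-priority vector, in normal IRQ context: */
    void nmi_cont_interrupt(void)
    {
        if ( nmi_cont_fn )
            nmi_cont_fn(nmi_cont_arg);    /* locking is safe again */
    }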
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 18 Nov 2020 11:38:01 +0000 (12:38 +0100)]
x86/vpt: fix build with old gcc
I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
migrating timers between vCPUs") which has unmasked a bogus
uninitialized variable warning. This is observable with gcc 4.3.4, but
only on 4.13 and older; it's hidden on newer versions apparently due to
the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
preempt_disable() when taking a rwlock").
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:37:24 +0000 (12:37 +0100)]
x86/p2m: split write_p2m_entry() hook
Fair parts of the present handlers are identical; in fact
nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
common parts right into write_p2m_entry(), splitting the hooks into a
"pre" one (needed just by shadow code) and a "post" one.
For the common parts moved I think that the p2m_flush_nestedp2m() is,
at least from an abstract perspective, also applicable in the shadow
case. Hence it doesn't get a 3rd hook put in place.
The initial comment that was in hap_write_p2m_entry() gets dropped: Its
placement was bogus, and looking back at the commit introducing it
(dd6de3ab9985 "Implement Nested-on-Nested") I can't see either what use
of a p2m it was meant to be associated with.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:34:54 +0000 (12:34 +0100)]
x86/HAP: move nested-P2M flush calculations out of locked region
By latching the old MFN into a local variable, these calculations don't
depend on anything but local variables anymore. Hence the point in time
when they get performed doesn't matter anymore, so they can be moved
past the locked region.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:33:18 +0000 (12:33 +0100)]
x86/p2m: collapse the two ->write_p2m_entry() hooks
The struct paging_mode instances get set to the same functions
regardless of mode by both HAP and shadow code, hence there's no point
having this hook there. The hook also doesn't need moving elsewhere - we
can directly use struct p2m_domain's. This merely requires (from a
strictly formal pov; in practice this may not even be needed) making
sure we don't end up using safe_write_pte() for nested P2Ms.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Penny Zheng [Mon, 9 Nov 2020 08:21:10 +0000 (16:21 +0800)]
xen/arm: Add Cortex-A73 erratum 858921 workaround
A CNTVCT_EL0 or CNTPCT_EL0 counter read on Cortex-A73 (all versions)
might return a wrong value when the counter crosses a 32-bit boundary.
So far there is no case where Xen itself accesses CNTVCT_EL0, and it
should also be the guest OS's responsibility to deal with this part.
But for CNTPCT there are several cases in Xen involving reading it, so
a possible workaround is to perform the read twice and to return one
or the other depending on whether a transition has taken place.
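Sketched, this mirrors the well-known mitigation also used by Linux
(READ_SYSREG64() being the Arm sysreg accessor):

    static inline uint64_t read_cntpct_stable(void)
    {
        uint64_t old = READ_SYSREG64(CNTPCT_EL0);
        uint64_t new = READ_SYSREG64(CNTPCT_EL0);

        /* If bit 32 flipped between the reads, a rollover happened;
         * "new" is then past the boundary and safe to return. */
        return (((old ^ new) >> 32) & 1) ? new : old;
    }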
Jan Beulich [Wed, 11 Nov 2020 07:57:32 +0000 (08:57 +0100)]
x86/p2m: paging_write_p2m_entry() is a private function
As it gets installed by p2m_pt_init(), it doesn't need to live in
paging.c. The function working in terms of l1_pgentry_t even further
indicates its non-paging-generic nature. Move it and drop its
paging_ prefix, not adding any new one now that it's static.
This then also makes more obvious that in the EPT case we wouldn't
risk mistakenly calling through the NULL hook pointer.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Julien Grall [Mon, 9 Nov 2020 20:28:59 +0000 (20:28 +0000)]
xen/arm: Always trap AMU system registers
The Activity Monitors Unit (AMU) has been introduced by ARMv8.4. It is
considered unsafe to expose to guests, as it might leak information
about code executed by other guests or the host.
Arm provided a way to trap all the AMU system registers by setting
CPTR_EL2.TAM to 1.
Unfortunately, in older revisions of the specification bit 30 (now
CPTR_EL2.TAM) was RES0. Because of that, Xen sets it to 0, and the
system registers would therefore be exposed to the guest when it is
run on processors with AMU.
As the bit is marked UNKNOWN at boot in Armv8.4, the only safe solution
for us is to always set CPTR_EL2.TAM to 1.
A guest trying to access the AMU system registers will now receive an
undefined instruction exception. Unfortunately, this means that even a
well-behaved guest may fail to boot because we don't sanitize the ID
registers.
This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
Auth). It will be taken care of separately.
Jan Beulich [Tue, 10 Nov 2020 13:39:03 +0000 (14:39 +0100)]
x86/CPUID: don't use UB shift when library is built as 32-bit
At least the insn emulator test harness will continue to be buildable
(and ought to continue to be usable) also as a 32-bit binary. (Right now
the CPU policy test harness is, too, but there it may be less relevant
to keep it functional, just like e.g. we don't support fuzzing the insn
emulator in 32-bit mode.) Hence the library code needs to cope with
this.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
With the event channel lock no longer disabling interrupts, commit
52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
be reverted again.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Juergen Gross [Tue, 10 Nov 2020 13:36:15 +0000 (14:36 +0100)]
xen/evtchn: rework per event channel lock
Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.
Rework the per event channel lock to be non-blocking for the case of
sending an event, removing the need to disable interrupts when taking
the lock.
The lock is needed for avoiding races between event channel state
changes (creation, closing, binding) against normal operations (set
pending, [un]masking, priority changes).
Use a rwlock, but with some restrictions:
- Changing the state of an event channel (creation, closing, binding)
  needs to use write_lock(), with an ASSERT() that the lock is only
  taken as writer when the event channel's state either before or after
  the locked region is appropriate (i.e. free or unbound).
- Sending an event mostly needs to use read_trylock(); if the lock
  cannot be obtained the operation is omitted. This is needed as
  sending an event can happen with interrupts off (at least in some
  cases).
- Dumping the event channel state for debug purposes uses
  read_trylock(), too, in order to avoid blocking in case the lock is
  taken as writer for a long time.
- All other cases can use read_lock().
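The send side then looks roughly like this (a sketch;
evtchn_port_set_pending() is the existing helper, chn->lock the
reworked rwlock):

    static void send_event(struct domain *d, struct evtchn *chn)
    {
        /* May run with interrupts off: must never block here. */
        if ( !read_trylock(&chn->lock) )
            return;               /* writer active: drop the event */

        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
        read_unlock(&chn->lock);
    }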
Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Roger Pau Monné [Tue, 6 Oct 2020 16:23:27 +0000 (18:23 +0200)]
x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Currently a PV hardware domain can also be given control over the CPU
frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
However since commit 322ec7c89f6 the default behavior has been changed
to reject accesses to not explicitly handled MSRs, preventing PV
guests that manage CPU frequency from reading
MSR_IA32_PERF_{STATUS/CTL}.
Additionally some HVM guests (Windows at least) will attempt to read
MSR_IA32_PERF_CTL and will panic if given back a #GP fault.
Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
handling shared between HVM and PV guests, and add an explicit case
for reads to MSR_IA32_PERF_{STATUS/CTL}.
Restore previous behavior and allow PV guests with the required
permissions to read the contents of the mentioned MSRs. Non-privileged
guests will get 0 when trying to read those registers, as writes to
MSR_IA32_PERF_CTL by such guests are already silently dropped.
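The read side then becomes, roughly, this fragment of the common MSR
switch (the privilege check is paraphrased from the description above):

    case MSR_IA32_PERF_STATUS:
    case MSR_IA32_PERF_CTL:
        *val = 0;                     /* unprivileged guests read 0 */
        if ( likely(!is_cpufreq_controller(d)) )
            break;
        rdmsrl(msr, *val);            /* privileged PV: pass through */
        break;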
Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jason Andryuk [Thu, 29 Oct 2020 19:03:32 +0000 (15:03 -0400)]
libxl: Add suppress-vmdesc to QEMU machine
The device model state saved by QMP xen-save-devices-state doesn't
include the vmdesc json. When restoring an HVM, xen-load-devices-state
always triggers "Expected vmdescription section, but got 0". This is
not a problem when restore comes from a file. However, when QEMU runs
in a linux stubdom and comes over a console, EOF is not received. This
causes a delay in restoring - though it does restore.
Setting suppress-vmdesc skips looking for the vmdesc during restore and
avoids the wait.
QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
sets it manually for xenfv, and for xen_platform_pci=0 when -machine pc
is used.
QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
submission" added suppress-vmdesc in QEMU 2.3.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Setting vuart_gfn was missed when switching ARM guests to the PVH build.
Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
dom->vuart_gfn.
Without this change, xl console cannot connect to the vuart console (-t
vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Juergen Gross [Fri, 6 Nov 2020 09:47:09 +0000 (10:47 +0100)]
xen/locking: harmonize spinlocks and rwlocks regarding preemption
Spinlocks and rwlocks behave differently in the try variants regarding
preemption: rwlocks are switching preemption off before testing the
lock, while spinlocks do so only after the first check.
Modify _spin_trylock() to disable preemption before testing the lock
to be held in order to be preemption-ready.
Jan Beulich [Thu, 5 Nov 2020 15:48:55 +0000 (16:48 +0100)]
libxl: fix libacpi dependency
$(DSDT_FILES-y) depends on the recursive make to have run in libacpi/
such that the file(s) itself/themselves were generated before
compilation gets attempted. The same, however, is also necessary for
generated headers, before source files including them would get
attempted to be compiled.
The dependency specified in libacpi's Makefile, otoh, is entirely
pointless nowadays - no compilation happens there anymore (except for
tools involved in building the generated files). Together with it, the
rule generating acpi.a also can go away.
Reported-by: Olaf Hering <olaf@aepfle.de>
Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Wed, 4 Nov 2020 08:26:42 +0000 (09:26 +0100)]
xen/spinlocks: spin_trylock with interrupts off is always fine
Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.
Add a bool parameter "try" to check_lock() for handling this case.
Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with wrong interrupt flags this will be caught when taking
the lock.
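Sketched (the union lock_debug bookkeeping is elided):

    /* Sketch of check_lock() with the new parameter. */
    static void check_lock(union lock_debug *debug, bool try)
    {
        /* A trylock can never block, so taking an IRQs-on lock with
         * IRQs off is fine; only the converse stays an error. */
        if ( try && !local_irq_is_enabled() )
            return;

        /* ... existing IRQ-safety consistency checking ... */
    }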
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Ian Jackson [Wed, 19 Aug 2020 17:31:45 +0000 (18:31 +0100)]
SUPPORT.md: Desupport qemu trad except stub dm
While investigating XSA-335 we discovered that many upstream security
fixes were missing. It is not practical to backport them. There is
no good reason to be running this very ancient version of qemu, except
that it is the only way to run a stub dm which is currently supported
by upstream.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
of the MADT. In the ACPI 5.1 version of the spec, the struct for the
GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
ACPI 6.0, the struct is 80 bytes long. But, there is only one definition
in ACPICA for this struct -- and that is the 6.0 version. Hence, when
BAD_MADT_ENTRY() compares the struct size to the length in the GICC
subtable, it fails if 5.1 structs are in use, and there are systems in
the wild that have them.
This patch adds the BAD_MADT_GICC_ENTRY() that checks the GICC subtable
only, accounting for the difference in specification versions that are
possible. The BAD_MADT_ENTRY() will continue to work as is for all other
MADT subtables.
This code is being added to an arm64 header file since that is currently
the only architecture using the GICC subtable of the MADT. As a GIC is
specific to ARM, it is also unlikely the subtable will be used elsewhere.
Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
[catalin.marinas@arm.com: extra brackets around macro arguments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
xen/arm: Check if the platform is not using ACPI before initializing Dom0less
Dom0less requires a device-tree. However, since commit 6e3e77120378
"xen/arm: setup: Relocate the Device-Tree later on in the boot", the
device-tree will not get unflattened when using ACPI.
This will lead to a crash during boot.
Given the complexity of setting up dom0less with ACPI (for instance,
how would devices be assigned?), we should skip any code related to
Dom0less when using ACPI.
xen/arm: acpi: The fixmap area should always be cleared during failure/unmap
Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()" enforced that each set_fixmap() should be
paired with a clear_fixmap(). Any failure to follow the model would
result in a platform crash.
Unfortunately, the use of fixmap in the ACPI code was overlooked as it
is calling set_fixmap() but not clear_fixmap().
The function __acpi_os_map_table() is reworked so:
- We know before the mapping whether the fixmap region is big
enough for the mapping.
- It will fail if the fixmap is already in use. This is not a
change of behavior but clarifying the current expectation to avoid
hitting a BUG().
The function __acpi_os_unmap_table() will now call clear_fixmap().
xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
Currently, the former are still containing x86 specific code.
To avoid this rather strange split, the generic helpers are reworked so
they are arch-agnostic. This requires the introduction of a new helper
__acpi_os_unmap_memory() that will undo any mapping done by
__acpi_os_map_memory().
Currently, the arch-helper for unmap is basically a no-op so it only
returns whether the mapping was arch specific. But this will change
in the future.
Note that the x86 version of acpi_os_map_memory() was already able to
map the 1MB region. Hence no new code needs to be added.
Jan Beulich [Fri, 30 Oct 2020 13:30:35 +0000 (14:30 +0100)]
x86: fix build of PV shim when !GRANT_TABLE
To do its compat translation, shim code needs to include the compat
header. For this to work, this header first of all needs to be
generated.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 30 Oct 2020 13:28:03 +0000 (14:28 +0100)]
x86/hvm: process softirq while saving/loading entries
On slow systems with sync_console saving or loading the context of big
guests can cause the watchdog to trigger. Fix this by adding a couple
of process_pending_softirqs() calls.
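I.e., roughly (an illustrative fragment, not the exact diff):

    for ( i = 0; i < nr_entries; i++ )   /* nr_entries: illustrative */
    {
        /* Give softirqs (and hence the watchdog) a chance per entry. */
        process_pending_softirqs();
        /* ... save or load one context entry ... */
    }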
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 30 Oct 2020 13:27:23 +0000 (14:27 +0100)]
x86/shadow: correct GFN use by sh_unshadow_for_p2m_change()
Luckily sh_remove_all_mappings()'s use of the parameter is limited to
generation of log messages. Nevertheless we'd better pass correct GFNs
around:
- the incoming GFN, when replacing a large page, may not be large page
aligned,
- incrementing by page-size-scaled values can't be right.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 30 Oct 2020 13:26:46 +0000 (14:26 +0100)]
x86/shadow: sh_{make,destroy}_monitor_table() are "even more" HVM-only
With them depending on just the number of shadow levels, there's no need
for more than one instance of them, and hence no need for any hook (IOW 452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
far enough). Move the functions to hvm.c while dropping the dead
is_pv_32bit_domain() code paths.
While moving the code, replace a stale comment reference to
sh_install_xen_entries_in_l4(). Doing so made me notice the function
also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
gets done here as well.
Also make their first parameters const.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Bertrand Marquis [Mon, 26 Oct 2020 16:21:33 +0000 (16:21 +0000)]
xen/arm: Warn user on cpu errata 832075
When a Cortex A57 processor is affected by CPU erratum 832075, a guest
not implementing the workaround for it could deadlock the system.
Add a warning during boot informing the user that only trusted guests
should be executed on the system.
An equivalent warning is already given to the user by KVM on cores
affected by this erratum.
Also taint the hypervisor as unsecure when this erratum applies, and
mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.
Andrew Cooper [Thu, 29 Oct 2020 12:03:43 +0000 (12:03 +0000)]
x86/pv: Drop stale comment in dom0_construct_pv()
This comment was introduced by c/s 22a857bde9b8 in 2003, and became stale with
c/s 99db02d50976 also in 2003. Both of these predate the introduction of
struct vcpu, when the 'processor' field moved objects.
17 years is long enough for this comment to be mis-informing people reading
the code.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 19 Oct 2020 14:51:22 +0000 (15:51 +0100)]
x86/pv: Flush TLB in response to paging structure changes
With MMU_UPDATE, a PV guest can make changes to higher level pagetables. This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush (if necessary) after making updates.
However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.
Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings. Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.
For all L2 and higher modifications, flush the TLB. PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global. (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
Express the necessary flushes as a set of booleans which accumulate across the
operation. Comment the flushing logic extensively.
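The accumulation looks roughly like this (flag and variable names
illustrative):

    bool flush_linear_pt = false;
    unsigned int i;

    for ( i = 0; i < count; i++ )    /* count: number of MMU_UPDATE ops */
    {
        /* ... apply one update; "level" is the affected PT level ... */
        if ( level >= 2 )            /* L2+ change: linear PTs go stale */
            flush_linear_pt = true;
    }

    if ( flush_linear_pt )
        flush_tlb_local();           /* plus remote flushes as needed */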
This is XSA-286.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Thu, 22 Oct 2020 10:28:58 +0000 (11:28 +0100)]
x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
to the L4 path in do_mmu_update().
However, this was unnecessary.
It is the guest's responsibility to perform appropriate TLB flushing if the L4
modification altered an established mapping in a flush-relevant way. In this
case, an MMUEXT_OP hypercall will follow. The case which Xen needs to cover
is when new mappings are created, and the resync on the exit-to-guest path
covers this correctly.
There is a corner case with multiple vCPUs in hypercalls at the same time,
which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
behaviour.
Architecturally, established TLB entries can continue to be used until the
broadcast flush has completed. Therefore, even with concurrent hypercalls,
the guest cannot depend on older mappings not being used until an MMUEXT_OP
hypercall completes. Xen's implementation of guest-initiated flushes will
take correct effect on top of an in-progress hypercall, picking up new
mappings set before the other vCPU's MMUEXT_OP completes.
Note: The correctness of this change is not impacted by whether XPTI uses
global mappings or not. Correctness there depends on the behaviour of Xen on
the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
This is (not really) XSA-286 (but necessary to simplify the logic).
Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Mon, 26 Oct 2020 13:38:35 +0000 (14:38 +0100)]
AMD/IOMMU: correct shattering of super pages
Fill the new page table _before_ installing into a live page table
hierarchy, as installing a blank page first risks I/O faults on
sub-ranges of the original super page which aren't part of the range
for which mappings are being updated.
While at it also do away with mapping and unmapping the same fresh
intermediate page table page once per entry to be written.
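In outline (a sketch; set_pte() and make_entry() are hypothetical
stand-ins, the other names follow the surrounding code):

    union amd_iommu_pte *pt = map_domain_page(mfn);   /* map once */
    unsigned int i;

    for ( i = 0; i < PTE_PER_TABLE_SIZE; i++ )
        set_pte(&pt[i], first_pfn + i, flags);        /* fill fully */

    unmap_domain_page(pt);
    smp_wmb();                           /* contents visible first ... */
    write_atomic(&parent->raw, make_entry(mfn));      /* ... then live */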
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Fri, 23 Oct 2020 16:03:18 +0000 (18:03 +0200)]
x86emul: fix PINSRW and adjust other {,V}PINSR*
The use of simd_packed_int together with no further update to op_bytes
has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
aligned memory operand. Use simd_none instead and override it after
general decoding with simd_other, like is done for the B/D/Q siblings.
While benign, for consistency also use DstImplicit instead of DstReg
in x86_decode_twobyte().
PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
gets dropped.
For further consistency also
- use src.bytes instead of op_bytes in relevant memcpy() invocations,
- avoid the pointless updating of op_bytes (all we care about later is
that the value be less than 16).
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Juergen Gross [Mon, 19 Oct 2020 15:27:54 +0000 (17:27 +0200)]
tools/libs: move official headers to common directory
Instead of each library having an own include directory move the
official headers to tools/include instead. This will drop the need to
link those headers to tools/include and there is no need any longer
to have library-specific include paths when building Xen.
While at it remove setting of the unused variable
PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
Juergen Gross [Fri, 23 Oct 2020 13:53:10 +0000 (15:53 +0200)]
tools/init-xenstore-domain: support xenstore pvh stubdom
Instead of creating the xenstore-stubdom domain first and parsing the
kernel later, do it the other way round. This enables probing for the
domain type supported by the xenstore-stubdom, and supporting both pv
and pvh type stubdoms.
Try to parse the stubdom image first for PV support, if this fails use
HVM. Then create the domain with the appropriate type selected.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Fri, 23 Oct 2020 08:13:53 +0000 (10:13 +0200)]
x86emul: increase FPU save area in test harness/fuzzer
Running them on a system (or emulator) with AMX support requires this
to be quite a bit larger than 8k, to avoid triggering the assert() in
emul_test_init(). Bump all the way up to 16k right away.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 23 Oct 2020 08:13:14 +0000 (10:13 +0200)]
pci: cleanup MSI interrupts before removing device from IOMMU
Doing the MSI cleanup after removing the device from the IOMMU leads
to the following panic on AMD hardware:
Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
----[ Xen-4.13.1-10.0.3-d x86_64 debug=y Not tainted ]----
CPU: 3
RIP: e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[...]
Xen call trace:
[<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
[<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
[<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
[<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
[<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
[<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
[<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
[<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
[<ffff82d08038a432>] F lstar_enter+0x112/0x120
That's because the call to iommu_remove_device on AMD hardware will
remove the per-device interrupt remapping table, and hence the call to
pci_cleanup_msi done afterwards will find a null intremap table and
crash.
Reorder the calls so that MSI interrupts are torn down before removing
the device from the IOMMU.
Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables")
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 23 Oct 2020 08:12:31 +0000 (10:12 +0200)]
evtchn: let evtchn_set_priority() acquire the per-channel lock
Some lock wants to be held to make sure the port doesn't change state,
but there's no point holding the per-domain event lock here. Switch to
using the finer grained per-channel lock instead (albeit as a downside
for the time being this requires disabling interrupts for a short
period of time).
FAOD this doesn't guarantee anything towards in particular
evtchn_fifo_set_pending(), as for interdomain channels that function
would be called with the remote side's per-channel lock held.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:11:46 +0000 (10:11 +0200)]
evtchn: rename and adjust guest_enabled_event()
The function isn't about an "event" in general, but about a vIRQ. The
function also failed to honor global vIRQ-s, instead assuming the caller
would pass vCPU 0 in such a case.
While at it also adjust the
- types the function uses,
- single user to make use of domain_vcpu().
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:09:55 +0000 (10:09 +0200)]
evtchn: replace FIFO-specific header by generic private one
Having a FIFO specific header is not (or at least no longer) warranted
with just three function declarations left there. Introduce a private
header instead, moving there some further items from xen/event.h.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:07:56 +0000 (10:07 +0200)]
evtchn: avoid race in get_xen_consumer()
There's no global lock around the updating of this global piece of data.
Make use of cmpxchgptr() to avoid two entities racing with their
updates.
While touching the functionality, mark xen_consumers[] read-mostly (or
else the if() condition could use the result of cmpxchgptr(), writing to
the slot unconditionally).
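The registration loop then becomes, roughly (slot count illustrative;
cmpxchgptr() and the callback type are the real primitives):

    static xen_event_channel_notification_t __read_mostly
        xen_consumers[8];

    unsigned int i;

    for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
    {
        /* Claim an empty slot atomically; losing the race to another
         * CPU registering the same fn is equally acceptable. */
        if ( xen_consumers[i] == NULL )
            cmpxchgptr(&xen_consumers[i], NULL, fn);
        if ( xen_consumers[i] == fn )
            break;
    }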
The use of cmpxchgptr() here points out (by way of clang warning about
it) that its original use of const was slightly wrong. Adjust the
placement, or else undefined behavior of const qualifying a function
type will result.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:06:53 +0000 (10:06 +0200)]
IOMMU/EPT: avoid double flushing in shared page table case
While the flush coalescing optimization has been helping the non-shared
case, it has actually led to double flushes in the shared case (which
ought to be the more common one nowadays at least): Once from
*_set_entry() and a second time up the call tree from wherever the
overriding flag gets played with. In alignment with XSA-346 suppress
flushing in this case.
Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
flushes: no new mapping has been added for "idx".
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Fri, 23 Oct 2020 08:06:20 +0000 (10:06 +0200)]
x86/mm: avoid playing with directmap when self-snoop can be relied upon
The set of systems affected by XSA-345 would have been smaller if we had
this in place already: When the processor is capable of dealing with
mismatched cacheability, there's no extra work we need to carry out.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 23 Oct 2020 08:05:29 +0000 (10:05 +0200)]
x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case idx == gpfn
In this case up to now we've been freeing the page (through
guest_remove_page(), with the actual free typically happening at the
put_page() later in the function), but then failing the call on the
subsequent GFN consistency check. However, in my opinion such a request
should complete as an "expensive" no-op (leaving aside the potential
unsharing of the page).
This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
xenmem_add_to_physmap_one()") would really have needed an XSA, despite
its description claiming otherwise, as in release builds we then put in
place a P2M entry referencing the about to be freed page.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Bertrand Marquis [Fri, 16 Oct 2020 13:58:47 +0000 (14:58 +0100)]
xen/arm: Print message if reset did not work
If for some reason the hardware reset is not working, print a message
every 5 seconds to warn the user that the system did not reset properly
and Xen is still looping.
The message is printed infinitely so that someone connecting to a serial
console with no history would see the message coming after 5 seconds.
arm: optee: don't print warning about "wrong" RPC buffer
The OP-TEE mediator tracks the cookie value of the last buffer which was
requested by OP-TEE. This tracked value serves one important purpose: if
OP-TEE wants to request another buffer, we know that it finished
importing the previous one and we can free the page list associated with
it.
Also, we had a false premise that OP-TEE frees requested buffers in
reverse order. So we checked rpc_data_cookie during handling of the
OPTEE_RPC_CMD_SHM_FREE call and printed a warning if the cookie of the
buffer which is requested to be freed differs from the last allocated
one.
During testing of RPMB FS services I discovered that RPMB code frees
request and response buffers in the same order as it allocated them. And
this is perfectly fine, actually.
So, we are removing mentioned warning message in Xen, as it is perfectly
normal to free buffers in arbitrary order.
Jan Beulich [Tue, 20 Oct 2020 12:23:12 +0000 (14:23 +0200)]
AMD/IOMMU: ensure suitable ordering of DTE modifications
DMA and interrupt translation should be enabled only after other
applicable DTE fields have been written. Similarly when disabling
translation or when moving a device between domains, translation should
first be disabled, before other entry fields get modified. Note however
that the "moving" aspect doesn't apply to the interrupt remapping side,
as domain specifics are maintained in the IRTEs here, not the DTE. We
also never disable interrupt remapping once it got enabled for a device
(the respective argument passed is always the immutable iommu_intremap).
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:22:52 +0000 (14:22 +0200)]
AMD/IOMMU: update live PTEs atomically
Updating a live PTE bitfield by bitfield risks the compiler re-ordering
the individual updates as well as splitting individual updates into
multiple memory writes. Construct the new entry fully in a local
variable, do the check to determine the flushing needs on the thus
established new entry, and then write the new entry by a single insn.
Similarly using memset() to clear a PTE is unsafe, as the order of
writes the function does is, at least in principle, undefined.
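The update sequence, sketched (field names follow the union introduced
by the next entry below; surrounding variables are assumed):

    union amd_iommu_pte old = ACCESS_ONCE(*pte), new = {};
    bool flush_needed;

    new.pr = true;
    new.mfn = next_mfn;
    new.iw = iw;
    new.ir = ir;

    /* One store: the live table never sees a half-written entry. */
    write_atomic(&pte->raw, new.raw);

    flush_needed = old.pr;       /* flush decision from the snapshot */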
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:22:26 +0000 (14:22 +0200)]
AMD/IOMMU: convert amd_iommu_pte from struct to union
This is to add a "raw" counterpart to the bitfield equivalent. Take the
opportunity and
- convert fields to bool / unsigned int,
- drop the naming of the reserved field,
- shorten the names of the ignored ones.
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Jan Beulich [Tue, 20 Oct 2020 12:21:32 +0000 (14:21 +0200)]
IOMMU: hold page ref until after deferred TLB flush
When moving around a page via XENMAPSPACE_gmfn_range, deferring the TLB
flush for the "from" GFN range requires that the page remains allocated
to the guest until the TLB flush has actually occurred. Otherwise a
parallel hypercall to remove the page would only flush the TLB for the
GFN it has been moved to, but not the one it was mapped at originally.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ")
Reported-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>