Andrew Cooper [Mon, 19 Oct 2020 14:51:22 +0000 (15:51 +0100)]
x86/pv: Flush TLB in response to paging structure changes
With MMU_UPDATE, a PV guest can make changes to higher level pagetables. This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush (if necessary) after making updates.
However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.
Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings. Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.
For all L2 and higher modifications, flush the TLB. PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global. (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
Express the necessary flushes as a set of booleans which accumulate across the
operation. Comment the flushing logic extensively.
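As an illustration of the accumulate-then-flush shape (a minimal standalone sketch, not the actual Xen code; mmu_update_req, pt_level and the flush_tlb_local() stub are hypothetical stand-ins):
    #include <stdbool.h>

    struct mmu_update_req { unsigned int pt_level; /* 1 = L1, 2 = L2, ... */ };

    extern void flush_tlb_local(void);  /* stand-in for Xen's flush primitives */

    static void process_batch(const struct mmu_update_req *reqs, unsigned int n)
    {
        bool flush_linear_pt = false;

        for ( unsigned int i = 0; i < n; i++ )
            /* Any L2-or-higher change can leave stale linear-pagetable TLB
             * entries, so remember that a flush is owed. */
            if ( reqs[i].pt_level >= 2 )
                flush_linear_pt = true;

        /* One flush at the end of the operation covers the whole batch. */
        if ( flush_linear_pt )
            flush_tlb_local();
    }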
This is XSA-286.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)
Andrew Cooper [Thu, 22 Oct 2020 10:28:58 +0000 (11:28 +0100)]
x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
to the L4 path in do_mmu_update().
However, this was unnecessary.
It is the guest's responsibility to perform appropriate TLB flushing if the L4
modification altered an established mapping in a flush-relevant way. In this
case, an MMUEXT_OP hypercall will follow. The case which Xen needs to cover
is when new mappings are created, and the resync on the exit-to-guest path
covers this correctly.
There is a corner case with multiple vCPUs in hypercalls at the same time,
which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
behaviour.
Architecturally, established TLB entries can continue to be used until the
broadcast flush has completed. Therefore, even with concurrent hypercalls,
the guest cannot depend on older mappings not being used until an MMUEXT_OP
hypercall completes. Xen's implementation of guest-initiated flushes will
take correct effect on top of an in-progress hypercall, picking up new
mappings established before the other vCPU's MMUEXT_OP completes.
Note: The correctness of this change is not impacted by whether XPTI uses
global mappings or not. Correctness there depends on the behaviour of Xen on
the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
This is (not really) XSA-286 (but necessary to simplify the logic).
Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
Igor Druzhinin [Tue, 20 Oct 2020 13:06:49 +0000 (15:06 +0200)]
hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
ACPI specification contains statements describing memory marked with regular
"ACPI data" type as reclaimable by the guest. Although the guest shouldn't
really do it if it wants kexec or similar functionality to work, there
could still be ambiguities in treating these regions as potentially regular
RAM.
One such example is SeaBIOS, which currently reports "ACPI data" regions as
RAM to the guest in its e801 call - something it arguably has the right to do,
as any user of this call is expected to be ACPI-unaware. But the QEMU
bootloader later seems to ignore that fact and instead uses e801 to find a
place for the initrd, which causes the tables to be erased. While arguably the
QEMU bootloader or SeaBIOS needs to be fixed / improved here, that is just one
example of the potential problems from using a reclaimable memory type.
Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is
described by the spec as non-reclaimable (so cannot ever be treated like RAM).
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: de6d188a519f9e3b7a1acc7784adf4c243865f9a
master date: 2020-10-20 08:54:23 +0200
Chen Yu [Tue, 20 Oct 2020 13:06:24 +0000 (15:06 +0200)]
x86/mwait-idle: customize IceLake server support
On ICX platform, the C1E auto-promotion is enabled by default.
As a result, the CPU might fall into C1E more often than on previous
platforms. So disable C1E auto-promotion and expose C1E as a separate
idle state.
Besides C1 and C1E, the exit latency of C6 was measured
by a dedicated tool. However, the exit latency (41us) exposed
by _CST is much smaller than the one we measured (128us). This
is probably because _CST uses the exit latency when woken
up from PC0+C6, rather than from PC6+C6, which is what was measured. Choose
the latter, as we need the longest latency in theory.
Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[Linux commit a472ad2bcea479ba068880125d7273fc95c14b70] Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 44ac57af81ff8097e228895738b911ca819bda19
master date: 2020-10-15 12:29:11 +0200
Jan Beulich [Tue, 20 Oct 2020 13:05:57 +0000 (15:05 +0200)]
x86: fix resource leaks on arch_vcpu_create() error path
{hvm,pv}_vcpu_initialise() have always kind of been meant to be the
final possible source of errors in arch_vcpu_create(), hence not
requiring any unrolling of what they've done on the error path. (Of
course this may change once the various involved paths all have become
idempotent.)
But even beyond this aspect I think it is more logical to do policy
initialization ahead of the calling of these two functions, as they may
in principle want to access it.
Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 6a34e67c118408ebdd62bfa7be76598ca040f170
master date: 2020-10-14 14:03:38 +0200
Jan Beulich [Tue, 20 Oct 2020 13:05:02 +0000 (15:05 +0200)]
evtchn/fifo: use stable fields when recording "last queue" information
Both evtchn->priority and evtchn->notify_vcpu_id could change behind the
back of evtchn_fifo_set_pending(), as for it - in the case of
interdomain channels - only the remote side's per-channel lock is held.
Neither the queue's priority nor the vCPU's vcpu_id fields have similar
properties, so they seem better suited for the purpose. In particular
they reflect the respective evtchn fields' values at the time they were
used to determine queue and vCPU.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Paul Durrant <paul@xen.org>
master commit: 6f6f07b64cbe90e54f8e62b4d6f2404cf5306536
master date: 2020-10-02 08:37:35 +0200
Andrew Cooper [Tue, 20 Oct 2020 13:04:00 +0000 (15:04 +0200)]
x86/pv: Don't clobber NT on return-to-guest
A 64bit IRET can restore NT - the faulting case is when NT is set in the live
flags. This change had an unintended consequence of causing the NT flag to
spontaneously disappear from guest context whenever an interrupt/exception
occurred.
In combination with a SYSENTER which sets both TF and NT, Xen's handling of
the #DB exception clears NT before it is even recorded suitably in the guest
kernel's view of what userspace was doing.
Reported-by: Andy Lutomirski <luto@kernel.org> Fixes: 0e47f92b0 ("x86: force EFLAGS.IF on when exiting to PV guests") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 5bcac985498ed83d89666959175ca9c9ed561ae1
master date: 2020-09-24 21:02:35 +0100
Jan Beulich [Tue, 20 Oct 2020 13:03:24 +0000 (15:03 +0200)]
AMD/IOMMU: ensure suitable ordering of DTE modifications
DMA and interrupt translation should be enabled only after other
applicable DTE fields have been written. Similarly when disabling
translation or when moving a device between domains, translation should
first be disabled, before other entry fields get modified. Note however
that the "moving" aspect doesn't apply to the interrupt remapping side,
as domain specifics are maintained in the IRTEs here, not the DTE. We
also never disable interrupt remapping once it got enabled for a device
(the respective argument passed is always the immutable iommu_intremap).
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
master commit: 0514a3a25fb9ebff5d75cc8f00a9229385300858
master date: 2020-10-20 14:23:12 +0200
Jan Beulich [Tue, 20 Oct 2020 13:03:04 +0000 (15:03 +0200)]
AMD/IOMMU: update live PTEs atomically
Updating a live PTE word by word allows the IOMMU to see a partially
updated entry. Construct the new entry fully in a local variable and
then write the new entry by a single insn.
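A minimal sketch of the idea (the real AMD IOMMU PTE layout and Xen's write_atomic() differ; the union below is illustrative only):
    #include <stdint.h>

    union pte {
        uint64_t raw;
        struct { uint64_t pr:1, ign:11, mfn:40, rsvd:12; };
    };

    static void write_pte(volatile uint64_t *live, uint64_t mfn)
    {
        union pte new = { .raw = 0 };

        /* Build the complete entry in a local variable first ... */
        new.mfn = mfn;
        new.pr  = 1;

        /* ... then publish it with a single 64-bit store, so the IOMMU can
         * never observe a half-written entry (Xen uses write_atomic()). */
        *live = new.raw;
    }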
This is part of XSA-347.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org>
master commit: 3b055121c5410e2c3105d6d06aa24ca0d58868cd
master date: 2020-10-20 14:22:52 +0200
Jan Beulich [Tue, 20 Oct 2020 13:02:36 +0000 (15:02 +0200)]
IOMMU: hold page ref until after deferred TLB flush
When moving around a page via XENMAPSPACE_gmfn_range, deferring the TLB
flush for the "from" GFN range requires that the page remains allocated
to the guest until the TLB flush has actually occurred. Otherwise a
parallel hypercall to remove the page would only flush the TLB for the
GFN it has been moved to, but not the one it was mapped at originally.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Reported-by: Julien Grall <jgrall@amazon.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
master commit: 5777a3742d88ff1c0ebc626ceb4fd47f9b3dc6d5
master date: 2020-10-20 14:21:32 +0200
Jan Beulich [Tue, 20 Oct 2020 13:02:14 +0000 (15:02 +0200)]
IOMMU: suppress "iommu_dont_flush_iotlb" when about to free a page
Deferring flushes to a single, wide range one - as is done when
handling XENMAPSPACE_gmfn_range - is okay only as long as
pages don't get freed ahead of the eventual flush. While the only
function setting the flag (xenmem_add_to_physmap()) suggests by its name
that it's only mapping new entries, in reality the way
xenmem_add_to_physmap_one() works means an unmap would happen not only
for the page being moved (but not freed) but, if the destination GFN is
populated, also for the page being displaced from that GFN. Collapsing
the two flushes for this GFN into just one (and even more so deferring
it to a batched invocation) is not correct.
This is part of XSA-346.
Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Julien Grall <jgrall@amazon.com>
master commit: dea460d86957bf1425a8a1572626099ac3f165a8
master date: 2020-10-20 14:21:09 +0200
Hongyan Xia [Tue, 20 Oct 2020 13:01:56 +0000 (15:01 +0200)]
x86/mm: Prevent some races in hypervisor mapping updates
map_pages_to_xen will attempt to coalesce mappings into 2MiB and 1GiB
superpages if possible, to maximize TLB efficiency. This means both
replacing superpage entries with smaller entries, and replacing
smaller entries with superpages.
Unfortunately, while some potential races are handled correctly,
others are not. These include:
1. When one processor modifies a sub-superpage mapping while another
processor replaces the entire range with a superpage.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and
B.
* A walks the pagetables, gets a pointer to L2.
* B replaces L3[N] with a 1GiB mapping.
* B frees L2.
* A writes L2[M] #
This race is exacerbated by the fact that virt_to_xen_l[21]e doesn't
handle higher-level superpages properly: if you call virt_to_xen_l2e
on a virtual address within an L3 superpage, you'll either hit a BUG()
(most likely), or get a pointer into the middle of a data page; same
with virt_to_xen_l1e on a virtual address within either an L3 or L2
superpage.
So take the following example:
* A reads pl3e and discovers it to point to an L2.
* B replaces L3[N] with a 1GiB mapping
* A calls virt_to_xen_l2e() and hits the BUG_ON() #
2. When two processors simultaneously try to replace a sub-superpage
mapping with a superpage mapping.
Take the following example:
Suppose L3[N] points to L2. And suppose we have two processors, A and B,
both trying to replace L3[N] with a superpage.
* A walks the pagetables, gets a pointer to pl3e, and takes a copy, ol3e, pointing to L2.
* B walks the pagetables, gets a pointer to pl3e, and takes a copy, ol3e, pointing to L2.
* A writes the new value into L3[N]
* B writes the new value into L3[N]
* A recursively frees all the L1's under L2, then frees L2
* B recursively double-frees all the L1's under L2, then double-frees L2 #
Fix this by grabbing a lock for the entirety of the mapping update
operation.
Rather than grabbing map_pgdir_lock for the entire operation, however,
repurpose the PGT_locked bit from L3's page->type_info as a lock.
This means that rather than locking the entire address space, we
"only" lock a single 512GiB chunk of hypervisor address space at a
time.
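A sketch of what such a per-L3-page lock bit can look like (hypothetical; Xen's actual implementation operates on struct page_info's type_info with its own atomics):
    #include <stdatomic.h>
    #include <stdint.h>

    #define PGT_locked (1ULL << 62)   /* illustrative bit position */

    static void l3t_lock(_Atomic uint64_t *type_info)
    {
        uint64_t old = atomic_load(type_info) & ~PGT_locked;

        /* Spin until we transition the lock bit from clear to set. */
        while ( !atomic_compare_exchange_weak(type_info, &old,
                                              old | PGT_locked) )
            old &= ~PGT_locked;
    }

    static void l3t_unlock(_Atomic uint64_t *type_info)
    {
        atomic_fetch_and(type_info, ~PGT_locked);
    }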
There was a proposal for a lock-and-reverify approach, where we walk
the pagetables to the point where we decide what to do; then grab the
map_pgdir_lock, re-verify the information we collected without the
lock, and finally make the change (starting over again if anything had
changed). Without being able to guarantee that the L2 table wasn't
freed, however, that means every read would need to be considered
potentially unsafe. Thinking carefully about that is probably
something that wants to be done in public, not under time pressure.
Jan Beulich [Fri, 2 Oct 2020 10:36:11 +0000 (12:36 +0200)]
evtchn/Flask: pre-allocate node on send path
xmalloc() & Co may not be called with IRQs off, or else check_lock()
will have its assertion trigger about locks getting acquired
inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
very reasonable, especially since the per-channel lock was introduced to
avoid acquiring the per-domain event lock on the send paths. Issue a
second call to xsm_evtchn_send() instead, before acquiring the lock, to
give XSM / Flask a chance to pre-allocate whatever it may need.
As these nodes are used merely for caching earlier decisions' results,
allocate just one node in AVC code despite two potentially being needed.
Things will merely be less performant if a second allocation turns out
to be wanted, just as when the pre-allocation fails.
Fixes: c0ddc8634845 ("evtchn: convert per-channel lock to be IRQ-safe") Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
master commit: 52e1fc47abc3a0123d2b5bb7e9172e84fd571851
master date: 2020-10-02 08:36:21 +0200
Jan Beulich [Tue, 22 Sep 2020 15:42:27 +0000 (17:42 +0200)]
x86/HVM: more consistently set I/O completion
Doing this just in hvm_emulate_one_insn() is not enough.
hvm_ud_intercept() and hvm_emulate_one_vm_event() can get invoked for
insns requiring one or more continuations, and at least in principle
hvm_emulate_one_mmio() could, too. Without proper setting of the field,
handle_hvm_io_completion() will do nothing completion-wise, and in
particular the missing re-invocation of the insn emulation paths will
lead to emulation caching not getting disabled in due course, causing
the ASSERT() in {svm,vmx}_vmenter_helper() to trigger.
Reported-by: Don Slutz <don.slutz@gmail.com>
Similar considerations go for the clearing of vio->mmio_access, which
gets moved as well.
Additionally all updating of vio->mmio_* now gets done dependent upon
the new completion value, rather than hvm_ioreq_needs_completion()'s
return value. This is because it is the completion chosen which controls
what path will be taken when handling the completion, not the simple
boolean return value. In particular, PIO completion doesn't involve
going through the insn emulator, and hence emulator state ought to get
cleared early (or it won't get cleared at all).
The new logic, besides allowing for a caller override for the
continuation type to be set (for VMX real mode emulation), will also
avoid setting an MMIO completion when a simpler PIO one will do. This
is a minor optimization only as a side effect - the behavior is strictly
needed at least for hvm_ud_intercept(), as only memory accesses can
successfully complete through handle_mmio(). Care of course needs to be
taken to correctly deal with "mixed" insns (doing both MMIO and PIO at
the same time, i.e. INS/OUTS). For this, hvmemul_validate() now latches
whether the insn being emulated is a memory access, as this information
is no longer easily available at the point where we want to consume it.
Note that the presence of non-NULL .validate fields in the two ops
structures in hvm_emulate_one_mmio() was really necessary even before
the changes here: Without this, passing non-NULL as middle argument to
hvm_emulate_init_once() is meaningless.
The restrictions on when the #UD intercept gets actually enabled are why
it was decided that this is not a security issue:
- the "hvm_fep" option to enable its use is a debugging option only,
- for the cross-vendor case is considered experimental, even if
unfortunately SUPPORT.md doesn't have an explicit statement about
this.
The other two affected functions are
- hvm_emulate_one_vm_event(), used for introspection,
- hvm_emulate_one_mmio(), used for Dom0 only,
which aren't qualifying this as needing an XSA either.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Don Slutz <don.slutz@gmail.com> Reviewed-by: Paul Durrant <paul@xen.org> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: b807cfe954b8d0d8852398b4c8a586d95d69a342
master date: 2020-09-15 10:19:33 +0200
Igor Druzhinin [Tue, 22 Sep 2020 15:42:07 +0000 (17:42 +0200)]
hvmloader: indicate ACPI tables with "ACPI data" type in e820
The guest kernel does need to know in some cases where the tables are located
in order to treat these regions properly. One example is the kexec process, where
the first kernel needs to pass ACPI region locations to the second
kernel which is now a requirement in Linux after 02a3e3cdb7f12 ("x86/boot:
Parse SRAT table and count immovable memory regions") in order for kexec
transition to actually work.
That commit introduced accesses to XSDT and SRAT while the second kernel
is still using kexec transition tables. The transition tables do not have
e820 "reserved" regions mapped where those tables are located currently
in a Xen guest. Instead "ACPI data" regions are mapped with the transition
tables that was introduced by the following commit 6bbeb276b7 ("x86/kexec:
Add the EFI system tables and ACPI tables to the ident map").
Reserve 1MB (out of the 16MB currently available) right after the ACPI info page
for ACPI tables exclusively, but populate this region on demand and only indicate
populated memory as "ACPI data", since according to the ACPI spec that memory is
reclaimable by the guest if necessary. That is close to how we treat
the same ACPI data in PVH guests. 1MB should be enough for now but could be
later extended if required.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 8efa46516c5f4cf185c8df179812c185d3c27eb6
master date: 2020-09-09 17:56:13 +0200
Jan Beulich [Tue, 22 Sep 2020 15:12:09 +0000 (17:12 +0200)]
evtchn: arrange for preemption in evtchn_reset()
Like for evtchn_destroy() looping over all possible event channels to
close them can take a significant amount of time. Unlike done there, we
can't alter domain properties (i.e. d->valid_evtchns) here. Borrow, in a
lightweight form, the paging domctl continuation concept, redirecting
the continuations to different sub-ops. Just like there this is to be
able to allow for predictable overall results of the involved sub-ops:
Racing requests should either complete or be refused.
Note that a domain can't interfere with an already started (by a remote
domain) reset, due to being paused. It can prevent a remote reset from
happening by leaving a reset unfinished, but that's only going to affect
itself.
Jan Beulich [Tue, 22 Sep 2020 15:11:33 +0000 (17:11 +0200)]
evtchn: arrange for preemption in evtchn_destroy()
Especially closing of fully established interdomain channels can take
quite some time, due to the locking involved. Therefore we shouldn't
assume we can clean up still active ports all in one go. Besides adding
the necessary preemption check, also avoid pointlessly starting from
(or now really ending at) 0; 1 is the lowest numbered port which may
need closing.
Since we're now reducing ->valid_evtchns, free_xen_event_channel(),
and (at least to be on the safe side) notify_via_xen_event_channel()
need to cope with attempts to close / unbind from / send through already
closed (and no longer valid, as per port_is_valid()) ports.
Jan Beulich [Tue, 22 Sep 2020 15:11:01 +0000 (17:11 +0200)]
evtchn: address races with evtchn_reset()
Neither d->evtchn_port_ops nor max_evtchns(d) may be used in an entirely
lock-less manner, as both may change by a racing evtchn_reset(). In the
common case, at least one of the domain's event lock or the per-channel
lock needs to be held. In the specific case of the inter-domain sending
by evtchn_send() and notify_via_xen_event_channel() holding the other
side's per-channel lock is sufficient, as the channel can't change state
without both per-channel locks held. Without such a channel changing
state, evtchn_reset() can't complete successfully.
Lock-free accesses continue to be permitted for the shim (calling some
otherwise internal event channel functions), as this happens while the
domain is in effectively single-threaded mode. Special care also needs
taking for the shim's marking of in-use ports as ECS_RESERVED (allowing
use of such ports in the shim case is okay because switching into and
hence also out of FIFO mode is impossible there).
As a side effect, certain operations on Xen bound event channels which
were mistakenly permitted so far (e.g. unmask or poll) will be refused
now.
Jan Beulich [Tue, 22 Sep 2020 15:09:59 +0000 (17:09 +0200)]
evtchn: evtchn_reset() shouldn't succeed with still-open ports
While the function closes all ports, it does so without holding any
lock, and hence racing requests may be issued causing new ports to get
opened. This would have been problematic in particular if such a newly
opened port had a port number above the new implementation limit (i.e.
when switching from FIFO to 2-level) after the reset, as prior to
"evtchn: relax port_is_valid()" this could have led to e.g.
evtchn_close()'s "BUG_ON(!port_is_valid(d2, port2))" to trigger.
Introduce a counter of active ports and check that it's (still) no
larger than the number of Xen internally used ones after obtaining the
necessary lock in evtchn_reset().
As to the access model of the new {active,xen}_evtchns fields - while
all writes get done using write_atomic(), reads ought to use
read_atomic() only when outside of a suitably locked region.
Note that as of now evtchn_bind_virq() and evtchn_bind_ipi() don't have
a need to call check_free_port().
Jan Beulich [Tue, 22 Sep 2020 15:09:25 +0000 (17:09 +0200)]
evtchn/x86: enforce correct upper limit for 32-bit guests
The recording of d->max_evtchns in evtchn_2l_init(), in particular with
the limited set of callers of the function, is insufficient. Neither for
PV nor for HVM guests the bitness is known at domain_create() time, yet
the upper bound in 2-level mode depends upon guest bitness. Recording
too high a limit "allows" x86 32-bit domains to open not properly usable
event channels, management of which (inside Xen) would then result in
corruption of the shared info and vCPU info structures.
Keep the upper limit dynamic for the 2-level case, introducing a helper
function to retrieve the effective limit. This helper is now supposed to
be private to the event channel code. The uses in do_poll() and
domain_dump_evtchn_info() weren't consistent with port uses elsewhere
and hence get switched to port_is_valid().
Furthermore FIFO mode's setup_ports() gets adjusted to loop only up to
the prior ABI limit, rather than all the way up to the new one.
Finally a word on the change to do_poll(): Accessing ->max_evtchns
without holding a suitable lock was never safe, as it as well as
->evtchn_port_ops may change behind do_poll()'s back. Using
port_is_valid() instead somewhat widens the window for potential abuse,
until we've dealt with the race altogether (see XSA-343).
This is XSA-342.
Reported-by: Julien Grall <jgrall@amazon.com> Fixes: 48974e6ce52e ("evtchn: use a per-domain variable for the max number of event channels") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Julien Grall <jgrall@amazon.com>
xen/evtchn: Add missing barriers when accessing/allocating an event channel
While the allocation of a bucket is always performed with the per-domain
lock, the bucket may be accessed without the lock taken (for instance, see
evtchn_send()).
Instead such sites rely on port_is_valid() to return a non-zero value
when the port has a struct evtchn associated to it. The function will
mostly check whether the port is less than d->valid_evtchns as all the
buckets/event channels should be allocated up to that point.
Unfortunately a compiler is free to re-order the assignment in
evtchn_allocate_port() so it would be possible to have d->valid_evtchns
updated before the new bucket has finished being allocated.
Additionally on Arm, even if this was compiled "correctly", the
processor can still re-order the memory accesses.
Add a write memory barrier in the allocation side and a read memory
barrier when the port is valid to prevent any re-ordering issue.
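In C11-atomics terms the pairing looks roughly like this (a simplified sketch; Xen itself uses smp_wmb()/smp_rmb() together with write_atomic()/read_atomic(), and the field names below are illustrative):
    #include <stdatomic.h>
    #include <stdbool.h>

    struct bucket;
    struct domain {
        struct bucket *bucket[64];
        _Atomic unsigned int valid_evtchns;
    };

    extern struct bucket *alloc_bucket(void);

    #define BUCKET_SIZE 64u

    static void publish_bucket(struct domain *d, unsigned int idx)
    {
        d->bucket[idx] = alloc_bucket();       /* fully initialise first */
        /* Release ordering plays the role of the write barrier: the bucket
         * must be visible before the raised limit. */
        atomic_store_explicit(&d->valid_evtchns, (idx + 1) * BUCKET_SIZE,
                              memory_order_release);
    }

    static bool port_is_valid(struct domain *d, unsigned int port)
    {
        /* Acquire ordering plays the role of the read barrier: don't read
         * the bucket before seeing the updated valid_evtchns. */
        return port < atomic_load_explicit(&d->valid_evtchns,
                                           memory_order_acquire);
    }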
Andrew Cooper [Tue, 22 Sep 2020 15:07:38 +0000 (17:07 +0200)]
x86/pv: Avoid double exception injection
There is at least one path (SYSENTER with NT set, Xen converts to #GP) which
ends up injecting the #GP fault twice, first in compat_sysenter(), and then a
second time in compat_test_all_events(), due to the stale TBF_EXCEPTION left
in TRAPBOUNCE_flags.
The guest kernel sees the second fault first, which is a kernel level #GP
pointing at the head of the #GP handler, and is therefore a userspace
trigger-able DoS.
This particular bug has bitten us several times before, so rearrange
{compat_,}create_bounce_frame() to clobber TRAPBOUNCE on success, rather than
leaving this task to one area of code which isn't used uniformly.
Other scenarios which might result in a double injection (e.g. two calls
directly to compat_create_bounce_frame) will now crash the guest, which is far
more obvious than letting the kernel run with corrupt state.
This is XSA-339.
Fixes: fdac9515607b ("x86: clear EFLAGS.NT in SYSENTER entry path") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 22 Sep 2020 15:06:59 +0000 (17:06 +0200)]
evtchn: relax port_is_valid()
To avoid ports potentially becoming invalid behind the back of certain
other functions (due to ->max_evtchns shrinking) because of
- a guest invoking evtchn_reset() and from a 2nd vCPU opening new
channels in parallel (see also XSA-343),
- alloc_unbound_xen_event_channel() produced channels living above the
2-level range (see also XSA-342),
drop the max_evtchns check from port_is_valid(). For a port for which
the function once returned "true", the returned value may not turn into
"false" later on. The function's result may only depend on bounds which
can only ever grow (which is the case for d->valid_evtchns).
This also eliminates a false sense of safety, utilized by some of the
users (see again XSA-343): Without a suitable lock held, d->max_evtchns
may change at any time, and hence deducing that certain other operations
are safe when port_is_valid() returned true is not legitimate. The
opportunities to abuse this may get widened by the change here
(depending on guest and host configuration), but will be taken care of
by the other XSA.
This is XSA-338.
Fixes: 48974e6ce52e ("evtchn: use a per-domain variable for the max number of event channels") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Tue, 22 Sep 2020 15:06:34 +0000 (17:06 +0200)]
x86/MSI-X: restrict reading of table/PBA bases from BARs
When assigned to less trusted or un-trusted guests, devices may change
state behind our backs (they may e.g. get reset by means we may not know
about). Therefore we should avoid reading BARs from hardware once a
device is no longer owned by Dom0. Furthermore when we can't read a BAR,
or when we read zero, we shouldn't instead use the caller provided
address unless that caller can be trusted.
Re-arrange the logic in msix_capability_init() such that only Dom0 (and
only if the device isn't DomU-owned yet) or calls through
PHYSDEVOP_prepare_msix will actually result in the reading of the
respective BAR register(s). Additionally do so only as long as in-use
table entries are known (note that invocation of PHYSDEVOP_prepare_msix
counts as a "pseudo" entry). In all other uses the value already
recorded will get used instead.
Clear the recorded values in _pci_cleanup_msix() as well as on the one
affected error path. (Adjust this error path to also avoid blindly
disabling MSI-X when it was enabled on entry to the function.)
While moving around variable declarations (in many cases to reduce their
scopes), also adjust some of their types.
This is part of XSA-337.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
It's safer and faster to just use the cached last written
(untranslated) MSI message stored in msi_desc for the single user that
calls read_msi_msg.
This also prevents relying on the data read from the device MSI
registers in order to figure out the index into the IOMMU interrupt
remapping table, which is not safe.
This is part of XSA-337.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/vpt: fix race when migrating timers between vCPUs
The current vPT code will migrate the emulated timers between vCPUs
(change the pt->vcpu field) while just holding the destination lock,
either from create_periodic_time or pt_adjust_global_vcpu_target if
the global target is adjusted. Changing the periodic_timer vCPU field
in this way creates a race where a third party could grab the lock in
the unlocked region of pt_adjust_global_vcpu_target (or before
create_periodic_time performs the vcpu change) and then release the
lock from a different vCPU, creating a locking imbalance.
Introduce a per-domain rwlock in order to protect periodic_time
migration between vCPU lists. Taking the lock in read mode prevents
any timer from being migrated to a different vCPU, while taking it in
write mode allows performing migration of timers across vCPUs. The
per-vcpu locks are still used to protect all the other fields from the
periodic_timer struct.
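The locking discipline can be sketched like so (simplified, using pthread rwlocks as a stand-in for Xen's rwlock_t; struct and field names are illustrative):
    #include <pthread.h>

    struct vcpu;
    struct domain { pthread_rwlock_t vpt_migrate_lock; };
    struct periodic_time { struct vcpu *vcpu; /* ... */ };

    /* Normal timer processing: the read lock keeps pt->vcpu stable while
     * the per-vCPU lock protects the remaining fields. */
    void pt_process(struct domain *d, struct periodic_time *pt)
    {
        pthread_rwlock_rdlock(&d->vpt_migrate_lock);
        /* ... take pt->vcpu's timer lock and operate on pt ... */
        pthread_rwlock_unlock(&d->vpt_migrate_lock);
    }

    /* Migration: the write lock excludes all readers while pt->vcpu
     * changes, avoiding the lock/unlock imbalance described above. */
    void pt_migrate(struct domain *d, struct periodic_time *pt, struct vcpu *v)
    {
        pthread_rwlock_wrlock(&d->vpt_migrate_lock);
        pt->vcpu = v;
        pthread_rwlock_unlock(&d->vpt_migrate_lock);
    }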
Note that such migration shouldn't happen frequently, and hence
there's no performance drop as a result of such locking.
This is XSA-336.
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com> Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Tue, 22 Sep 2020 15:04:58 +0000 (17:04 +0200)]
xen/memory: Don't skip the RCU unlock path in acquire_resource()
In the case that an HVM Stubdomain makes an XENMEM_acquire_resource hypercall,
the FIXME path will bypass rcu_unlock_domain() on the way out of the function.
Move the check to the start of the function. This does change the behaviour
of the get-size path for HVM Stubdomains, but that functionality is currently
broken and unused anyway, as well as being quite useless to entities which
can't actually map the resource.
This is XSA-334.
Fixes: 83fa6552ce ("common: add a new mappable resource type: XENMEM_resource_grant_table") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Tue, 22 Sep 2020 15:04:22 +0000 (17:04 +0200)]
x86/pv: Handle the Intel-specific MSR_MISC_ENABLE correctly
This MSR doesn't exist on AMD hardware, and switching away from the safe
functions in the common MSR path was an erroneous change.
Partially revert the change.
This is XSA-333.
Fixes: 4fdc932b3cc ("x86/Intel: drop another 32-bit leftover") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
xen/arm: cmpxchg: Add missing memory barriers in __cmpxchg_mb_timeout()
The function __cmpxchg_mb_timeout() was intended to have the same
semantics as __cmpxchg_mb(). Unfortunately, the memory barriers were
not added when first implemented.
There is no known issue with the existing callers, but the barriers are
added given this is the expected semantics in Xen.
The issue was introduced by XSA-295.
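The shape of the fix is simply to bracket the relaxed operation with full barriers, matching __cmpxchg_mb() (a sketch with a simplified signature; __cmpxchg_timeout() and the smp_mb() stub stand in for Xen's Arm helpers):
    #include <stdbool.h>

    static inline void smp_mb(void) { __sync_synchronize(); }

    extern bool __cmpxchg_timeout(volatile void *ptr, unsigned long *old,
                                  unsigned long new, int size,
                                  unsigned int max_try);

    static inline bool __cmpxchg_mb_timeout(volatile void *ptr,
                                            unsigned long *old,
                                            unsigned long new, int size,
                                            unsigned int max_try)
    {
        bool ret;

        smp_mb();            /* order prior accesses before the cmpxchg */
        ret = __cmpxchg_timeout(ptr, old, new, size, max_try);
        smp_mb();            /* order the cmpxchg before later accesses */

        return ret;
    }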
Backport: 4.8+ Fixes: 86b0bc958373 ("xen/arm: cmpxchg: Provide a new helper that can timeout") Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
(cherry picked from commit d501ef90ae7f2a79130ea89acb3d6d1792972934)
Wei Chen [Fri, 28 Aug 2020 02:34:03 +0000 (02:34 +0000)]
xen/arm: Missing N1/A76/A75 FP registers in vCPU context switch
Xen has cpu_has_fp/cpu_has_simd to detect whether the CPU supports
FP/SIMD or not. But currently, these two macros only consider the value 0
of ID_AA64PFR0_EL1.FP/SIMD as FP/SIMD features enabled. For CPUs
that support FP/SIMD and half-precision floating-point arithmetic,
ID_AA64PFR0_EL1.FP/SIMD are 1 (see Arm ARM DDI0487F.b, D13.2.64).
Xen will treat such CPUs as having no FP/SIMD support, and
vfp_save/restore_state will not take effect.
From the TRM documents of Cortex-A75/A76/N1, we know these CPUs support
basic Advanced SIMD/FP and half-precision floating-point arithmetic. In
this case, on N1/A76/A75 platforms, Xen will always miss the floating
point registers save/restore. If different vCPUs are running on the
same pCPU, the floating point registers will be corrupted randomly.
Enable the workaround for the Speculative AT erratum on Neoverse N1
processor versions r0p0 to r2p0.
Also fix the Cortex-A76 erratum string, which had a wrong erratum number.
Andrew Cooper [Fri, 11 Sep 2020 13:01:58 +0000 (15:01 +0200)]
x86/pv: Rewrite segment context switching from scratch
There are multiple bugs with the existing implementation.
On AMD CPUs prior to Zen2, loading a NUL segment selector doesn't clear the
segment base, which is a problem for 64bit code which typically expects to use
a NUL %fs/%gs selector.
On a context switch from any PV vcpu, to a 64bit PV vcpu with an %fs/%gs
selector which faults, the fixup logic loads NUL, and the guest is entered at
the failsafe callback with the stale base.
Alternatively, a PV context switch sequence of 64 (NUL, non-zero base) =>
32 (NUL) => 64 (NUL, zero base) will similarly cause Xen to enter the guest
with a stale base.
Both of these corner cases manifest as state corruption in the final vcpu.
However, damage is limited to 64bit code expecting to use Thread Local
Storage with a base pointer of 0, which doesn't occur by default.
The context switch logic is extremely complicated, and is attempting to
optimise away loading a NUL selector (which is fast), or writing a 64bit base
of 0 (which is rare). Furthermore, it fails to respect Linux's ABI with
userspace, which manifests as userspace state corruption as far as Linux is
concerned.
Always restore all selector and base state, in all cases.
Leave a large comment explaining hardware behaviour, and the new ABI
expectations. Update the comments in the public headers.
Drop all "segment preloading" to handle the AMD corner case. It was never
anything but a waste of time for %ds/%es, and isn't needed now that %fs/%gs
bases are unconditionally written for 64bit PV guests. In load_segments(),
store the result of is_pv_32bit_vcpu() as it is an expensive predicate now,
and not used in a way which impacts speculative safety.
Reported-by: Andy Lutomirski <luto@kernel.org> Reported-by: Sarah Newman <srn@prgmr.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/pv: Fix assertions in svm_load_segs()
OSSTest has shown an assertion failure:
http://logs.test-lab.xenproject.org/osstest/logs/153906/test-xtf-amd64-amd64-1/serial-rimava1.log
This is because we pass a non-NUL selector into svm_load_segs(), which is
something we must not do, as this path does not load the attributes/limits
from the GDT/LDT.
Drop the {fs,gs}_sel parameters from svm_load_segs() and use 0 instead. This
is acceptable even for non-zero NUL segments, as it is how the IRET
instruction behaves in all CPUs.
Only use the svm_load_segs() path when both FS and GS are NUL, which is the
common case when scheduling a 64bit vcpu with 64bit userspace in context.
Fixes: ad0fd291c5 ("x86/pv: Rewrite segment context switching from scratch") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ad0fd291c5e79191c2e3c70e43dded569f11a450
master date: 2020-09-07 11:32:34 +0100
master commit: 1e2d3be2e516e6f415ca6029f651b76a8563a27c
master date: 2020-09-08 16:46:31 +0100
Andrew Cooper [Fri, 11 Sep 2020 13:00:59 +0000 (15:00 +0200)]
x86/pv: Fix consistency of 64bit segment bases
The comments in save_segments(), _toggle_guest_pt() and write_cr() are false.
The %fs and %gs bases can be updated at any time by the guest.
As a consequence, Xen's fs_base/etc tracking state is always stale when the
vcpu is in context, and must not be used to complete MSR_{FS,GS}_BASE reads, etc.
In particular, a sequence such as:
wrmsr(MSR_FS_BASE, 0x1ull << 32);
write_fs(__USER_DS);
base = rdmsr(MSR_FS_BASE);
will return the stale base, not the new base. This may cause a guest
kernel's context switching of userspace to malfunction.
Therefore:
* Update save_segments(), _toggle_guest_pt() and read_msr() to always read
the segment bases from hardware.
* Update write_cr(), write_msr() and do_set_segment_base() to not waste
time caching data which is instantly going to become stale again.
* Provide comments explaining when the tracking state is and isn't stale.
This bug has been present for 14 years, but several bugfixes since have built
on and extended the original flawed logic.
Fixes: ba9adb737ba ("Apply stricter checking to RDMSR/WRMSR emulations.") Fixes: c42494acb2f ("x86: fix FS/GS base handling when using the fsgsbase feature")
Fixes: eccc170053e ("x86/pv: Don't have %cr4.fsgsbase active behind a guest kernels back") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: a5eaac9245f4f382a3cd0e9710e9d1cba7db20e4
master date: 2020-09-07 11:32:34 +0100
Andrew Cooper [Fri, 11 Sep 2020 13:00:21 +0000 (15:00 +0200)]
x86/pv: Fix multiple bugs with SEGBASE_GS_USER_SEL
The logic takes the segment selector unmodified from guest context. This
allowed the guest to load DPL0 descriptors into %gs. Fix up the RPL for
non-NUL selectors to be 3.
Xen's context switch logic skips saving the inactive %gs base, as it cannot be
modified by the guest behind Xen's back. This depends on Xen caching updates
to the inactive base, which was missing from this path.
The consequence is that, following SEGBASE_GS_USER_SEL, the next context
switch will restore the stale inactive %gs base, and corrupt vcpu state.
Rework the hypercall to update the cached idea of gs_base_user, and fix the
behaviour in the case of the AMD NUL selector bug to always zero the segment
base.
Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: afe018e041ec112d90a8b4e6ed607d22aa06f280
master date: 2020-08-31 14:21:46 +0100
Andrew Cooper [Fri, 11 Sep 2020 12:59:48 +0000 (14:59 +0200)]
x86/intel: Expose MSR_ARCH_CAPS to dom0
The overhead of (the lack of) MDS_NO alone has been measured at 30% on some
workloads. While we're not in a position yet to offer MSR_ARCH_CAPS generally
to guests, dom0 doesn't migrate, so we can pass a subset of hardware values
straight through.
This will cause PVH dom0s not to use KPTI by default, and all dom0s not to
use VERW flushing by default, and to use eIBRS in preference to retpoline on
recent Intel CPUs.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: e46474278a0e87e2b32ad5dd5fc20e8d2cb0688b
master date: 2020-08-31 13:43:26 +0100
Andrew Cooper [Fri, 11 Sep 2020 12:58:57 +0000 (14:58 +0200)]
x86: Begin to introduce support for MSR_ARCH_CAPS
... including serialisation/deserialisation logic and unit tests.
There is no current way to configure this MSR correctly for guests.
The toolstack side of this logic needs building, which is far easier to
do with it in place.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: e32605b07ef2e01c9d05da9b2d5d7b8f9a5c7c1b
master date: 2020-08-27 12:48:46 +0100
Andrew Cooper [Fri, 11 Sep 2020 12:57:38 +0000 (14:57 +0200)]
x86/ioapic: Fix fixmap error path logic in ioapic_init_mappings()
In the case that bad_ioapic_register() fails, the current position of idx++
means that clear_fixmap(idx) will be called with the wrong index, and not
clean up the mapping just created.
Increment idx as part of the loop, rather than midway through the loop body.
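A reduced sketch of the fixed loop shape (helper names approximate the real ones; the point is that idx now advances with the loop, so the error path always targets the mapping created in the current iteration):
    #include <stdbool.h>

    extern void set_fixmap_nocache(unsigned int idx, unsigned long phys);
    extern void clear_fixmap(unsigned int idx);
    extern bool bad_ioapic_register(unsigned int apic);

    static void init_mappings(unsigned int nr_ioapics,
                              const unsigned long *phys, unsigned int idx)
    {
        /* Previously, an idx++ sat mid-body before the check, so the
         * error path called clear_fixmap() on the *next* slot. */
        for ( unsigned int i = 0; i < nr_ioapics; i++, idx++ )
        {
            set_fixmap_nocache(idx, phys[i]);
            if ( bad_ioapic_register(i) )
            {
                clear_fixmap(idx);   /* cleans the mapping just created */
                continue;
            }
            /* ... further per-IO-APIC setup ... */
        }
    }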
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: b4175c6693e089ffcd77cd1ea388e76e67d36d57
master date: 2020-08-05 17:35:11 +0100
Paul Durrant [Fri, 7 Aug 2020 15:33:22 +0000 (17:33 +0200)]
x86/hvm: set 'ipat' in EPT for special pages
All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cachable.
However, should IOMMU mappings be enabled for the guest, then these
accesses become uncachable. This has a substantial negative effect on I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages but it is currently commonplace for
them to do this and so this problem needs mitigation. Hence this patch makes
sure the 'ipat' bit is set for any special page regardless of where in GFN
space it is mapped.
NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
that there is any similar mitigation possible for AMD NPT. Downstreams
such as Citrix XenServer have been carrying a patch similar to this for
several releases though.
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ca24b2ffdbd9a25b2d313a547ccbe97baf3e5a8d
master date: 2020-07-31 17:42:47 +0200
Jan Beulich [Fri, 7 Aug 2020 15:32:42 +0000 (17:32 +0200)]
x86emul: replace UB shifts
Displacement values can be negative, hence we shouldn't left-shift them.
Or else we get
(XEN) UBSAN: Undefined behaviour in x86_emulate/x86_emulate.c:3482:55
(XEN) left shift of negative value -2
While auditing shifts, I noticed a pair of missing parentheses, which
also get added right here.
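The undefined behaviour and one well-defined replacement (a generic illustration, not the exact emulator change):
    #include <stdint.h>

    static int32_t scale_disp(int32_t disp, unsigned int shift)
    {
        /* UB when disp < 0:  return disp << shift;                   */
        /* Well-defined: do the shift in unsigned arithmetic instead. */
        return (int32_t)((uint32_t)disp << shift);
    }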
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86emul: replace further UB shifts
I have no explanation how I managed to overlook these while putting
together what is now b6a907f8c83d ("x86emul: replace UB shifts").
Jan Beulich [Fri, 7 Aug 2020 15:31:16 +0000 (17:31 +0200)]
x86/S3: put data segment registers into known state upon resume
wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
wakeup_start did set it to, and %gs at whatever BIOS did load into it.
All of this may end up confusing the first load_segments() to run on
the BSP after resume, in particular allowing a non-nul selector value
to be left in %fs.
Alongside %ss, also put all other data segment registers into the same
state that the boot and CPU bringup paths put them in.
Reported-by: M. Vefa Bicakci <m.v.b@runbox.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 55f8c389d4348cc517946fdcb10794112458e81e
master date: 2020-07-24 10:17:26 +0200
Andrew Cooper [Fri, 7 Aug 2020 15:30:35 +0000 (17:30 +0200)]
x86/spec-ctrl: Protect against CALL/JMP straight-line speculation
Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.
With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
converted to direct CALL/JMP's to __x86_indirect_thunk_REG(), leaving just a
handful of indirect JMPs implementing those stubs.
There is no architectural execution beyond an indirect JMP, so use INT3 as
recommended by vendors to halt speculative execution. This is shorter than
LFENCE (which would also work fine), but also shows up in logs if we do
unexpectedly execute them.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 3b7dab93f2401b08c673244c9ae0f92e08bd03ba
master date: 2020-07-01 17:01:24 +0100
Roger Pau Monné [Fri, 7 Aug 2020 15:29:41 +0000 (17:29 +0200)]
mm: fix public declaration of struct xen_mem_acquire_resource
XENMEM_acquire_resource and its related structure are currently inside
a __XEN__ or __XEN_TOOLS__ guarded section to limit their scope to the
hypervisor or the toolstack only. This is wrong as the hypercall is
already being used by the Linux kernel at least, and as such needs to
be public.
Also switch the usage of uint64_aligned_t to plain uint64_t, as
uint64_aligned_t is only to be used by the toolstack. Doing such a
change will reduce the size of the structure on 32bit x86 by 4 bytes,
since there will be no padding added after the frame_list handle.
This is fine, as users of the previous layout will allocate 4 bytes of
padding that won't be read by Xen, and users of the new layout won't
allocate those, which is also fine since Xen won't try to access them.
Note that the structure already has compat handling, and such handling
will take care of copying the right size (ie: minus the padding) when
called from a 32bit x86 context. This is true for the compat code both
before and after this patch, since the structures in the memory.h
compat header are subject to a pragma pack(4), which already removed
the trailing padding that would otherwise be introduced by the
alignment of the frame field to 8 bytes.
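The size effect can be demonstrated with a reduced stand-alone example (hypothetical field names; compile with -m32 to see the 32-bit x86 layouts, where plain uint64_t is only 4-byte aligned):
    #include <stdint.h>
    #include <stdio.h>

    struct with_aligned {                 /* mimics uint64_aligned_t */
        uint32_t id, nr_frames;
        uint64_t frame __attribute__((aligned(8)));
        uint32_t frame_list;              /* stand-in for the 4-byte handle */
    };                                    /* sizeof = 24: 4 bytes tail pad */

    struct with_plain {                   /* plain uint64_t */
        uint32_t id, nr_frames;
        uint64_t frame;
        uint32_t frame_list;
    };                                    /* sizeof = 20: no tail pad */

    int main(void)
    {
        printf("%zu vs %zu\n", sizeof(struct with_aligned),
               sizeof(struct with_plain));
        return 0;
    }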
Fixes: 3f8f12281dd20 ('x86/mm: add HYPERVISOR_memory_op to acquire guest resources') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 0e2e54966af556f4047c1048855c4a071028a32d
master date: 2020-06-29 18:03:49 +0200
Andrew Cooper [Fri, 7 Aug 2020 15:28:48 +0000 (17:28 +0200)]
x86/msr: Disallow access to Processor Trace MSRs
We do not expose the feature to guests, so should disallow access to the
respective MSRs. For simplicity, drop the entire block of MSRs, not just the
subset which have been specified thus far.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wl@xen.org> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: bcdfbb70fca579baa04f212c0936b77919bdae11
master date: 2020-06-26 16:34:02 +0100
Grzegorz Uriasz [Fri, 7 Aug 2020 15:28:06 +0000 (17:28 +0200)]
x86/acpi: use FADT flags to determine the PMTMR width
On some computers the bit width of the PM Timer as reported
by ACPI is 32 bits when in fact the FADT flags report correctly
that the timer is 24 bits wide. On affected machines such as the
ASUS FX504GM and newer gaming laptops this results in the inability
to resume the machine from suspend. Without this patch suspend is
broken on affected machines and even if a machine manages to resume
correctly then the kernel time and xen timers are trashed.
Signed-off-by: Grzegorz Uriasz <gorbak25@gmail.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: f325d2477eef8229c47d97031d314629521c70ab
master date: 2020-06-25 09:11:09 +0200
Tamas K Lengyel [Fri, 7 Aug 2020 15:27:11 +0000 (17:27 +0200)]
x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE
While forking VMs running a small RTOS system (Zephyr) a Xen crash has been
observed due to a mm-lock order violation while copying the HVM CPU context
from the parent. This issue has been identified to be due to
hap_update_paging_modes first getting a lock on the gfn using get_gfn. This
call also creates a shared entry in the fork's memory map for the cr3 gfn. The
function later calls hap_update_cr3 while holding the paging_lock, which
results in the lock-order violation in vmx_load_pdptrs when it tries to unshare
the above entry when it grabs the page with the P2M_UNSHARE flag set.
Since vmx_load_pdptrs only reads from the page, its usage of P2M_UNSHARE was
unnecessary to start with. Using P2M_ALLOC is the appropriate flag to ensure
the p2m is properly populated.
Note that the lock order violation is avoided because before the paging_lock is
taken a lookup is performed with P2M_ALLOC that forks the page, thus the second
lookup in vmx_load_pdptrs succeeds without having to perform the fork. We keep
P2M_ALLOC in vmx_load_pdptrs because there are code-paths leading up to it
which don't take the paging_lock and that have no previous lookup. Currently no
other code-path exists leading there with the paging_lock taken, thus no
further adjustments are necessary.
Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: b67e859d0823f5b450e29379af9142d44a3ea370
master date: 2020-06-19 15:24:55 +0200
xen: Check the alignment of the offset passed via VCPUOP_register_vcpu_info
Currently a guest is able to register any guest physical address to use
for the vcpu_info structure, as long as the structure fits in the
rest of the frame.
This means a guest can provide an address that is not aligned to the
natural alignment of the structure.
On Arm 32-bit, unaligned accesses are completely forbidden by the
hypervisor. This will result in a data abort, which is fatal.
On Arm 64-bit, unaligned accesses are only forbidden when used for atomic
accesses. As the structure contains fields (such as evtchn_pending_sel)
that are updated using atomic operations, any unaligned access will be
fatal as well.
While the misalignment is only fatal on Arm, a generic check is added
as an x86 guest shouldn't sensibly pass an unaligned address (this
would result in a split lock).
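A sketch of such a generic check (simplified; the real handler also validates other properties, and struct vcpu_info is reduced here to one field):
    #include <stdint.h>
    #include <errno.h>

    #define PAGE_SIZE 4096u

    struct vcpu_info { uint64_t evtchn_pending_sel; /* ... */ };

    static int check_vcpu_info_offset(uint32_t offset)
    {
        /* Must fit in the remainder of the frame ... */
        if ( offset > PAGE_SIZE - sizeof(struct vcpu_info) )
            return -EINVAL;

        /* ... and be naturally aligned, so atomic accesses to fields such
         * as evtchn_pending_sel can't fault (Arm) or take a split lock
         * (x86). */
        if ( offset & (__alignof__(struct vcpu_info) - 1) )
            return -EINVAL;

        return 0;
    }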
x86/ept: flush cache when modifying PTEs and sharing page tables
Modifications made to the page tables by EPT code need to be written
to memory when the page tables are shared with the IOMMU, as Intel
IOMMUs can be non-coherent and thus require changes to be written to
memory in order to be visible to the IOMMU.
In order to achieve this make sure data is written back to memory
after writing an EPT entry when the recalc bit is not set in
atomic_write_ept_entry. If such bit is set, the entry will be
adjusted and atomic_write_ept_entry will be called a second time
without the recalc bit set. Note that when splitting a super page the
new tables resulting of the split should also be written back.
Failure to do so can allow devices behind the IOMMU access to the
stale super page, or cause coherency issues as changes made by the
processor to the page tables are not visible to the IOMMU.
This allows removing the VT-d specific iommu_pte_flush helper, since
the cache write back is now performed by atomic_write_ept_entry, and
hence iommu_iotlb_flush can be used to flush the IOMMU TLB. The newly
used method (iommu_iotlb_flush) can result in fewer flushes, since it
might sometimes rightly be called with 0 flags, in which case it
becomes a no-op.
This is part of XSA-321.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: c23274fd0412381bd75068ebc9f8f8c90a4be748
master date: 2020-07-07 14:40:11 +0200
Some VT-d IOMMUs are non-coherent, which requires a cache write back
in order for the changes made by the CPU to be visible to the IOMMU.
This cache write back was unconditionally done using clflush, but there are
other more efficient instructions to do so, hence implement support
for them using the alternative framework.
This is part of XSA-321.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: a64ea16522a73a13a0d66cfa4b66a9d3b95dd9d6
master date: 2020-07-07 14:39:54 +0200
vtd: don't assume addresses are aligned in sync_cache
Current code in sync_cache assumes that the address passed in is
aligned to a cache line size. Fix the code to support passing in
arbitrary addresses not necessarily aligned to a cache line size.
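The fix amounts to rounding the start of the range down to a cache line boundary before iterating (a sketch; clflush_size and the clflush() wrapper stand in for Xen's actual helpers):
    #include <stdint.h>

    extern unsigned int clflush_size;        /* cache line size in bytes */
    extern void clflush(const void *line);   /* write back a single line */

    static void sync_cache(const void *addr, unsigned int size)
    {
        uintptr_t start = (uintptr_t)addr & ~((uintptr_t)clflush_size - 1);
        uintptr_t end   = (uintptr_t)addr + size;

        /* Rounding down means ranges that don't begin on a line boundary
         * are still written back in full. */
        for ( ; start < end; start += clflush_size )
            clflush((const void *)start);
    }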
This is part of XSA-321.
Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: b6d9398144f21718d25daaf8d72669a75592abc5
master date: 2020-07-07 14:39:05 +0200
The hook is only implemented for VT-d and it uses the already existing
iommu_sync_cache function present in VT-d code. The new hook is
added so that the cache can be flushed by code outside of VT-d when
using shared page tables.
Note that alloc_pgtable_maddr must use the now locally defined
sync_cache function, because IOMMU ops are not yet set up the first
time the function gets called during IOMMU initialization.
No functional change intended.
This is part of XSA-321.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 91526b460e5009fc56edbd6809e66c327281faba
master date: 2020-07-07 14:38:34 +0200
Rename __iommu_flush_cache to iommu_sync_cache and remove
iommu_flush_cache_page. Also remove the iommu_flush_cache_entry
wrapper and just use iommu_sync_cache instead. Note the _entry suffix
was meaningless as the wrapper was already taking a size parameter in
bytes. While there also constify the addr parameter.
No functional change intended.
This is part of XSA-321.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 62298825b9a44f45761acbd758138b5ba059ebd1
master date: 2020-07-07 14:38:13 +0200
Jan Beulich [Tue, 7 Jul 2020 13:10:34 +0000 (15:10 +0200)]
vtd: improve IOMMU TLB flush
Do not limit PSI flushes to order 0 pages, in order to avoid doing a
full TLB flush if the passed in page has an order greater than 0 and
is aligned. Should increase the performance of IOMMU TLB flushes when
dealing with page orders greater than 0.
This is part of XSA-321.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 5fe515a0fede07543f2a3b049167b1fd8b873caf
master date: 2020-07-07 14:37:46 +0200
x86/ept: atomically modify entries in ept_next_level
ept_next_level was passing a live PTE pointer to ept_set_middle_entry,
which was then modified without taking into account that the PTE could
be part of a live EPT table. This wasn't a security issue because the
pages returned by p2m_alloc_ptp are zeroed, so adding such an entry
before actually initializing it didn't allow a guest to access
physical memory addresses it wasn't supposed to access.
This is part of XSA-328.
Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: bc3d9f95d661372b059a5539ae6cb1e79435bb95
master date: 2020-07-07 14:37:12 +0200
Jan Beulich [Tue, 7 Jul 2020 13:09:50 +0000 (15:09 +0200)]
x86/EPT: ept_set_middle_entry() related adjustments
ept_split_super_page() wants to further modify the newly allocated
table, so have ept_set_middle_entry() return the mapped pointer rather
than tearing the mapping down just for it to be re-established right away.
Similarly ept_next_level() wants to hand back a mapped pointer of
the next level page, so re-use the one established by
ept_set_middle_entry() in case that path was taken.
Pull the setting of suppress_ve ahead of insertion into the higher level
table, and don't have ept_split_super_page() set the field a 2nd time.
This is part of XSA-328.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 1104288186ee73a7f9bfa41cbaa5bb7611521028
master date: 2020-07-07 14:36:52 +0200
Jan Beulich [Tue, 7 Jul 2020 13:09:25 +0000 (15:09 +0200)]
x86/shadow: correct an inverted conditional in dirty VRAM tracking
This originally was "mfn_x(mfn) == INVALID_MFN". Make it like this
again, taking the opportunity to also drop the unnecessary nearby
braces.
This is XSA-319.
Fixes: 246a5a3377c2 ("xen: Use a typesafe to define INVALID_MFN") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 23a216f99d40fbfbc2318ade89d8213eea6ba1f8
master date: 2020-07-07 14:36:24 +0200
xen/common: event_channel: Don't ignore error in get_free_port()
Currently, get_free_port() assumes that the port has been allocated
whenever evtchn_allocate_port() does not return -EBUSY.
However, the function may also return an error when:
- we have exhausted all the event channels. This can happen if the limit
configured by the administrator for the guest ('max_event_channels'
in xl cfg) is higher than what the ABI used by the guest supports. For
instance, if the guest is using 2L, the limit should not be higher
than 4095.
- we cannot allocate memory (e.g. Xen has no more memory).
Users of get_free_port() (such as EVTCHNOP_alloc_unbound) will then
validly assume the port is valid and call evtchn_from_port(). This
results in a crash, as the memory backing the event channel structure
is not present.
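A sketch of the corrected pattern (signatures simplified): only -EBUSY means "port in use, keep scanning"; any other non-zero value is a real error that must reach the caller.

    #include <stdbool.h>

    struct domain;
    bool port_is_valid(struct domain *d, int port);      /* assumed helper */
    int evtchn_allocate_port(struct domain *d, int port);

    static int get_free_port(struct domain *d)
    {
        int port;

        for ( port = 0; port_is_valid(d, port); port++ )
        {
            int rc = evtchn_allocate_port(d, port);

            if ( rc == 0 )
                return port;   /* allocated */
            if ( rc != -EBUSY )
                return rc;     /* propagate e.g. -ENOSPC / -ENOMEM */
            /* -EBUSY: this port is taken, try the next one. */
        }

        return -ENOSPC;
    }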
Fixes: 368ae9a05fe ("xen/pvshim: forward evtchn ops between L0 Xen and L2 DomU") Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 2e9c2bc292231823a3a021d2e0a9f1956bf00b3c
master date: 2020-07-07 14:35:36 +0200
Jason Andryuk [Wed, 24 Jun 2020 15:17:38 +0000 (17:17 +0200)]
libacpi: widen TPM detection
The hardcoded tpm_signature is too restrictive to detect many TPMs. For
instance, it doesn't accept a QEMU emulated TPM (VID 0x1014 DID 0x0001).
Make the TPM detection match that in rombios, which accepts a wider
range.
With this change, the TPM's TCPA ACPI table is generated and the guest
OS can automatically load the tpm_tis driver. It also allows seabios to
detect and use the TPM. However, seabios skips some TPM initialization
when running under Xen, so it will not populate any PCRs unless modified
to run the initialization under Xen.
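Illustratively, the rombios-style acceptance amounts to "any plausible DID/VID" rather than one hardcoded value; a hedged sketch (register access elided):

    #include <stdbool.h>
    #include <stdint.h>

    /* Accept any interface whose DID/VID reads as neither all-zeros nor
     * all-ones, instead of matching a single vendor/device pair. */
    static bool tpm_present(uint32_t didvid)
    {
        return didvid != 0 && didvid != 0xffffffffU;
    }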
Signed-off-by: Jason Andryuk <jandryuk@gmail.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d3db7e043cddd7e939195e014241ce2c5d436179
master date: 2020-06-16 10:31:08 +0200
Paul Durrant [Wed, 24 Jun 2020 15:17:05 +0000 (17:17 +0200)]
ioreq: handle pending emulation racing with ioreq server destruction
When an emulation request is initiated in hvm_send_ioreq() the guest vcpu is
blocked on an event channel until that request is completed. If, however,
the emulator is killed whilst that emulation is pending then the ioreq
server may be destroyed. Thus when the vcpu is awoken the code in
handle_hvm_io_completion() will find no pending request to wait for, but will
leave the internal vcpu io_req.state set to IOREQ_READY and the vcpu shutdown
deferral flag in place (because hvm_io_assist() will never be called). The
emulation request is then completed anyway. This means that any subsequent call
to hvmemul_do_io() will find an unexpected value in io_req.state and will
return X86EMUL_UNHANDLEABLE, which in some cases will result in continuous
re-tries.
This patch fixes the issue by moving the setting of io_req.state and clearing
of shutdown deferral (as well as MSI-X write completion) out of hvm_io_assist()
and directly into handle_hvm_io_completion().
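In sketch form (types condensed; the two helpers named below are existing ones, the wrapper itself is illustrative), the completion handler now finalises the state itself:

    struct vcpu;                                   /* opaque here */
    void msix_write_completion(struct vcpu *v);    /* existing helpers */
    void vcpu_end_shutdown_deferral(struct vcpu *v);

    #define STATE_IOREQ_NONE 0
    struct hvm_vcpu_io { struct { unsigned int state; } io_req; };

    /* Run from handle_hvm_io_completion() even when the ioreq server has
     * gone away and hvm_io_assist() will never be called. */
    static void io_completion_epilogue(struct vcpu *v,
                                       struct hvm_vcpu_io *vio)
    {
        msix_write_completion(v);
        vcpu_end_shutdown_deferral(v);
        vio->io_req.state = STATE_IOREQ_NONE;
    }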
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Paul Durrant <pdurrant@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: f7039ee41b3d3448775a1623f230037fd0455104
master date: 2020-06-09 12:56:24 +0200
Jan Beulich [Wed, 24 Jun 2020 15:16:30 +0000 (17:16 +0200)]
x86/Intel: insert Ice Lake and Comet Lake model numbers
Both match prior generation processors as far as LBR and C-state MSRs
go (SDM rev 072) as well as applicability of the if_pschange_mc erratum
(recent spec updates).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 1fe406685cb19e9544681c6243e7d376deb0297e
master date: 2020-06-09 12:55:53 +0200
Jan Beulich [Wed, 24 Jun 2020 15:15:53 +0000 (17:15 +0200)]
build: fix dependency tracking for preprocessed files
While the issue is more general, I noticed that asm-macros.i was not
getting re-generated as needed. This was due to its .*.d file mentioning
asm-macros.o instead of asm-macros.i. Use -MQ here as well, and while at
it also use -MQ to avoid the somewhat fragile sed-ary on the *.lds
dependency tracking files. While there, further avoid open-coding $(CPP)
and drop the bogus (Arm) / stale (x86) -Ui386.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Julien Grall <jgrall@amazon.com>
master commit: 75131ad75bb3c91717b5dfda6881e61c52bfd22e
master date: 2020-06-08 10:25:40 +0200
Igor Druzhinin [Wed, 24 Jun 2020 15:15:23 +0000 (17:15 +0200)]
x86/svm: do not try to handle recalc NPT faults immediately
A recalculation NPT fault doesn't always require additional handling
in hvm_hap_nested_page_fault(); moreover, in the general case, if no
explicit handling is done there, the fault is wrongly considered fatal.
This covers a specific case of migration with a vGPU assigned, which
uses direct MMIO mappings made by the XEN_DOMCTL_memory_mapping
hypercall: the moment log-dirty is enabled globally, recalculation is
requested for the whole of guest memory, including those mapped MMIO
regions, which causes a page fault to be raised at the first access to
them; but because the MMIO P2M type has no explicit handling in
hvm_hap_nested_page_fault(), the domain is erroneously crashed with an
unhandled SVM violation.
Instead of trying to be opportunistic, use a safer approach and handle
P2M recalculation in a separate NPT fault, attempting a retry after
making the necessary adjustments. This is aligned with Intel behavior,
where there are separate VMEXITs for recalculation and EPT violations
(faults) and only faults are handled in hvm_hap_nested_page_fault().
Do it by also unifying the do_recalc return code with the Intel
implementation, where returning 1 means the P2M was actually changed.
Since there was previously no case where p2m_pt_handle_deferred_changes()
could return a positive value, it's safe to replace ">= 0" with "== 0"
in the VMEXIT_NPF handler. finish_type_change() is also unaffected by
the change, as it is already able to deal with a >0 return value of
p2m->recalc from the EPT implementation.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 51ca66c37371b10b378513af126646de22eddb17
master date: 2020-06-05 17:12:11 +0200
Roger Pau Monné [Wed, 24 Jun 2020 15:14:40 +0000 (17:14 +0200)]
build32: don't discard .shstrtab in linker script
LLVM linker doesn't support discarding .shstrtab, and complains with:
ld -melf_i386_fbsd -N -T build32.lds -o reloc.lnk reloc.o
ld: error: discarding .shstrtab section is not allowed
Add explicit .shstrtab, .strtab and .symtab sections to the linker
script after the text section in order to make LLVM LD happy and match
the behavior of GNU LD.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 10d27b48b5b4dfbead2d9bf03290984bba4806e4
master date: 2020-06-02 13:37:53 +0200
Roger Pau Monné [Wed, 24 Jun 2020 15:14:11 +0000 (17:14 +0200)]
x86/mm: do not attempt to convert _PAGE_GNTTAB to a boolean
Clang 10 complains with:
mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
[-Werror,-Wtautological-constant-compare]
if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
^
xen/include/asm/x86_64/page.h:161:25: note: expanded from macro '_PAGE_GNTTAB'
#define _PAGE_GNTTAB (1U<<22)
^
Remove the conversion of _PAGE_GNTTAB to a boolean and instead use a
preprocessor conditional to check if _PAGE_GNTTAB is defined.
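A condensed sketch of the resulting shape (l1_pgentry_t and l1e_get_flags() are the existing accessors): the flag test now lives behind a preprocessor guard, so configurations without _PAGE_GNTTAB compile it out entirely instead of evaluating a constant-true flag.

    static bool pte_is_grant_mapping(l1_pgentry_t l1e)
    {
    #ifdef _PAGE_GNTTAB
        return l1e_get_flags(l1e) & _PAGE_GNTTAB;
    #else
        return false;
    #endif
    }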
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 6eb61b1a9dfe23ca443f977799cafb22770708a0
master date: 2020-06-02 13:36:41 +0200
Jan Beulich [Wed, 24 Jun 2020 15:13:29 +0000 (17:13 +0200)]
x86emul: rework CMP and TEST emulation
Unlike similarly encoded insns these don't write their memory operands,
and hence x86_is_mem_write() should return false for them. However,
rather than adding special logic there, rework how their emulation gets
done, by making decoding attributes properly describe the r/o nature of
their memory operands:
- change the table entries for opcodes 0x38 and 0x39, with no other
adjustments to the attributes later on,
- for the other opcodes, leave the table entries as they are, and
override the attributes for the specific sub-cases (identified by
ModRM.reg).
For opcodes 0x38 and 0x39 the change of the table entries implies
changing the order of operands as passed to emulate_2op_SrcV(), hence
the splitting of the cases in the main switch().
Note how this also allows dropping custom LOCK prefix checks.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 20bc1b9cc99b70b17757e1903f629c7a26584790
master date: 2020-05-29 17:28:45 +0200
First of all explain in comments what the functions' purposes are. Then
make them actually match their comments.
Note that fc6fa977be54 ("x86emul: extend x86_insn_is_mem_write()
coverage") didn't actually fix the function's behavior for {,V}STMXCSR:
Both are covered by generic code higher up in the function, due to
x86_decode_twobyte() already doing suitable adjustments. And VSTMXCSR
wouldn't have been covered anyway without a further X86EMUL_OPC_VEX()
case label. Keep the inner case label in a comment for reference.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: e28d13eeb65c25c0bd56e8bfa83c7473047d778d
master date: 2020-05-29 17:28:04 +0200
Jan Beulich [Wed, 24 Jun 2020 15:11:44 +0000 (17:11 +0200)]
VT-x: extend LBR Broadwell errata coverage
For lbr_tsx_fixup_check() simply name a few more specific erratum
numbers.
For bdf93_fixup_check(), however, more models are affected. Oddly enough
despite being the same model and stepping, the erratum is listed for
Xeon E3 but not its Core counterpart. Apply the workaround uniformly,
and also for Xeon D, which only has the LBR-from one listed in its spec
update.
Seeing this broader applicability, rename anything BDF93-related to more
generic names.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: 724913de8ac8426d313a4645741d86c1169ae406
master date: 2020-05-28 12:03:25 +0200
Andrew Cooper [Wed, 24 Jun 2020 15:11:08 +0000 (17:11 +0200)]
x86/boot: Fix load_system_tables() to be NMI/#MC-safe
During boot, load_system_tables() is used in reinit_bsp_stack() to switch the
virtual addresses used from their .data/.bss alias, to their directmap alias.
The structure assignment is implemented as a memset() to zero first, then a
copy-in of the new data. This causes the NMI/#MC stack pointers to
transiently become 0, at a point where we may have an NMI watchdog running.
Rewrite the logic using a volatile tss pointer (equivalent to, but more
readable than, using ACCESS_ONCE() for all writes).
This does drop the zeroing side effect for holes in the structure, but the
backing memory for the TSS is fully zeroed anyway, and architecturally, they
are all reserved.
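A hedged sketch of the technique, with a pared-down TSS layout: writing through a volatile pointer forces the compiler to emit discrete stores to the live structure, so the IST pointers are overwritten in place rather than passing through zero. The stack offsets are illustrative.

    #include <stdint.h>

    struct tss64 {                 /* simplified layout */
        uint32_t rsvd0;
        uint64_t rsp[3];
        uint64_t rsvd1;
        uint64_t ist[7];           /* ist[n - 1] backs IST n */
    } __attribute__((packed));

    static void publish_tss_fields(struct tss64 *ptr, uint64_t stack_top)
    {
        volatile struct tss64 *tss = ptr;

        tss->rsp[0] = stack_top;
        tss->ist[0] = stack_top - 0x1000;   /* e.g. #DF stack, assumed */
        tss->ist[1] = stack_top - 0x2000;   /* e.g. NMI stack, assumed */
        tss->ist[2] = stack_top - 0x3000;   /* e.g. #MC stack, assumed */
        /* No memset(): holes keep their already-zeroed contents. */
    }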
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 9f3e9139fa6c3d620eb08dff927518fc88200b8d
master date: 2020-05-27 16:44:04 +0100
There have been reports of RDRAND issues after resuming from suspend on
some AMD family 15h and family 16h systems. This issue stems from a BIOS
not performing the proper steps during resume to ensure RDRAND continues
to function properly.
Update the CPU initialization to clear the RDRAND CPUID bit for any family
15h and 16h processor that supports RDRAND. If it is known that the family
15h or family 16h system does not have an RDRAND resume issue or that the
system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
can be used to stop the clearing of the RDRAND CPUID bit.
Note that clearing the RDRAND CPUID bit does not prevent a processor
that normally supports the RDRAND instruction from executing it. So any
code that determined the support based on family and model won't #UD.
Warn if no explicit choice was given on affected hardware.
Check RDRAND functions at boot as well as after S3 resume (the retry
limit chosen is entirely arbitrary).
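A self-contained sketch of such a functional check (GCC flag-output syntax; the retry limit of 10 is as arbitrary as the text above suggests):

    #include <stdbool.h>

    static bool rdrand_works(void)
    {
        unsigned int retries;

        for ( retries = 10; retries; retries-- )
        {
            unsigned long val;
            bool ok;

            /* RDRAND sets CF on success. */
            asm volatile ( "rdrand %[val]"
                           : [val] "=r" (val), "=@ccc" (ok) );
            if ( ok )
                return true;
        }

        return false;
    }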
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 93401e28a84b9dc5945f5d0bf5bce68e9d5ee121
master date: 2020-05-27 09:49:37 +0200
Roger Pau Monné [Wed, 24 Jun 2020 15:02:23 +0000 (17:02 +0200)]
x86/idle: prevent entering C6 with in service interrupts on Intel
Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
Dispatched Before an Interrupt of The Same Priority Completes".
Apply the errata to all server and client models (big cores) from
Broadwell to Cascade Lake. The workaround is grouped together with the
existing fix for errata AAJ72, and the eoi from the function name is
removed.
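Conceptually, the check before entering C6 amounts to scanning the local APIC's in-service register, which is spread over eight 32-bit words. A hedged sketch, assuming an apic_read() accessor and the conventional 0x100 ISR offset:

    #include <stdbool.h>
    #include <stdint.h>

    #define APIC_ISR 0x100                  /* first of 8 ISR words */
    extern uint32_t apic_read(unsigned long reg);   /* assumed accessor */

    static bool lapic_irq_in_service(void)
    {
        unsigned int i;

        for ( i = 0; i < 8; i++ )
            if ( apic_read(APIC_ISR + i * 0x10) )
                return true;

        return false;
    }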
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: fc44a7014cafe28b8c53eeaf6ac2a71f5bc8b815
master date: 2020-05-22 16:07:38 +0200
Roger Pau Monné [Wed, 24 Jun 2020 15:01:48 +0000 (17:01 +0200)]
x86/idle: rework C6 EOI workaround
Change the C6 EOI workaround (erratum AAJ72) to use x86_match_cpu. Also
call the workaround from mwait_idle; previously it was only used by
the ACPI idle driver. Finally, make sure the routine is called for all
states equal to or greater than ACPI_STATE_C3; note that the ACPI driver
doesn't currently handle them, but the erratum's applicability shouldn't
be limited by that.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 5fef1fd713660406a6187ef352fbf79986abfe43
master date: 2020-05-20 12:48:37 +0200
Jan Beulich [Wed, 24 Jun 2020 15:01:10 +0000 (17:01 +0200)]
x86: determine MXCSR mask in all cases
For its use(s) by the emulator to be correct in all cases, the filling
of the variable needs to be independent of XSAVE availability. As
there's no suitable function in i387.c to put the logic in, keep it in
xstate_init(), arrange for the function to be called unconditionally,
and pull the logic ahead of all return paths there.
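For reference, the classic way to determine the mask independently of XSAVE is via FXSAVE: the supported-mask field lives at byte offset 28 of the 512-byte save area, and a value of zero there means the architectural default of 0xffbf applies. A hedged, self-contained sketch:

    #include <stdint.h>
    #include <string.h>

    static uint32_t probe_mxcsr_mask(void)
    {
        /* FXSAVE requires a 16-byte aligned 512-byte area. */
        static uint8_t area[512] __attribute__((aligned(16)));
        uint32_t mask;

        memset(area, 0, sizeof(area));
        asm volatile ( "fxsave %0" : "=m" (area) );

        memcpy(&mask, area + 28, sizeof(mask));

        return mask ? mask : 0x0000ffbf;
    }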
Fixes: 9a4496a35b20 ("x86emul: support {,V}{LD,ST}MXCSR") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 2b532519d64e653a6bbfd9eefed6040a09c8876d
master date: 2020-05-18 17:18:56 +0200
Andrew Cooper [Wed, 24 Jun 2020 14:59:49 +0000 (16:59 +0200)]
x86/build: Unilaterally disable -fcf-protection
Xen doesn't support CET-IBT yet. At a minimum, logic is required to enable it
for supervisor use, but the livepatch functionality needs to learn not to
overwrite ENDBR64 instructions.
Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
version of GCC-9 which objects to it in combination with
-mindirect-branch=thunk-extern (Fixed in GCC 10, 9.4).
Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
beginning of the object. These paths explode when .note.gnu.property gets
put ahead of .text and we end up executing the notes data.
Disable -fcf-protection for all embedded objects.
Reported-by: Jason Andryuk <jandryuk@gmail.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jason Andryuk <jandryuk@gmail.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 24 Jun 2020 14:59:21 +0000 (16:59 +0200)]
x86/build: move -fno-asynchronous-unwind-tables into EMBEDDED_EXTRA_CFLAGS
Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
ought to. This shrinks the size of the rombios 32bit stubs in guest memory.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 24 Jun 2020 14:58:51 +0000 (16:58 +0200)]
x86/build32: Discard all orphaned sections
Linkers may put orphaned sections ahead of .text, which breaks the calling
requirements. A concrete example is Ubuntu's GCC-9 default of enabling
-fcf-protection, which causes us to try and execute .note.gnu.property during
Xen's boot.
Put .got.plt in its own section, as it specifically needs preserving from
the linker's point of view, and discard everything else. This will
hopefully be more robust against other unexpected toolchain properties.
Fixes boot from an Ubuntu build of Xen.
Reported-by: Jason Andryuk <jandryuk@gmail.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 24 Jun 2020 14:58:22 +0000 (16:58 +0200)]
x86/guest: Fix assembler warnings with newer binutils
GAS of at least version 2.34 complains:
hypercall_page.S: Assembler messages:
hypercall_page.S:24: Warning: symbol 'HYPERCALL_set_trap_table' already has its type set
...
hypercall_page.S:71: Warning: symbol 'HYPERCALL_arch_7' already has its type set
which is because the whole page is declared as STT_OBJECT already. Rearrange
.set with respect to .type in DECLARE_HYPERCALL() so STT_FUNC is already in
place.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 24 Jun 2020 14:57:39 +0000 (16:57 +0200)]
x86/cpuidle: correct Cannon Lake residency MSRs
As per SDM rev 071 Cannon Lake has
- no CC3 residency MSR at 3FC,
- a CC1 residency MSR at 660 (like various Atoms),
- a useless (always zero) CC3 residency MSR at 662.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 9ff09aefc46385dc04c38b6dd1f1ac25f784f482
master date: 2020-04-03 17:15:58 +0200
Andrew Cooper [Fri, 12 Jun 2020 17:32:27 +0000 (18:32 +0100)]
tools/libxl: Fix memory leak in libxl_cpuid_set()
xc_cpuid_set() returns allocated memory via cpuid_res, which libxl needs to
free() seeing as it discards the results.
This is logically a backport of c/s b91825f628 "tools/libxc: Drop
config_transformed parameter from xc_cpuid_set()", but rewritten, as one
caller of xc_cpuid_set() does use the returned values.
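The plug is mechanical; a hedged sketch against the old libxc out-parameter (parameter list abbreviated as input/regs, and the exact signature is illustrative):

    #include <stdlib.h>

    static void set_and_discard(xc_interface *xch, uint32_t domid,
                                const unsigned int *input, const char **regs)
    {
        /* One allocated string per output register; NULL-init so that
         * free() is safe even if xc_cpuid_set() fails early. */
        char *cpuid_res[4] = { NULL, NULL, NULL, NULL };
        unsigned int i;

        xc_cpuid_set(xch, domid, input, regs, cpuid_res);

        /* The transformed leaves are not used here: free them. */
        for ( i = 0; i < 4; i++ )
            free(cpuid_res[i]);
    }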
Andrew Cooper [Wed, 10 Jun 2020 17:57:00 +0000 (18:57 +0100)]
x86/spec-ctrl: Allow the RDRAND/RDSEED features to be hidden
RDRAND/RDSEED can be hidden using cpuid= to mitigate SRBDS if microcode
isn't available.
This is part of XSA-320 / CVE-2020-0543.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Julien Grall <jgrall@amazon.com>
(cherry picked from commit 7028534d8482d25860c4d1aa8e45f0b911abfc5a)
Andrew Cooper [Wed, 8 Jan 2020 19:47:46 +0000 (19:47 +0000)]
x86/spec-ctrl: Mitigate the Special Register Buffer Data Sampling sidechannel
See patch documentation and comments.
This is part of XSA-320 / CVE-2020-0543
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 6a49b9a7920c82015381740905582b666160d955)
Andrew Cooper [Wed, 8 Jan 2020 19:47:46 +0000 (19:47 +0000)]
x86/spec-ctrl: CPUID/MSR definitions for Special Register Buffer Data Sampling
This is part of XSA-320 / CVE-2020-0543
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
(cherry picked from commit caab85ab58c0cdf74ab070a5de5c4df89f509ff3)
Ashok Raj [Thu, 7 May 2020 12:58:16 +0000 (14:58 +0200)]
x86/ucode/intel: Writeback and invalidate caches before updating microcode
Updating microcode is less error prone when caches have been flushed, and
may be required depending on what exactly the microcode is updating. For
example, some of the issues around certain Broadwell parts can be
addressed by doing a full cache flush.
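In code terms the change is small; sketched against the MSR interface (the MSR constant is the SDM's IA32_BIOS_UPDT_TRIG at 0x79, the wrappers are assumed):

    #include <stdint.h>

    #define MSR_IA32_UCODE_WRITE 0x79       /* microcode update trigger */
    extern void wbinvd(void);               /* assumed wrappers */
    extern void wrmsrl(unsigned int msr, uint64_t val);

    static void apply_microcode(const void *ucode_bits)
    {
        /* Write back and invalidate caches right before the update. */
        wbinvd();
        wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)ucode_bits);
    }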
Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[Linux commit 91df9fdf51492aec9fed6b4cbd33160886740f47, ported to Xen] Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 77c82949990edaf21130be842a289a7fb7a439e1
master date: 2020-05-05 20:18:19 +0100
Roger Pau Monné [Thu, 7 May 2020 12:56:56 +0000 (14:56 +0200)]
x86/hvm: simplify hvm_physdev_op allowance control
PVHv1 dom0 was given access to all PHYSDEVOP hypercalls, and this access
was not revoked when the PVHv1 code was removed. As a result the switch
in hvm_physdev_op was more complicated than required, and relied on
PVHv2 dom0 not having PIRQ support in order to prevent access to some
PV-specific PHYSDEVOPs.
Fix this by moving the default case to the bottom of the switch, since
there's no need for any fall through now. Also remove the hardware
domain check, as all the not explicitly listed PHYSDEVOPs are
forbidden for HVM domains.
Finally tighten the condition to allow usage of
PHYSDEVOP_pci_mmcfg_reserved: apart from having vPCI enabled it should
only be used by the hardware domain. Note that the code in
do_physdev_op is already restricting the call to privileged domains
only, but it can be further restricted to the hardware domain only, as
other privileged domains don't have access to MMCFG regions anyway.
Overall no functional change should arise from this change.
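The resulting structure can be pictured as below (op list abridged; the predicates are the ones used elsewhere for such checks); this is a hedged sketch, not the literal patch:

    static int hvm_physdev_op_allowed(const struct domain *currd, int cmd)
    {
        switch ( cmd )
        {
        case PHYSDEVOP_map_pirq:
        case PHYSDEVOP_unmap_pirq:
        case PHYSDEVOP_eoi:
        case PHYSDEVOP_irq_status_query:
        case PHYSDEVOP_get_free_pirq:
            return has_pirq(currd) ? 0 : -ENOSYS;

        case PHYSDEVOP_pci_mmcfg_reserved:
            /* vPCI enabled, and the hardware domain only. */
            return (has_vpci(currd) && is_hardware_domain(currd))
                   ? 0 : -ENOSYS;

        default:
            /* Everything not explicitly listed is forbidden for HVM. */
            return -ENOSYS;
        }
    }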
Reported-by: Julien Grall <jgrall@amazon.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: a00e3737e085ebc1f313e36b188d4958e939e531
master date: 2020-05-05 09:52:28 +0200