Paul Durrant [Thu, 3 Dec 2020 08:45:00 +0000 (08:45 +0000)]
xl: introduce a 'xen-abi-features' option...
... to control which features of the Xen ABI are enabled in
'libxl_domain_build_info', and hence exposed to the guest.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Ian Jackson <iwj@xenproject.org> Cc: Wei Liu <wl@xen.org> Cc: Anthony PERARD <anthony.perard@citrix.com>
v5:
- New in v5
Paul Durrant [Tue, 24 Nov 2020 15:18:41 +0000 (15:18 +0000)]
libxl: introduce a 'libxl_xen_abi_features' enumeration...
... and bitmaps to enable or disable features.
This patch adds a new 'libxl_xen_abi_features' enumeration into the IDL which
specifies features of the Xen ABI which may be optionally enabled or disabled
via new 'feature_enable' and 'feature_disable' bitmaps added into
'libxl_domain_build_info'.
The initially defined features are enabled by default (for relevant
architectures) and so the corresponding flags in
'struct xen_domctl_createdomain' are set if they are missing from
'disable_features' rather than if they are present in 'enable_features'.
Checks are, however, added to make sure that features are not specifically
enabled in cases where they are not supported.
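As an illustration, the defaulting logic could look like this minimal sketch (the enumeration value name is illustrative; libxl_bitmap_test() is the existing libxl helper):

    /* Features default to enabled: set the domctl flag unless the
     * feature has been explicitly disabled. */
    if (!libxl_bitmap_test(&b_info->feature_disable,
                           LIBXL_XEN_ABI_FEATURES_EVTCHN_FIFO))
        create.flags |= XEN_DOMCTL_CDF_evtchn_fifo;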
NOTE: A subsequent patch will add an option into xl.cfg(5) to control whether
Xen ABI features are enabled or disabled.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Ian Jackson <iwj@xenproject.org> Cc: Wei Liu <wl@xen.org> Cc: Anthony PERARD <anthony.perard@citrix.com>
v5:
- New in v5
Paul Durrant [Wed, 2 Dec 2020 14:13:09 +0000 (14:13 +0000)]
domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_upcall, ...
...to control the visibility of the per-vCPU upcall feature for HVM guests.
Commit 04447f4453c0 ("x86/hvm: add per-vcpu evtchn upcalls") added a mechanism
by which x86 HVM guests can register a vector for each vCPU which will be used
by Xen to signal event channels on that vCPU.
This facility (an HVMOP hypercall) appeared in an uncontrolled fashion, which
has implications for the behaviour of an OS when moving from an older Xen to a
newer Xen. For instance, the OS may be aware of the per-vCPU upcall feature
but its support for it may be buggy. In this case the OS will function
perfectly well on the older Xen, but fail (in a potentially non-obvious way)
on the newer Xen.
To maintain compatibility it is necessary to make Xen behave as it did
before the new hypercall was added and hence the code in this patch ensures
that, if XEN_DOMCTL_CDF_evtchn_upcall is not set, the hypercall will again
result in -ENOSYS being returned to the guest.
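One plausible shape of the guard, keyed off the creation flag stored in d->options (the exact placement in the HVMOP handler is illustrative):

    case HVMOP_set_evtchn_upcall_vector:
        if ( !(d->options & XEN_DOMCTL_CDF_evtchn_upcall) )
        {
            rc = -ENOSYS;   /* behave as a pre-feature Xen would */
            break;
        }
        /* ... existing handling ... */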
NOTE: To maintain current behavior, until a tool-stack option is added to
control the flag, it is unconditionally set for x86 HVM domains. A
subsequent patch will introduce such tool-stack control.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Ian Jackson <iwj@xenproject.org> Cc: Wei Liu <wl@xen.org> Cc: Anthony PERARD <anthony.perard@citrix.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: George Dunlap <george.dunlap@citrix.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Julien Grall <julien@xen.org> Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Christian Lindig <christian.lindig@citrix.com> Cc: David Scott <dave@recoil.org> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v5:
- New in v5
Paul Durrant [Tue, 24 Nov 2020 14:50:11 +0000 (14:50 +0000)]
domctl: introduce a new domain create flag, XEN_DOMCTL_CDF_evtchn_fifo, ...
...to control the visibility of the FIFO event channel operations
(EVTCHNOP_init_control, EVTCHNOP_expand_array, and EVTCHNOP_set_priority) to
the guest.
These operations were added to the public header in commit d2d50c2f308f
("evtchn: add FIFO-based event channel ABI") and the first implementation
appeared in the two subsequent commits: edc8872aeb4a ("evtchn: implement
EVTCHNOP_set_priority and add the set_priority hook") and 88910061ec61
("evtchn: add FIFO-based event channel hypercalls and port ops"). Prior to
that, a guest issuing those operations would receive a return value of
-ENOSYS (not implemented) from Xen. Guests aware of the FIFO operations but
running on an older (pre-4.4) Xen would fall back to using the 2-level event
channel interface upon seeing this return value.
Unfortunately the uncontrollable appearance of these new operations in Xen 4.4
onwards has implications for hibernation of some Linux guests. During resume
from hibernation, there are two kernels involved: the "boot" kernel and the
"resume" kernel. The guest boot kernel may default to use FIFO operations and
instruct Xen via EVTCHNOP_init_control to switch from 2-level to FIFO. On the
other hand, the resume kernel keeps assuming 2-level, because it was hibernated
on a version of Xen that did not support the FIFO operations.
To maintain compatibility it is necessary to make Xen behave as it did
before the new operations were added and hence the code in this patch ensures
that, if XEN_DOMCTL_CDF_evtchn_fifo is not set, the FIFO event channel
operations will again result in -ENOSYS being returned to the guest.
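One plausible shape of the gating check (the exact placement in do_event_channel_op() is illustrative):

    case EVTCHNOP_init_control:
    case EVTCHNOP_expand_array:
    case EVTCHNOP_set_priority:
        if ( !(current->domain->options & XEN_DOMCTL_CDF_evtchn_fifo) )
            return -ENOSYS;   /* behave like a pre-4.4 Xen */
        /* ... existing FIFO handling ... */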
This patch also adds an extra log line into the 'e' key handler output to
call out which event channel ABI is in use by a domain.
NOTE: To maintain current behavior, until a tool-stack option is added to
control the flag, it is unconditionally set for all domains. A
subsequent patch will introduce such tool-stack control.
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
--- Cc: Ian Jackson <iwj@xenproject.org> Cc: Wei Liu <wl@xen.org> Cc: Anthony PERARD <anthony.perard@citrix.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: George Dunlap <george.dunlap@citrix.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Julien Grall <julien@xen.org> Cc: Stefano Stabellini <sstabellini@kernel.org> Cc: Christian Lindig <christian.lindig@citrix.com> Cc: David Scott <dave@recoil.org> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v5:
- Flip the sense of the flag from disabling to enabling, as requested by
Andrew
Paul Durrant [Wed, 11 Nov 2020 17:51:13 +0000 (17:51 +0000)]
xl / libxl: add 'ex_processor_mask' into 'libxl_viridian_enlightenment'
Adding the new value into the enumeration makes it immediately available
to xl, so this patch adjusts the xl.cfg(5) documentation accordingly.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Ian Jackson <iwj@xenproject.org> Cc: Wei Liu <wl@xen.org> Cc: Anthony PERARD <anthony.perard@citrix.com>
Paul Durrant [Wed, 11 Nov 2020 17:22:07 +0000 (17:22 +0000)]
viridian: add a new '_HVMPV_ex_processor_masks' bit into HVM_PARAM_VIRIDIAN...
... and advertise ExProcessorMasks support if it is set.
Support is advertised by setting bit 11 in CPUID:40000004:EAX.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com> Cc: George Dunlap <george.dunlap@citrix.com> Cc: Ian Jackson <iwj@xenproject.org> Cc: Julien Grall <julien@xen.org> Cc: Stefano Stabellini <sstabellini@kernel.org>
Paul Durrant [Wed, 11 Nov 2020 17:20:19 +0000 (17:20 +0000)]
viridian: log initial invocation of each type of hypercall
To make it simpler to observe which viridian hypercalls are issued by a
particular Windows guest, this patch adds a per-domain mask to track them.
Each type of hypercall causes a different bit to be set in the mask and
when the bit transitions from clear to set, a log line is emitted showing
the name of the hypercall and the domain that issued it.
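A minimal sketch of the described pattern, using the names from the version notes below ('vd' standing in for the per-domain viridian state):

    /* Log only on the first occurrence of each hypercall type. */
    if ( !test_and_set_bit(_HCALL_flush, vd->hypercall_flags) )
        printk(XENLOG_G_INFO "%pd: first HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE\n",
               d);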
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v2:
- Use DECLARE_BITMAP() for 'hypercall_flags'
- Use an enum for _HCALL_* values
Paul Durrant [Wed, 11 Nov 2020 12:27:04 +0000 (12:27 +0000)]
viridian: add ExProcessorMasks variant of the IPI hypercall
A previous patch introduced variants of the flush hypercalls that take a
'Virtual Processor Set' as an argument rather than a simple 64-bit mask.
This patch introduces a similar variant of the HVCALL_SEND_IPI hypercall
(HVCALL_SEND_IPI_EX).
NOTE: As with HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX, a guest should
not yet issue the HVCALL_SEND_IPI_EX hypercall as support for
'ExProcessorMasks' is not yet advertised via CPUID.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v3:
- Adjust prototype of new function
v2:
- Sanity check size before hvm_copy_from_guest_phys()
Paul Durrant [Wed, 11 Nov 2020 12:19:17 +0000 (12:19 +0000)]
viridian: add ExProcessorMasks variants of the flush hypercalls
The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.
This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. This makes use of
new helper functions, hv_vpset_nr_banks() and hv_vpset_to_vpmask(), to
determine the size of the Virtual Processor Set (so it can be copied from
guest memory) and parse it into hypercall_vpmask (respectively).
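A sketch of the size helper, grounded in the version notes below (struct field names follow the TLFS 'Virtual Processor Set' layout):

    /* One 64-bit bank of mask bits per bit set in valid_bank_mask, so
     * the amount to copy from guest memory follows from the population
     * count. */
    static unsigned int hv_vpset_nr_banks(const struct hv_vpset *vpset)
    {
        return hweight64(vpset->valid_bank_mask);
    }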
NOTE: A guest should not yet issue these hypercalls as 'ExProcessorMasks'
support needs to be advertised via CPUID. This will be done in a
subsequent patch.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v3:
- Adjust one of the helper macros
- A few more consts and type tweaks
- Adjust prototype of new function
v2:
- Add helper macros to define mask and struct sizes
- Use a union to determine the size of 'hypercall_vpset'
- Use hweight64() in hv_vpset_nr_banks()
- Sanity check size before hvm_copy_from_guest_phys()
Paul Durrant [Wed, 11 Nov 2020 10:13:19 +0000 (10:13 +0000)]
viridian: use softirq batching in hvcall_ipi()
vlapic_ipi() uses a softirq batching mechanism to improve the efficiency of
sending IPIs to a large number of processors. This patch modifies send_ipi()
(the worker function called by hvcall_ipi()) to also make use of the
mechanism when there are multiple bits set in the hypercall_vpmask.
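A sketch of the batching pattern (vpmask_nr() is an illustrative bitmap_weight() wrapper; the delivery step is elided):

    unsigned int nr = vpmask_nr(vpmask);

    if ( nr > 1 )
        cpu_raise_softirq_batch_begin();

    for_each_vp ( vpmask, vp )
        /* ... inject the vector into the vCPU's vLAPIC ... */;

    if ( nr > 1 )
        cpu_raise_softirq_batch_finish();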
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v2:
- Don't add the 'nr' field to struct hypercall_vpmask and use
bitmap_weight() instead
Paul Durrant [Wed, 11 Nov 2020 09:50:18 +0000 (09:50 +0000)]
viridian: use hypercall_vpmask in hvcall_ipi()
A subsequent patch will need to IPI a mask of virtual processors potentially
wider than 64 bits. A previous patch introduced per-cpu hypercall_vpmask
to allow hvcall_flush() to deal with such wide masks. This patch modifies
the implementation of hvcall_ipi() to make use of the same mask structures,
introducing a for_each_vp() macro to facilitate traversing a mask.
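A plausible shape of the traversal macro (vpmask_first()/vpmask_next() being accessors from the previous patch):

    #define for_each_vp(vpmask, vp) \
        for ( (vp) = vpmask_first(vpmask); \
              (vp) < HVM_MAX_VCPUS; \
              (vp) = vpmask_next(vpmask, vp) )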
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v3:
- Couple of extra 'const' qualifiers
v2:
- Drop the 'vp' loop now that vpmask_set() will do it internally
Paul Durrant [Wed, 11 Nov 2020 08:55:22 +0000 (08:55 +0000)]
viridian: introduce a per-cpu hypercall_vpmask and accessor functions...
... and make use of them in hvcall_flush()/need_flush().
Subsequent patches will need to deal with virtual processor masks potentially
wider than 64 bits. Thus, to avoid using too much stack, this patch
introduces global per-cpu virtual processor masks and converts the
implementation of hvcall_flush() to use them.
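A minimal sketch of the mask and a setter taking a base VP plus a 64-bit chunk of the guest-supplied mask (see the v2 notes below; details illustrative):

    struct hypercall_vpmask {
        DECLARE_BITMAP(mask, HVM_MAX_VCPUS);
    };
    static DEFINE_PER_CPU(struct hypercall_vpmask, hypercall_vpmask);

    static void vpmask_set(struct hypercall_vpmask *vpmask, unsigned int vp,
                           uint64_t mask)
    {
        for ( ; mask; mask >>= 1, vp++ )
            if ( mask & 1 )
            {
                ASSERT(vp < HVM_MAX_VCPUS);
                __set_bit(vp, vpmask->mask);
            }
    }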
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v2:
- Modified vpmask_set() to take a base 'vp' and a 64-bit 'mask', still
looping over the mask as bitmap.h does not provide a primitive for copying
one mask into another at an offset
- Added ASSERTions to verify that we don't attempt to set or test bits
beyond the limit of the map
Paul Durrant [Wed, 11 Nov 2020 08:20:33 +0000 (08:20 +0000)]
viridian: move IPI hypercall implementation into separate function
This patch moves the implementation of HVCALL_SEND_IPI that is currently
inline in viridian_hypercall() into a new hvcall_ipi() function.
The new function returns Xen errno values similarly to hvcall_flush(). Hence
the errno translation code in viridian_hypercall() is generalized, removing
the need for the local 'status' variable.
NOTE: The formatting of the switch statement at the top of
viridian_hypercall() is also adjusted as per CODING_STYLE.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v3:
- Adjust prototype of new function
Paul Durrant [Tue, 10 Nov 2020 18:22:32 +0000 (18:22 +0000)]
viridian: move flush hypercall implementation into separate function
This patch moves the implementation of HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST
that is currently inline in viridian_hypercall() into a new hvcall_flush()
function.
The new function returns Xen errno values which are then dealt with
appropriately. A return value of -ERESTART translates to viridian_hypercall()
returning HVM_HCALL_preempted. Other return values translate to status codes
and viridian_hypercall() returning HVM_HCALL_completed. Currently the only
values, other than -ERESTART, returned by hvcall_flush() are 0 (indicating
success) or -EINVAL.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v3:
- Adjust prototype of new function
Paul Durrant [Thu, 19 Nov 2020 16:50:24 +0000 (16:50 +0000)]
viridian: don't blindly write to 32-bit registers if 'mode' is invalid
If hvm_guest_x86_mode() returns something other than 8 or 4 then
viridian_hypercall() will return immediately but, on the way out, will write
back status as if 'mode' was 4. This patch simply makes it leave the registers
alone.
NOTE: The formatting of the 'out' label and the switch statement are also
adjusted as per CODING_STYLE.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
--- Cc: Wei Liu <wl@xen.org> Cc: Jan Beulich <jbeulich@suse.com> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
v5:
- Fixed yet another CODING_STYLE violation.
Roger Pau Monné [Mon, 30 Nov 2020 13:06:38 +0000 (14:06 +0100)]
x86/vioapic: fix usage of index in place of GSI in vioapic_write_redirent
The usage of idx instead of the GSI in vioapic_write_redirent when
accessing gsi_assert_count can cause a PVH dom0 with multiple
vIO-APICs to lose interrupts in case a pin of an IO-APIC other than
the first one is unmasked with pending interrupts.
Switch to use gsi instead to fix the issue.
Fixes: 9f44b08f7d0e4 ('x86/vioapic: introduce support for multiple vIO APICS') Reported-by: Manuel Bouyer <bouyer@antioche.eu.org> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Tested-by: Manuel Bouyer <bouyer@antioche.eu.org> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Juergen Gross [Mon, 30 Nov 2020 13:05:39 +0000 (14:05 +0100)]
xen/events: rework fifo queue locking
Two cpus entering evtchn_fifo_set_pending() for the same event channel
can race in case the first one gets interrupted after setting
EVTCHN_FIFO_PENDING and when the other one manages to set
EVTCHN_FIFO_LINKED before the first one is testing that bit. This can
lead to evtchn_check_pollers() being called before the event is put
properly into the queue, resulting eventually in the guest not seeing
the event pending and thus blocking forever afterwards.
Note that commit 5f2df45ead7c1195 ("xen/evtchn: rework per event channel
lock") made the race just more obvious, while the fifo event channel
implementation had this race forever since the introduction and use of
per-channel locks, when an unmask operation was running in parallel with
an event channel send operation.
Using a spinlock for the per event channel lock had turned out
problematic, as some of the paths needing to take the lock are called
with interrupts off; the lock would therefore need to disable interrupts,
which in turn broke some use cases related to vm events.
For avoiding this race the queue locking in evtchn_fifo_set_pending()
needs to be reworked to cover the test of EVTCHN_FIFO_PENDING,
EVTCHN_FIFO_MASKED and EVTCHN_FIFO_LINKED, too. Additionally when an
event channel needs to change queues both queues need to be locked
initially, in order to avoid having a window with no lock held at all.
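A sketch of the both-queues idiom (lock ordering by address avoids ABBA deadlock; the actual patch additionally has to retry should the event's queue change while waiting for the lock):

    if ( q != old_q )
    {
        if ( old_q < q )
        {
            spin_lock_irqsave(&old_q->lock, flags);
            spin_lock(&q->lock);
        }
        else
        {
            spin_lock_irqsave(&q->lock, flags);
            spin_lock(&old_q->lock);
        }
    }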
Reported-by: Jan Beulich <jbeulich@suse.com> Fixes: 5f2df45ead7c1195 ("xen/evtchn: rework per event channel lock") Fixes: de6acb78bf0e137c ("evtchn: use a per-event channel lock for sending events") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Juergen Gross [Mon, 30 Nov 2020 13:04:34 +0000 (14:04 +0100)]
xen/events: modify struct evtchn layout
In order to avoid latent races when updating an event channel, put the
xen_consumer and pending fields in different bytes. This is no problem
right now, but the pending indicator in particular isn't used only when
initializing an event channel (unlike xen_consumer), so any future
addition to this byte would need to be done with a potential race kept
in mind.
At the same time move some other fields around to have fewer implicit
paddings and to keep related fields more closely together.
Finally switch struct evtchn to no longer use fixed-size types where
not needed.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
It's confusing and not consistent with the terminology introduced with 'dfn_t'.
Just call them IOMMU page tables.
Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.
NOTE: All calls to printk() have also been removed from
iommu_dump_page_tables(); the implementation specific code is now
responsible for all output.
The check for the global 'iommu_enabled' has also been replaced by an
ASSERT since iommu_dump_page_tables() is not registered as a key handler
unless IOMMU mappings are enabled.
Error messages are now prefixed with the name of the function.
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Paul Durrant [Fri, 27 Nov 2020 17:03:42 +0000 (18:03 +0100)]
iommu: remove the share_p2m operation
Sharing of HAP tables is now VT-d specific so the operation is never defined
for AMD IOMMU any more. There's also no need to pro-actively set vtd.pgd_maddr
when using shared EPT as it is straightforward to simply define a helper
function to return the appropriate value in the shared and non-shared cases.
NOTE: This patch also modifies unmap_vtd_domain_page() to take a const
pointer since the only thing it calls, unmap_domain_page(), also takes
a const pointer.
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Wed, 25 Nov 2020 13:08:14 +0000 (14:08 +0100)]
evtchn: double per-channel locking can't hit identical channels
Inter-domain channels can't possibly be bound to themselves; there's
always a 2nd channel involved, even when this is a loopback into the
same domain. As a result we can drop one conditional each from the two
involved functions.
With this, the number of evtchn_write_lock() invocations can also be
shrunk by half, swapping the two incoming function arguments instead.
Jan Beulich [Wed, 25 Nov 2020 13:07:36 +0000 (14:07 +0100)]
mm: check for truncation in vmalloc_type()
While it's currently implied from the checking xmalloc_array() does,
let's make this more explicit in the function itself. As a result both
involved local variables don't need to have size_t type anymore. This
brings them in line with the rest of the code in this file.
Paul Durrant [Wed, 25 Nov 2020 13:06:27 +0000 (14:06 +0100)]
xen/include: import sizeof_field() macro from Linux stddef.h
Co-locate it with the definition of offsetof() (since this is also in stddef.h
in the Linux kernel source). This macro will be needed in a subsequent patch.
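For reference, the Linux definition being imported is (modulo formatting):

    #define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))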
Signed-off-by: Paul Durrant <pdurrant@amazon.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 25 Nov 2020 13:05:52 +0000 (14:05 +0100)]
tools/libs: fix uninstall rule for header files
This again was working right only as long as $(LIBHEADER) consisted of
just one entry.
Fixes: bc44e2fb3199 ("tools: add a copy of library headers in tools/include") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Bertrand Marquis [Tue, 24 Nov 2020 11:12:15 +0000 (11:12 +0000)]
xen/arm: Add workaround for Cortex-A55 erratum #1530923
On the Cortex A55, TLB entries can be allocated by a speculative AT
instruction. If this is happening during a guest context switch with an
inconsistent page table state in the guest, TLBs with wrong values might
be allocated.
The ARM64_WORKAROUND_AT_SPECULATE workaround is used, as it is for erratum 1165522 on Cortex-A76 and Neoverse-N1.
This change is also introducing the MIDR identifier for the Cortex-A55.
Jan Beulich [Tue, 24 Nov 2020 13:01:31 +0000 (14:01 +0100)]
memory: fix off-by-one in XSA-346 change
The comparison against ARRAY_SIZE() needs to be >= in order to avoid
overrunning the pages[] array.
This is XSA-355.
Fixes: 5777a3742d88 ("IOMMU: hold page ref until after deferred TLB flush") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Tue, 24 Nov 2020 10:28:41 +0000 (11:28 +0100)]
ns16550: drop stray "#ifdef CONFIG_HAS_PCI"
There's no point wrapping the function invocation when
- the function body is already suitably wrapped,
- the function itself is unconditionally available.
Jan Beulich [Tue, 24 Nov 2020 10:26:34 +0000 (11:26 +0100)]
x86/DMI: fix table mapping when one lives above 1Mb
Use of __acpi_map_table() is kind of an abuse here, and doesn't work
anymore for the majority of cases if any of the tables lives outside the
low first Mb. Keep this (ab)use only prior to reaching SYS_STATE_boot,
primarily to avoid needing to audit whether any of the calls here can
happen this early in the first place; quite likely this isn't necessary
at all - at least dmi_scan_machine() gets called late enough.
For the "normal" case, call __vmap() directly, despite effectively
duplicating acpi_os_map_memory(). There's one difference though: We
shouldn't need to establish UC- mappings, WP or r/o WB mappings ought to
be fine, as the tables are going to live in either RAM or ROM. Short of
having PAGE_HYPERVISOR_WP and wanting to map the tables r/o anyway, use
the latter of the two options. The r/o mapping implies some
constification of code elsewhere in the file. For code touched anyway
also switch to void (where possible) or uint8_t.
Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Tue, 24 Nov 2020 10:26:02 +0000 (11:26 +0100)]
x86/ACPI: fix mapping of FACS
acpi_fadt_parse_sleep_info() runs when the system is already in
SYS_STATE_boot. Hence its direct call to __acpi_map_table() won't work
anymore. This call should probably have been replaced long ago already,
as the layering violation hasn't been necessary for quite some time.
Fixes: 1c4aa69ca1e1 ("xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Juergen Gross [Tue, 24 Nov 2020 10:23:42 +0000 (11:23 +0100)]
xen/events: access last_priority and last_vcpu_id together
The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event it might happen that the event needs
to change queues, and the old queue needs to be kept for keeping the
links between queue elements intact. For this purpose the event channel
contains last_priority and last_vcpu_id elements for being able to
identify the old queue.
In order to avoid races, always access last_priority and last_vcpu_id
with a single atomic operation, avoiding any inconsistencies.
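A sketch of the resulting access pattern (union and field names illustrative):

    union evtchn_fifo_lastq {
        uint32_t raw;
        struct {
            uint8_t  last_priority;
            uint16_t last_vcpu_id;
        };
    };

    /* Both values can then be read or written consistently in one go: */
    union evtchn_fifo_lastq lastq =
        { .raw = read_atomic(&evtchn->fifo_lastq) };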
Andrew Cooper [Wed, 3 Apr 2019 16:53:15 +0000 (17:53 +0100)]
amd-iommu: Fix Guest CR3 Table following c/s 3a7947b6901
"amd-iommu: use a bitfield for DTE" renamed iommu_dte_set_guest_cr3()'s gcr3
parameter to gcr3_mfn but ended up with an off-by-PAGE_SIZE error when
extracting bits from the address.
get_guest_cr3_from_dte() and iommu_dte_set_guest_cr3() are (almost) getters
and setters for the same field, so should live together.
Rename them to dte_{get,set}_gcr3_table() to specifically avoid 'guest_cr3' in
the name. This field actually points to a table in memory containing an array
of guest CR3 values. As these functions are used for different logical
indirections, they shouldn't use gfn/mfn terminology for their parameters.
Switch them to use straight uint64_t full addresses.
Fixes: 3a7947b6901 ("amd-iommu: use a bitfield for DTE") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 20 Nov 2020 07:28:58 +0000 (08:28 +0100)]
AMD/IOMMU: avoid UB in guest CR3 retrieval
Found by looking for patterns similar to the one Julien did spot in
pci_vtd_quirks(). (Not that it matters much here, considering the code
is dead right now.)
Fixes: 3a7947b69011 ("amd-iommu: use a bitfield for DTE") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 20 Nov 2020 07:25:17 +0000 (08:25 +0100)]
lib: split _ctype[] into its own object, under lib/
This is, besides for tidying, in preparation of then starting to use an
archive rather than an object file for generic library code which
arch-es (or even specific configurations within a single arch) may or
may not need.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Julien Grall [Thu, 19 Nov 2020 17:08:27 +0000 (17:08 +0000)]
xen/arm: gic: acpi: Guard helpers to build the MADT with CONFIG_ACPI
gic_make_hwdom_madt() and gic_get_hwdom_madt_size() are ACPI specific.
While they build fine today, this will change in a follow-up patch.
Rather than trying to fix the build on ACPI, it is best to avoid
compiling the helpers and the associated callbacks when CONFIG_ACPI=n.
On CentOS 8 with SELinux, containerize doesn't work at all:
Make sure that the source code and SSH agent directories are passed on
with SELinux relabeling enabled.
(`--security-opt label=disabled` would be another option)
Signed-off-by: Edwin Török <edvin.torok@citrix.com> Acked-by: Doug Goldstein <cardoe@cardoe.com>
Michal Orzel [Mon, 16 Nov 2020 12:11:40 +0000 (13:11 +0100)]
xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807
On the affected Cortex-A76/Neoverse-N1 cores (r0p0 to r3p0),
if a virtual address for a cacheable mapping of a location is being
accessed by a core while another core is remapping the virtual
address to a new physical page using the recommended break-before-make
sequence, then under very rare circumstances TLBI+DSB completes before
a read using the translation being invalidated has been observed by
other observers. The workaround repeats the TLBI+DSB operation for all
the TLB flush operations. While this is strictly not necessary, we don't
want to take any risk.
Juergen Gross [Wed, 18 Nov 2020 11:38:29 +0000 (12:38 +0100)]
xen/x86: add nmi continuation framework
Actions in NMI context are rather limited as e.g. locking is rather
fragile.
Add a framework to continue processing in normal interrupt context
after leaving NMI processing.
This is done by a high priority interrupt vector triggered via a
self IPI from NMI context, which will then call the continuation
function specified during NMI handling.
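A sketch of the handoff (function, variable and vector names illustrative; the continuation itself then runs from the interrupt handler for that vector):

    static DEFINE_PER_CPU(void (*)(void *), nmi_cont_fn);
    static DEFINE_PER_CPU(void *, nmi_cont_arg);

    bool trigger_nmi_continuation(void (*fn)(void *), void *arg)
    {
        this_cpu(nmi_cont_fn)  = fn;
        this_cpu(nmi_cont_arg) = arg;
        send_IPI_self(NMI_CONT_VECTOR);   /* high priority vector */
        return true;
    }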
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 18 Nov 2020 11:38:01 +0000 (12:38 +0100)]
x86/vpt: fix build with old gcc
I believe it was the XSA-336 fix (42fcdd42328f "x86/vpt: fix race when
migrating timers between vCPUs") which has unmasked a bogus
uninitialized variable warning. This is observable with gcc 4.3.4, but
only on 4.13 and older; it's hidden on newer versions apparently due to
the addition to _read_unlock() done by 12509bbeb9e3 ("rwlocks: call
preempt_disable() when taking a rwlock").
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:37:24 +0000 (12:37 +0100)]
x86/p2m: split write_p2m_entry() hook
Fair parts of the present handlers are identical; in fact
nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
common parts right into write_p2m_entry(), splitting the hooks into a
"pre" one (needed just by shadow code) and a "post" one.
For the common parts moved I think that the p2m_flush_nestedp2m() is,
at least from an abstract perspective, also applicable in the shadow
case. Hence it doesn't get a 3rd hook put in place.
The initial comment that was in hap_write_p2m_entry() gets dropped: Its
placement was bogus, and looking back at the commit introducing it
(dd6de3ab9985 "Implement Nested-on-Nested") I can't see what use
of a p2m it was meant to be associated with.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:34:54 +0000 (12:34 +0100)]
x86/HAP: move nested-P2M flush calculations out of locked region
By latching the old MFN into a local variable, these calculations don't
depend on anything but local variables anymore. Hence the point in time
when they get performed doesn't matter anymore, so they can be moved
past the locked region.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Wed, 18 Nov 2020 11:33:18 +0000 (12:33 +0100)]
x86/p2m: collapse the two ->write_p2m_entry() hooks
The struct paging_mode instances get set to the same functions
regardless of mode by both HAP and shadow code, hence there's no point
having this hook there. The hook also doesn't need moving elsewhere - we
can directly use struct p2m_domain's. This merely requires (from a
strictly formal pov; in practice this may not even be needed) making
sure we don't end up using safe_write_pte() for nested P2Ms.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Penny Zheng [Mon, 9 Nov 2020 08:21:10 +0000 (16:21 +0800)]
xen/arm: Add Cortex-A73 erratum 858921 workaround
CNTVCT_EL0 or CNTPCT_EL0 counter read in Cortex-A73 (all versions)
might return a wrong value when the counter crosses a 32bit boundary.
Until now, there has been no case of Xen itself accessing CNTVCT_EL0,
and it should also be the guest OS's responsibility to deal with
this part.
But for CNTPCT, there exist several cases in Xen involving reading it,
so a possible workaround is to perform the read twice and return one
value or the other depending on whether a transition has taken place.
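A sketch of the double read, mirroring Linux's handling of the same erratum:

    static inline uint64_t read_cntpct_stable(void)
    {
        uint64_t old = READ_SYSREG64(CNTPCT_EL0);
        uint64_t new = READ_SYSREG64(CNTPCT_EL0);

        /* If bit 32 differs, the counter crossed a 32-bit boundary
         * between the reads; pick the value from the consistent side. */
        return (((old ^ new) >> 32) & 1) ? old : new;
    }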
Jan Beulich [Wed, 11 Nov 2020 07:57:32 +0000 (08:57 +0100)]
x86/p2m: paging_write_p2m_entry() is a private function
As it gets installed by p2m_pt_init(), it doesn't need to live in
paging.c. The function working in terms of l1_pgentry_t even further
indicates its non-paging-generic nature. Move it and drop its
paging_ prefix, not adding any new one now that it's static.
This then also makes more obvious that in the EPT case we wouldn't
risk mistakenly calling through the NULL hook pointer.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Julien Grall [Mon, 9 Nov 2020 20:28:59 +0000 (20:28 +0000)]
xen/arm: Always trap AMU system registers
The Activity Monitors Unit (AMU) has been introduced by ARMv8.4. It is
considered unsafe to expose it to guests, as it might expose
information about code executed by other guests or the host.
Arm provided a way to trap all the AMU system registers by setting
CPTR_EL2.TAM to 1.
Unfortunately, on older revisions of the specification, bit 30 (now
CPTR_EL2.TAM) was RES0. Because of that, Xen is setting it to 0 and
therefore the system registers would be exposed to the guest when it is
run on processors with AMU.
As the bit is marked as UNKNOWN at boot in Armv8.4, the only safe solution
for us is to always set CPTR_EL2.TAM to 1.
Guests trying to access the AMU system registers will now receive an
undefined instruction. Unfortunately, this means that even well-behaved
guests may fail to boot because we don't sanitize the ID registers.
This is a known issue with other Armv8.0+ features (e.g. SVE, Pointer
Auth). This will be taken care of separately.
Jan Beulich [Tue, 10 Nov 2020 13:39:03 +0000 (14:39 +0100)]
x86/CPUID: don't use UB shift when library is built as 32-bit
At least the insn emulator test harness will continue to be buildable
(and ought to continue to be usable) also as a 32-bit binary. (Right now
the CPU policy test harness is, too, but there it may be less relevant
to keep it functional, just like e.g. we don't support fuzzing the insn
emulator in 32-bit mode.) Hence the library code needs to cope with
this.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
With the event channel lock no longer disabling interrupts, commit 52e1fc47abc3a0123 ("evtchn/Flask: pre-allocate node on send path") can
be reverted again.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Juergen Gross [Tue, 10 Nov 2020 13:36:15 +0000 (14:36 +0100)]
xen/evtchn: rework per event channel lock
Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.
Rework the per event channel lock to be non-blocking for the case of
sending an event and removing the need for disabling interrupts for
taking the lock.
The lock is needed for avoiding races between event channel state
changes (creation, closing, binding) against normal operations (set
pending, [un]masking, priority changes).
Use a rwlock, but with some restrictions:
- Changing the state of an event channel (creation, closing, binding)
needs to use write_lock(), with ASSERT()ing that the lock is taken as
writer only when the state of the event channel is either before or
after the locked region appropriate (either free or unbound).
- Sending an event mostly needs to use read_trylock(); in case the lock
cannot be obtained the operation is omitted (see the sketch after this
list). This is needed as sending an event can happen with interrupts
off (at least in some cases).
- Dumping the event channel state for debug purposes is using
read_trylock(), too, in order to avoid blocking in case the lock is
taken as writer for a long time.
- All other cases can use read_lock().
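As a sketch, the send side then follows this pattern (wrapper names illustrative):

    if ( !evtchn_read_trylock(lchn) )
        return 0;               /* contended: omit the operation */

    /* ... set pending / notify ... */

    evtchn_read_unlock(lchn);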
Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Roger Pau Monné [Tue, 6 Oct 2020 16:23:27 +0000 (18:23 +0200)]
x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Currently a PV hardware domain can also be given control over the CPU
frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
However since commit 322ec7c89f6 the default behavior has been changed
to reject accesses to not explicitly handled MSRs, preventing PV
guests that manage CPU frequency from reading
MSR_IA32_PERF_{STATUS/CTL}.
Additionally some HVM guests (Windows at least) will attempt to read
MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
handling shared between HVM and PV guests, and add an explicit case
for reads to MSR_IA32_PERF_{STATUS/CTL}.
Restore previous behavior and allow PV guests with the required
permissions to read the contents of the mentioned MSRs. Non-privileged
guests will get 0 when trying to read those registers, as writes to
MSR_IA32_PERF_CTL by such guests will already be silently dropped.
Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs') Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jason Andryuk [Thu, 29 Oct 2020 19:03:32 +0000 (15:03 -0400)]
libxl: Add suppress-vmdesc to QEMU machine
The device model state saved by QMP xen-save-devices-state doesn't
include the vmdesc json. When restoring an HVM, xen-load-devices-state
always triggers "Expected vmdescription section, but got 0". This is
not a problem when restore comes from a file. However, when QEMU runs
in a linux stubdom and comes over a console, EOF is not received. This
causes a delay restoring - though it does restore.
Setting suppress-vmdesc skips looking for the vmdesc during restore and
avoids the wait.
QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
sets it manually for xenfv and xen_platform_pci=0 when -machine pc is
used.
QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
submission" added suppress-vmdesc in QEMU 2.3.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Setting vuart_gfn was missed when switching ARM guests to the PVH build.
Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
dom->vuart_gfn.
Without this change, xl console cannot connect to the vuart console (-t
vuart), see https://marc.info/?l=xen-devel&m=160402342101366.
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Juergen Gross [Fri, 6 Nov 2020 09:47:09 +0000 (10:47 +0100)]
xen/locking: harmonize spinlocks and rwlocks regarding preemption
Spinlocks and rwlocks behave differently in the try variants regarding
preemption: rwlocks are switching preemption off before testing the
lock, while spinlocks do so only after the first check.
Modify _spin_trylock() to disable preemption before testing whether the
lock is held, in order to be preemption-ready.
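A sketch of the reordering (the lock-free test helper is illustrative; the real code observes the ticket head/tail):

    int _spin_trylock(spinlock_t *lock)
    {
        preempt_disable();              /* now ahead of the check */
        if ( !lock_is_free(lock) )      /* illustrative test */
        {
            preempt_enable();
            return 0;
        }
        /* ... attempt to claim the ticket as before ... */
        return 1;
    }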
Jan Beulich [Thu, 5 Nov 2020 15:48:55 +0000 (16:48 +0100)]
libxl: fix libacpi dependency
$(DSDT_FILES-y) depends on the recursive make to have run in libacpi/
such that the file(s) itself/themselves were generated before
compilation gets attempted. The same, however, is also necessary for
generated headers, before any source files including them get compiled.
The dependency specified in libacpi's Makefile, otoh, is entirely
pointless nowadays - no compilation happens there anymore (except for
tools involved in building the generated files). Together with it, the
rule generating acpi.a also can go away.
Reported-by: Olaf Hering <olaf@aepfle.de> Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
Juergen Gross [Wed, 4 Nov 2020 08:26:42 +0000 (09:26 +0100)]
xen/spinlocks: spin_trylock with interrupts off is always fine
Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.
Add a bool parameter "try" to check_lock() for handling this case.
Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with the wrong interrupt flags, this will be caught when taking
the lock.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Ian Jackson [Wed, 19 Aug 2020 17:31:45 +0000 (18:31 +0100)]
SUPPORT.md: Desupport qemu trad except stub dm
While investigating XSA-335 we discovered that many upstream security
fixes were missing. It is not practical to backport them. There is
no good reason to be running this very ancient version of qemu, except
that it is the only way to run a stub dm which is currently supported
by upstream.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
of the MADT. In the ACPI 5.1 version of the spec, the struct for the
GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
ACPI 6.0, the struct is 80 bytes long. But, there is only one definition
in ACPICA for this struct -- and that is the 6.0 version. Hence, when
BAD_MADT_ENTRY() compares the struct size to the length in the GICC
subtable, it fails if 5.1 structs are in use, and there are systems in
the wild that have them.
This patch adds the BAD_MADT_GICC_ENTRY() that checks the GICC subtable
only, accounting for the difference in specification versions that are
possible. The BAD_MADT_ENTRY() will continue to work as is for all other
MADT subtables.
This code is being added to an arm64 header file since that is currently
the only architecture using the GICC subtable of the MADT. As a GIC is
specific to ARM, it is also unlikely the subtable will be used elsewhere.
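The macro being added is essentially (per the Linux commit this is taken from):

    #define ACPI_MADT_GICC_LENGTH \
        (acpi_gbl_FADT.header.revision < 6 ? 76 : 80)

    #define BAD_MADT_GICC_ENTRY(entry, end)                                 \
        (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||   \
         (entry)->header.length != ACPI_MADT_GICC_LENGTH)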
Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.") Signed-off-by: Al Stone <al.stone@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
[catalin.marinas@arm.com: extra brackets around macro arguments] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Julien Grall <jgrall@amazon.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
xen/arm: Check if the platform is not using ACPI before initializing Dom0less
Dom0less requires a device-tree. However, since commit 6e3e77120378
"xen/arm: setup: Relocate the Device-Tree later on in the boot", the
device-tree will not get unflattened when using ACPI.
This will lead to a crash during boot.
Given the complexity of setting up dom0less with ACPI (for instance how to
assign devices?), we should skip any code related to Dom0less when using
ACPI.
xen/arm: acpi: The fixmap area should always be cleared during failure/unmap
Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()" enforced that each set_fixmap() should be
paired with a clear_fixmap(). Any failure to follow the model would
result to a platform crash.
Unfortunately, the use of fixmap in the ACPI code was overlooked as it
is calling set_fixmap() but not clear_fixmap().
The function __acpi_os_map_table() is reworked so:
- We know before the mapping whether the fixmap region is big
enough for the mapping.
- It will fail if the fixmap is already in use. This is not a
change of behavior but clarifying the current expectation to avoid
hitting a BUG().
The function __acpi_os_unmap_table() will now call clear_fixmap().
xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
Currently, the former are still containing x86 specific code.
To avoid this rather strange split, the generic helpers are reworked so
they are arch-agnostic. This requires the introduction of a new helper
__acpi_os_unmap_memory() that will undo any mapping done by
__acpi_os_map_memory().
Currently, the arch-helper for unmap is basically a no-op so it only
returns whether the mapping was arch specific. But this will change
in the future.
Note that the x86 version of acpi_os_map_memory() was already able to
map the first 1MB region, hence there is no addition of new code.
Jan Beulich [Fri, 30 Oct 2020 13:30:35 +0000 (14:30 +0100)]
x86: fix build of PV shim when !GRANT_TABLE
To do its compat translation, shim code needs to include the compat
header. For this to work, this header first of all needs to be
generated.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Roger Pau Monné [Fri, 30 Oct 2020 13:28:03 +0000 (14:28 +0100)]
x86/hvm: process softirq while saving/loading entries
On slow systems with sync_console saving or loading the context of big
guests can cause the watchdog to trigger. Fix this by adding a couple
of process_pending_softirqs.
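A sketch of the pattern (loop bound illustrative):

    for ( i = 0; i < nr_entries; i++ )
    {
        /* Feed softirqs (and hence the watchdog) between entries. */
        process_pending_softirqs();
        /* ... save or load entry i ... */
    }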
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 30 Oct 2020 13:27:23 +0000 (14:27 +0100)]
x86/shadow: correct GFN use by sh_unshadow_for_p2m_change()
Luckily sh_remove_all_mappings()'s use of the parameter is limited to
generation of log messages. Nevertheless we'd better pass correct GFNs
around:
- the incoming GFN, when replacing a large page, may not be large page
aligned,
- incrementing by page-size-scaled values can't be right.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 30 Oct 2020 13:26:46 +0000 (14:26 +0100)]
x86/shadow: sh_{make,destroy}_monitor_table() are "even more" HVM-only
With them depending on just the number of shadow levels, there's no need
for more than one instance of them, and hence no need for any hook (IOW 452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
far enough). Move the functions to hvm.c while dropping the dead
is_pv_32bit_domain() code paths.
While moving the code, replace a stale comment reference to
sh_install_xen_entries_in_l4(). Doing so made me notice the function
also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
gets done here as well.
Also make their first parameters const.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
Bertrand Marquis [Mon, 26 Oct 2020 16:21:33 +0000 (16:21 +0000)]
xen/arm: Warn user on cpu errata 832075
When a Cortex A57 processor is affected by CPU erratum 832075, a guest
not implementing the workaround for it could deadlock the system.
Add a warning during boot informing the user that only trusted guests
should be executed on the system.
An equivalent warning is already given to the user by KVM on cores
affected by this erratum.
Also taint the hypervisor as insecure when this erratum applies, and
mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.
Andrew Cooper [Thu, 29 Oct 2020 12:03:43 +0000 (12:03 +0000)]
x86/pv: Drop stale comment in dom0_construct_pv()
This comment was introduced by c/s 22a857bde9b8 in 2003, and became stale with
c/s 99db02d50976 also in 2003. Both of these predate the introduction of
struct vcpu, when the processor field moved objects.
17 years is long enough for this comment to be mis-informing people reading
the code.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 19 Oct 2020 14:51:22 +0000 (15:51 +0100)]
x86/pv: Flush TLB in response to paging structure changes
With MMU_UPDATE, a PV guest can make changes to higher level pagetables. This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush (if necessary) after making updates.
However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.
Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings. Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.
For all L2 and higher modifications, flush the TLB. PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global. (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
Express the necessary flushes as a set of booleans which accumulate across the
operation. Comment the flushing logic extensively.
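A sketch of the accumulation pattern described above (flag name illustrative):

    bool flush_linear_pt = false;

    /* ... per-update processing: */
    if ( page_level >= 2 )          /* L2 or higher entry modified */
        flush_linear_pt = true;

    /* ... once, after the batch: */
    if ( flush_linear_pt )
        flush_tlb_local();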
This is XSA-286.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Thu, 22 Oct 2020 10:28:58 +0000 (11:28 +0100)]
x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
to the L4 path in do_mmu_update().
However, this was unnecessary.
It is the guest's responsibility to perform appropriate TLB flushing if the L4
modification altered an established mapping in a flush-relevant way. In this
case, an MMUEXT_OP hypercall will follow. The case which Xen needs to cover
is when new mappings are created, and the resync on the exit-to-guest path
covers this correctly.
There is a corner case with multiple vCPUs in hypercalls at the same time,
which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
behaviour.
Architecturally, established TLB entries can continue to be used until the
broadcast flush has completed. Therefore, even with concurrent hypercalls,
the guest cannot depend on older mappings not being used until an MMUEXT_OP
hypercall completes. Xen's implementation of guest-initiated flushes will
take correct effect on top of an in-progress hypercall, picking up new
mappings set before the other vCPU's MMUEXT_OP completes.
Note: The correctness of this change is not impacted by whether XPTI uses
global mappings or not. Correctness there depends on the behaviour of Xen on
the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
This is (not really) XSA-286 (but necessary to simplify the logic).
Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>