Olaf Hering [Wed, 19 Apr 2023 11:00:26 +0000 (11:00 +0000)]
tools: ipxe: update for fixing build with GCC12
Use a snapshot which includes commit b0ded89e917b48b73097d3b8b88dfa3afb264ed0
("[build] Disable dangling pointer checking for GCC"), which fixes the build
with GCC 12.
Signed-off-by: Olaf Hering <olaf@aepfle.de> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit 18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47)
Andrew Cooper [Tue, 5 Mar 2024 11:01:22 +0000 (12:01 +0100)]
x86/cpu-policy: Allow for levelling of VERW side effects
MD_CLEAR and FB_CLEAR need OR-ing across a migrate pool. Allow this, by
having them unconditionally set in max, with the host values reflected in
default. Annotate the bits as having special properties.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: de17162cafd27f2865a3102a2ec0f386a02ed03d
master date: 2024-03-01 20:14:19 +0000
Roger Pau Monné [Tue, 5 Mar 2024 11:00:47 +0000 (12:00 +0100)]
x86/altcall: always use a temporary parameter stashing variable
The usage in ALT_CALL_ARG() on clang of:
    register union {
        typeof(arg) e;
        const unsigned long r;
    } ...
When `arg` is the first argument to alternative_{,v}call() and
const_vlapic_vcpu() is used results in clang 3.5.0 complaining with:
arch/x86/hvm/vlapic.c:141:47: error: non-const static data member must be initialized out of line
alternative_call(hvm_funcs.test_pir, const_vlapic_vcpu(vlapic), vec) )
Work around this by pulling `arg1` into a local variable, like it's done for
further arguments (arg2, arg3...).
Originally arg1 wasn't pulled into a variable because for the a1_ register
local variable the possible clobbering as a result of operators on other
variables doesn't matter.
Note clang version 3.8.1 seems to already be fixed and doesn't require the
workaround, but since it's harmless do it uniformly everywhere.
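As an illustration of the resulting pattern (a simplified stand-in for the
real ALT_CALL_ARG() macro, with hypothetical names and only the first-argument
case shown):

    /*
     * Sketch only, not the verbatim Xen macro: stash the argument
     * expression in a plain local first, so the register union's
     * initializer only ever sees a simple lvalue rather than a complex
     * expression such as const_vlapic_vcpu(vlapic).
     */
    #define ALT_CALL_ARG1(arg)                   \
        typeof(arg) a1_val_ = (arg);             \
        register union {                         \
            typeof(arg) e;                       \
            unsigned long r;                     \
        } a1_ asm ("rdi") = { .e = a1_val_ }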
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Fixes: 2ce562b2a413 ('x86/altcall: use a union as register type for function parameters on clang') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: c20850540ad6a32f4fc17bde9b01c92b0df18bf0
master date: 2024-02-29 08:21:49 +0100
Jason Andryuk [Tue, 5 Mar 2024 11:00:30 +0000 (12:00 +0100)]
libxl: Fix segfault in device_model_spawn_outcome
libxl__spawn_qdisk_backend() explicitly sets guest_config to NULL when
starting QEMU (the usual launch through libxl__spawn_local_dm() has a
guest_config though).
Bail early on a NULL guest_config/d_config. This skips the QMP queries
for chardevs and VNC, but this xenpv QEMU instance isn't expected to
provide those - only qdisk (or 9pfs backends after an upcoming change).
Signed-off-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
master commit: d4f3d35f043f6ef29393166b0dd131c8102cf255
master date: 2024-02-29 08:18:38 +0100
Roger Pau Monné [Tue, 5 Mar 2024 10:59:43 +0000 (11:59 +0100)]
xen/livepatch: fix norevert test attempt to open-code revert
The purpose of the norevert test is to install a dummy handler that replaces
the internal Xen revert code, and then perform the revert in the post-revert
hook. For that purpose the usage of the previous common_livepatch_revert() is
not enough, as that just reverts specific functions, but not the whole state of
the payload.
Remove both common_livepatch_{apply,revert}() and instead expose
revert_payload{,_tail}() in order to perform the patch revert from the
post-revert hook.
Fixes: 6047104c3ccc ('livepatch: Add per-function applied/reverted state tracking marker') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>
master commit: cdae267ce10d04d71d1687b5701ff2911a96b6dc
master date: 2024-02-28 16:57:25 +0000
Roger Pau Monné [Tue, 5 Mar 2024 10:59:35 +0000 (11:59 +0100)]
xen/livepatch: search for symbols in all loaded payloads
When checking if an address belongs to a patch, or when resolving a symbol,
take into account all loaded livepatch payloads, even if not applied.
This is required in order for the pre-apply and post-revert hooks to work
properly, or else Xen won't detect the instruction pointer belonging to those
hooks as being part of the currently active text.
Move the RCU handling to be used for payload_list instead of applied_list, as
now the calls from trap code will iterate over the payload_list.
Roger Pau Monné [Tue, 5 Mar 2024 10:59:26 +0000 (11:59 +0100)]
xen/livepatch: register livepatch regions when loaded
Currently livepatch regions are registered as virtual regions only after the
livepatch has been applied.
This can lead to issues when using the pre-apply or post-revert hooks, as at
that point the livepatch is not in the virtual regions list. If a livepatch
pre-apply hook contains a WARN() it would trigger a hypervisor crash, as the
code to handle the bug frame won't be able to find the instruction pointer that
triggered the #UD in any of the registered virtual regions, and hence crash.
Fix this by adding the livepatch payloads as virtual regions as soon as loaded,
and only remove them once the payload is unloaded. This requires some changes
to the virtual regions code, as the removal of the virtual regions is no longer
done in stop machine context, and hence an RCU barrier is added in order to
make sure there are no users of the virtual region after it's been removed from
the list.
Roger Pau Monné [Tue, 5 Mar 2024 10:58:36 +0000 (11:58 +0100)]
x86/spec: do not print thunk option selection if not built-in
Since the thunk built-in enable is printed as part of the "Compiled-in
support:" line, avoid printing anything in "Xen settings:" if the thunk is
disabled at build time.
Note the BTI-Thunk option printing is also adjusted to print a colon in the
same way the other options on the line do.
Requested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 576528a2a742069af203e90c613c5c93e23c9755
master date: 2024-02-27 14:58:40 +0100
Andrew Cooper [Tue, 5 Mar 2024 10:57:02 +0000 (11:57 +0100)]
xen/sched: Fix UB shift in compat_set_timer_op()
Tamas reported this UBSAN failure from fuzzing:
(XEN) ================================================================================
(XEN) UBSAN: Undefined behaviour in common/sched/compat.c:48:37
(XEN) left shift of negative value -2147425536
(XEN) ----[ Xen-4.19-unstable x86_64 debug=y ubsan=y Not tainted ]----
...
(XEN) Xen call trace:
(XEN) [<ffff82d040307c1c>] R ubsan.c#ubsan_epilogue+0xa/0xd9
(XEN) [<ffff82d040308afb>] F __ubsan_handle_shift_out_of_bounds+0x11a/0x1c5
(XEN) [<ffff82d040307758>] F compat_set_timer_op+0x41/0x43
(XEN) [<ffff82d04040e4cc>] F hvm_do_multicall_call+0x77f/0xa75
(XEN) [<ffff82d040519462>] F arch_do_multicall_call+0xec/0xf1
(XEN) [<ffff82d040261567>] F do_multicall+0x1dc/0xde3
(XEN) [<ffff82d04040d2b3>] F hvm_hypercall+0xa00/0x149a
(XEN) [<ffff82d0403cd072>] F vmx_vmexit_handler+0x1596/0x279c
(XEN) [<ffff82d0403d909b>] F vmx_asm_vmexit_handler+0xdb/0x200
Left-shifting any negative value is strictly undefined behaviour in C, and
the two parameters here come straight from the guest.
The fuzzer happened to choose lo 0xf, hi 0x8000e300.
Switch everything to be unsigned values, making the shift well defined.
As GCC documents:
As an extension to the C language, GCC does not use the latitude given in
C99 and C11 only to treat certain aspects of signed '<<' as undefined.
However, -fsanitize=shift (and -fsanitize=undefined) will diagnose such
cases.
this was deemed not to need an XSA.
Note: The unsigned -> signed conversion for do_set_timer_op()'s s_time_t
parameter is also well defined. C makes it implementation defined, and GCC
defines it as reduction modulo 2^N to be within range of the new type.
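As a minimal sketch of the well-defined construction (helper name
hypothetical; the real code operates on the compat hypercall's lo/hi halves):

    #include <stdint.h>

    /*
     * Combine two guest-supplied 32-bit halves into a 64-bit timeout
     * using only unsigned arithmetic, so the shift is well defined even
     * for values such as hi = 0x8000e300.  The final unsigned -> signed
     * conversion is implementation defined, and GCC defines it as
     * reduction modulo 2^N.
     */
    static int64_t combine_timeout(uint32_t lo, uint32_t hi)
    {
        return (int64_t)(((uint64_t)hi << 32) | lo);
    }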
Fixes: 2942f45e09fb ("Enable compatibility mode operation for HYPERVISOR_sched_op and HYPERVISOR_set_timer_op.") Reported-by: Tamas K Lengyel <tamas@tklengyel.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ae6d4fd876765e6d623eec67d14f5d0464be09cb
master date: 2024-02-01 19:52:44 +0000
Jan Beulich [Tue, 5 Mar 2024 10:56:31 +0000 (11:56 +0100)]
x86/HVM: hide SVM/VMX when their enabling is prohibited by firmware
... or we fail to enable the functionality on the BSP for other reasons.
The only place where hardware announcing the feature is recorded is the
raw CPU policy/featureset.
Inspired by https://lore.kernel.org/all/20230921114940.957141-1-pbonzini@redhat.com/.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 0b5f149338e35a795bf609ce584640b0977f9e6c
master date: 2024-01-09 14:06:34 +0100
The failure is reported for the following line:
(paddr_t)(uintptr_t)(_start + boot_phys_offset)
This occurs because the compiler treats (ptr + size) with size bigger than
PTRDIFF_MAX as undefined behavior. To address this, switch to macro
virt_to_maddr(), given the future plans to eliminate boot_phys_offset.
Jan Beulich [Tue, 27 Feb 2024 13:12:11 +0000 (14:12 +0100)]
x86: account for shadow stack in exception-from-stub recovery
Dealing with exceptions raised from within emulation stubs involves
discarding return address (replaced by exception related information).
Such discarding of course also requires removing the corresponding entry
from the shadow stack.
Also amend the comment in fixup_exception_return(), to further clarify
why use of ptr[1] can't be an out-of-bounds access.
This is CVE-2023-46841 / XSA-451.
Fixes: 209fb9919b50 ("x86/extable: Adjust extable handling to be shadow stack compatible") Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 91f5f7a9154919a765c3933521760acffeddbf28
master date: 2024-02-27 13:49:22 +0100
Roger Pau Monné [Tue, 27 Feb 2024 13:11:40 +0000 (14:11 +0100)]
x86/spec: fix BRANCH_HARDEN option to only be set when build-enabled
The current logic to handle the BRANCH_HARDEN option will report it as enabled
even when build-time disabled. Fix this by only allowing the option to be set
when support for it is built into Xen.
However with -O2 clang will generate incorrect code, given the following
example:
    unsigned int func(uint8_t t)
    {
        return t;
    }

    static void bar(uint8_t b)
    {
        int ret_;
        register uint8_t di asm("rdi") = b;
        register unsigned long si asm("rsi");
        register unsigned long dx asm("rdx");
        register unsigned long cx asm("rcx");
        register unsigned long r8 asm("r8");
        register unsigned long r9 asm("r9");
        register unsigned long r10 asm("r10");
        register unsigned long r11 asm("r11");
Note the truncation of the unsigned int parameter 'a' of foo() to uint8_t when
passed into bar() is lost. clang doesn't zero extend the parameters in the
callee when required, as the psABI mandates.
The above can be worked around by using a union when defining the register
variables, so that `di` becomes:
    register union {
        uint8_t e;
        unsigned long r;
    } di asm("rdi") = { .e = b };
Which results in the following code generated for `foo()`:

    foo:                                # @foo
        movzbl  %dil, %edi
        callq   func
        retq
So the truncation is no longer lost. Apply the workaround only when building
with clang.
Roger Pau Monné [Tue, 27 Feb 2024 13:10:39 +0000 (14:10 +0100)]
xen/cmdline: fix printf format specifier in no_config_param()
'*' sets the width field, which is the minimum number of characters to output,
but what we want in no_config_param() is the precision instead, which is '.*'
as it imposes a maximum limit on the output.
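A small self-contained illustration of the difference (generic C, not the Xen
code):

    #include <stdio.h>

    int main(void)
    {
        const char *opt = "noxpti=dom0"; /* hypothetical parameter string */
        int len = 6;                     /* length of the "noxpti" prefix */

        printf("[%*s]\n", len, opt);  /* '*' = width: pads, never truncates */
        printf("[%.*s]\n", len, opt); /* '.*' = precision: caps at len chars */
        return 0;
    }

The first line prints the full string (width is only a minimum), while the
second prints just "noxpti".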
Fixes: 68d757df8dd2 ('x86/pv: Options to disable and/or compile out 32bit PV support') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ef101f525173cf51dc70f4c77862f6f10a8ddccf
master date: 2024-02-26 10:17:40 +0100
Jan Beulich [Tue, 27 Feb 2024 13:09:55 +0000 (14:09 +0100)]
x86emul: add missing EVEX.R' checks
EVEX.R' is not ignored in 64-bit code when encoding a GPR or mask
register. While for mask registers suitable checks are in place (there
also covering EVEX.R), they were missing for the few cases where in
EVEX-encoded instructions ModR/M.reg encodes a GPR. While for VPEXTRW
the bit is replaced before an emulation stub is invoked, for
VCVT{,T}{S,D,H}2{,U}SI this actually would have led to #UD from inside
an emulation stub, in turn raising #UD to the guest, but accompanied by
log messages indicating something's wrong in Xen nevertheless.
Fixes: 001bd91ad864 ("x86emul: support AVX512{F,BW,DQ} extract insns") Fixes: baf4a376f550 ("x86emul: support AVX512F legacy-equivalent scalar int/FP conversion insns") Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: cb319824bfa8d3c9ea0410cc71daaedc3e11aa2a
master date: 2024-02-22 11:54:07 +0100
Jan Beulich [Tue, 27 Feb 2024 13:09:37 +0000 (14:09 +0100)]
build: make sure build fails when running kconfig fails
Because of using "-include", failure to (re)build auto.conf (with
auto.conf.cmd produced as a secondary target) won't stop make from
continuing the build. Arrange for it to be possible to drop the - from
Rules.mk, requiring that the include be skipped for tools-only targets.
Note that relying on the inclusion in those cases wouldn't be correct
anyway, as it might be a stale file (yet to be rebuilt) which would be
included, while during initial build, the file would be absent
altogether.
Fixes: 8d4c17a90b0a ("xen/build: silence make warnings about missing auto.conf*") Reported-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
master commit: d34e5fa2e8db19f23081f46a3e710bb122130691
master date: 2024-02-22 11:52:47 +0100
libxl: Disable relocating memory for qemu-xen in stubdomain too
According to comments (and experiments) qemu-xen cannot handle memory
relocation done by hvmloader. The code was already disabled when running
qemu-xen in dom0 (see libxl__spawn_local_dm()), but it was missed when
adding qemu-xen support to stubdomain. Adjust libxl__spawn_stub_dm() to
be consistent in this regard.
Reported-by: Neowutran <xen@neowutran.ovh> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Jason Andryuk <jandryuk@gmail.com> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
master commit: 97883aa269f6745a6ded232be3a855abb1297e0d
master date: 2024-02-22 11:48:22 +0100
Anthony PERARD [Tue, 27 Feb 2024 13:08:50 +0000 (14:08 +0100)]
build: Replace `which` with `command -v`
The `which` command is not standard, may not exist on the build host,
or may not behave as expected by the build system. It is recommended
to use `command -v` to find out if a command exists and to get its path,
and it's part of a POSIX shell standard (at least, it seems to be
mandatory since IEEE Std 1003.1-2008, but was optional before).
Fixes: c8a8645f1efe ("xen/build: Automatically locate a suitable python interpreter") Fixes: 3b47bcdb6d38 ("xen/build: Use a distro version of figlet") Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: f93629b18b528a5ab1b1092949c5420069c7226c
master date: 2024-02-19 12:45:48 +0100
Jan Beulich [Tue, 27 Feb 2024 13:08:20 +0000 (14:08 +0100)]
x86/HVM: tidy state on hvmemul_map_linear_addr()'s error path
While in the vast majority of cases failure of the function will not
be followed by re-invocation with the same emulation context, a few
very specific insns - involving multiple independent writes, e.g. ENTER
and PUSHA - exist where this can happen. Since failure of the function
only signals to the caller that it ought to try an MMIO write instead,
such failure also cannot be assumed to result in wholesale failure of
emulation of the current insn. Instead we have to maintain internal
state such that another invocation of the function with the same
emulation context remains possible. To achieve that we need to reset MFN
slots after putting page references on the error path.
Note that all of this affects debugging code only, in causing an
assertion to trigger (higher up in the function). There's otherwise no
misbehavior - such a "leftover" slot would simply be overwritten by new
contents in a release build.
Also extend the related unmap() assertion, to further check for MFN 0.
Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings") Reported-by: Manuel Andreas <manuel.andreas@tum.de> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Paul Durrant <paul@xen.org>
master commit: e72f951df407bc3be82faac64d8733a270036ba1
master date: 2024-02-13 09:36:14 +0100
Petr Beneš [Tue, 27 Feb 2024 13:07:45 +0000 (14:07 +0100)]
x86/hvm: Fix fast singlestep state persistence
This patch addresses an issue where the fast singlestep setting would persist
despite xc_domain_debug_control being called with XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF.
Specifically, if fast singlestep was enabled in a VMI session and that session
stopped before the MTF trap occurred, the fast singlestep setting remained
active even though MTF itself was disabled. This led to a situation where, upon
starting a new VMI session, the first event to trigger an EPT violation would
cause the corresponding EPT event callback to be skipped due to the lingering
fast singlestep setting.
The fix ensures that the fast singlestep setting is properly reset when
disabling single step debugging operations.
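A self-contained model of the fix (field and function names hypothetical,
mirroring the description rather than the exact Xen code):

    #include <stdbool.h>

    struct vcpu_debug {
        bool single_step;
        struct {
            bool enabled;
            unsigned int p2midx;
        } fast_single_step;
    };

    /*
     * XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF handling: besides disabling the
     * MTF-based single step, clear the lingering fast-singlestep latch so
     * a later EPT-violation callback isn't skipped by stale state.
     */
    static void single_step_off(struct vcpu_debug *v)
    {
        v->single_step = false;
        v->fast_single_step.enabled = false;  /* the missing reset */
        v->fast_single_step.p2midx = 0;
    }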
Signed-off-by: Petr Beneš <w1benny@gmail.com> Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>
master commit: 897def94b56175ce569673a05909d2f223e1e749
master date: 2024-02-12 09:37:58 +0100
Roger Pau Monné [Tue, 27 Feb 2024 13:07:12 +0000 (14:07 +0100)]
amd-vi: fix IVMD memory type checks
The current code that parses the IVMD blocks is relaxed with regard to the
restriction that such unity regions should always fall into memory ranges
marked as reserved in the memory map.
However the type checks for the IVMD addresses are inverted, and as a result
IVMD ranges falling into RAM areas are accepted. Note that having such ranges
in the first place is a firmware bug, as IVMD should always fall into reserved
ranges.
Fixes: ed6c77ebf0c1 ('AMD/IOMMU: check / convert IVMD ranges for being / to be reserved') Reported-by: Ox <oxjo@proton.me> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Tested-by: oxjo <oxjo@proton.me> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 83afa313583019d9f159c122cecf867735d27ec5
master date: 2024-02-06 11:56:13 +0100
Sort doesn't work on columns VBD_OO, VBD_RD, VBD_WR and VBD_RSECT.
Fix by adjusting variable names in compare functions.
Bug fix only. No functional change.
Fixes: 91c3e3dc91d6 ("tools/xentop: Display '-' when stats are not available.") Signed-off-by: Cyril Rébert (zithro) <slack@rabbit.lu> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
master commit: 29f17d837421f13c0e0010802de1b2d51d2ded4a
master date: 2024-02-05 17:58:23 +0000
Michal Orzel [Fri, 2 Feb 2024 07:04:07 +0000 (08:04 +0100)]
lib{fdt,elf}: move lib{fdt,elf}-temp.o and their deps to $(targets)
At the moment, trying to run xencov read/reset (calling SYSCTL_coverage_op
under the hood) results in a crash. This is due to a profiler trying to
access data in the .init.* sections (libfdt for Arm and libelf for x86)
that are stripped after boot. Normally, the build system compiles any
*.init.o file without COV_FLAGS. However, these two libraries are
handled differently as sections will be renamed to init after linking.
To override COV_FLAGS to empty for these libraries, lib{fdt,elf}.o were
added to nocov-y. This worked until e321576f4047 ("xen/build: start using
if_changed") that added lib{fdt,elf}-temp.o and their deps to extra-y.
This way, even though these objects appear as prerequisites of
lib{fdt,elf}.o and the settings should propagate to them, make can also
build them as a prerequisite of __build, in which case COV_FLAGS would
still have the unwanted flags. Fix it by switching to $(targets) instead.
Also, for libfdt, append libfdt.o to nocov-y only if CONFIG_OVERLAY_DTB
is not set. Otherwise, there is no section renaming and we should be able
to run the coverage.
Fixes: e321576f4047 ("xen/build: start using if_changed") Signed-off-by: Michal Orzel <michal.orzel@amd.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 79519fcfa0605bbf19d8c02b979af3a2c8afed68
master date: 2024-01-23 12:02:44 +0100
Andrew Cooper [Fri, 2 Feb 2024 07:03:26 +0000 (08:03 +0100)]
x86/vmx: Disallow the use of inactivity states
Right now, vvmx will blindly copy L12's ACTIVITY_STATE into the L02 VMCS and
enter the vCPU. Luckily for us, nested-virt is explicitly unsupported for
security bugs.
The inactivity states are HLT, SHUTDOWN and WAIT-FOR-SIPI, and as noted by the
SDM in Vol3 27.7 "Special Features of VM Entry":
If VM entry ends with the logical processor in an inactive activity state,
the VM entry generates any special bus cycle that is normally generated when
that activity state is entered from the active state.
Also,
Some activity states unconditionally block certain events.
I.e. A VMEntry with ACTIVITY=SHUTDOWN will initiate a platform reset, while a
VMEntry with ACTIVITY=WAIT-FOR-SIPI will really block everything other than
SIPIs.
Both of these activity states are for the TXT ACM to use, not for regular
hypervisors, and Xen doesn't support dropping the HLT intercept either.
There are two paths in Xen which operate on ACTIVITY_STATE.
1) The vmx_{get,set}_nonreg_state() helpers for VM-Fork.
As regular VMs can't use any inactivity states, this is just duplicating
the 0 from construct_vmcs(). Retain the ability to query activity_state,
but crash the domain on any attempt to set an inactivity state.
2) Nested virt, because of ACTIVITY_STATE in vmcs_gstate_field[].
Explicitly hide the inactivity states in the guest's view of MSR_VMX_MISC,
and remove ACTIVITY_STATE from vmcs_gstate_field[].
In virtual_vmentry(), we should trigger a VMEntry failure for the use of
any inactivity states, but there's no support for that in the code at all
so leave a TODO for when we finally start working on nested-virt in
earnest.
Reported-by: Reima Ishii <ishiir@g.ecc.u-tokyo.ac.jp> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>
master commit: 3643bb53a05b7c8fbac072c63bef1538f2a6d0d2
master date: 2024-01-18 20:59:06 +0000
Andrew Cooper [Fri, 2 Feb 2024 07:02:51 +0000 (08:02 +0100)]
x86/vmx: Fix IRQ handling for EXIT_REASON_INIT
When receiving an INIT, a prior bugfix tried to ignore the INIT and continue
onwards.
Unfortunately it's not safe to return at that point in vmx_vmexit_handler().
Just out of context in the first hunk is a local_irqs_enabled() which is
depended-upon by the return-to-guest path, causing the following checklock
failure in debug builds:
(XEN) Error: INIT received - ignoring
(XEN) CHECKLOCK FAILURE: prev irqsafe: 0, curr irqsafe 1
(XEN) Xen BUG at common/spinlock.c:132
(XEN) ----[ Xen-4.19-unstable x86_64 debug=y Tainted: H ]----
...
(XEN) Xen call trace:
(XEN) [<ffff82d040238e10>] R check_lock+0xcd/0xe1
(XEN) [<ffff82d040238fe3>] F _spin_lock+0x1b/0x60
(XEN) [<ffff82d0402ed6a8>] F pt_update_irq+0x32/0x3bb
(XEN) [<ffff82d0402b9632>] F vmx_intr_assist+0x3b/0x51d
(XEN) [<ffff82d040206447>] F vmx_asm_vmexit_handler+0xf7/0x210
Luckily, this is benign in release builds. Accidentally having IRQs disabled
when trying to take an IRQs-on lock isn't a deadlock-vulnerable pattern.
Drop the problematic early return. In hindsight, it's wrong to skip other
normal VMExit steps.
Fixes: b1f11273d5a7 ("x86/vmx: Don't spuriously crash the domain when INIT is received") Reported-by: Reima ISHII <ishiir@g.ecc.u-tokyo.ac.jp> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d1f8883aebe00f6a9632d77ab0cd5c6d02c9cbe4
master date: 2024-01-18 20:59:06 +0000
Roger Pau Monné [Fri, 2 Feb 2024 07:02:20 +0000 (08:02 +0100)]
x86/intel: ensure Global Performance Counter Control is setup correctly
When Architectural Performance Monitoring is available, the PERF_GLOBAL_CTRL
MSR contains per-counter enable bits that are ANDed with the enable bit in the
counter EVNTSEL MSR in order for a PMC counter to be enabled.
So far the watchdog code seems to have relied on the PERF_GLOBAL_CTRL enable
bits being set by default, but at least on some Intel Sapphire and Emerald
Rapids this is no longer the case, and Xen reports:
Testing NMI watchdog on all CPUs: 0 40 stuck
The first CPU on each package is started with PERF_GLOBAL_CTRL zeroed, so PMC0
doesn't start counting when the enable bit in EVNTSEL0 is set, due to the
relevant enable bit in PERF_GLOBAL_CTRL not being set.
Check and adjust PERF_GLOBAL_CTRL during CPU initialization so that all the
general-purpose PMCs are enabled. Doing so brings the state of the package-BSP
PERF_GLOBAL_CTRL in line with the rest of the CPUs on the system.
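A hedged sketch of the check-and-adjust logic (MSR accessors stubbed out;
0x38f is the architectural IA32_PERF_GLOBAL_CTRL index):

    #include <stdint.h>

    #define MSR_CORE_PERF_GLOBAL_CTRL 0x38f

    /* Stubs standing in for the privileged MSR accessors. */
    static uint64_t rdmsr_stub(uint32_t msr) { (void)msr; return 0; }
    static void wrmsr_stub(uint32_t msr, uint64_t v) { (void)msr; (void)v; }

    /*
     * Ensure the enable bits for all general-purpose PMCs are set, so
     * that setting the enable bit in an EVNTSEL MSR is enough to start
     * the corresponding counter.
     */
    static void setup_perf_global_ctrl(unsigned int nr_gp_counters)
    {
        uint64_t want = (1ULL << nr_gp_counters) - 1;
        uint64_t ctrl = rdmsr_stub(MSR_CORE_PERF_GLOBAL_CTRL);

        if ( (ctrl & want) != want )
            wrmsr_stub(MSR_CORE_PERF_GLOBAL_CTRL, ctrl | want);
    }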
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 6bdb965178bbb3fc50cd4418d4770a7789956e2c
master date: 2024-01-17 10:40:52 +0100
Roger Pau Monné [Fri, 2 Feb 2024 07:01:50 +0000 (08:01 +0100)]
CirrusCI: drop FreeBSD 12
Went EOL by the end of December 2023, and the pkg repos have been shut down.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: c2ce3466472e9c9eda79f5dc98eb701bc6fdba20
master date: 2024-01-15 12:20:11 +0100
Roger Pau Monné [Fri, 2 Feb 2024 07:01:09 +0000 (08:01 +0100)]
x86/amd: Extend CPU erratum #1474 fix to more affected models
Erratum #1474 has now been extended to cover models from family 17h ranges
00-2Fh, so the errata now covers all the models released under Family
17h (Zen, Zen+ and Zen2).
Additionally extend the workaround to Family 18h (Hygon), since it's based on
the Zen architecture and very likely affected.
Rename all the zen2 related symbols to fam17, since the errata doesn't
exclusively affect Zen2 anymore.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 23db507a01a4ec5259ec0ab43d296a41b1c326ba
master date: 2023-12-21 12:19:40 +0000
Andrew Cooper [Tue, 30 Jan 2024 13:38:38 +0000 (14:38 +0100)]
VT-d: Fix "else" vs "#endif" misplacement
In domain_pgd_maddr() the "#endif" is misplaced with respect to "else". This
generates incorrect logic when CONFIG_HVM is compiled out, as the "else" body
is executed unconditionally.
Rework the logic to use IS_ENABLED() instead of explicit #ifdef-ary, as it's
clearer to follow. This in turn involves adjusting p2m_get_pagetable() to
compile when CONFIG_HVM is disabled.
This is XSA-450 / CVE-2023-46840.
Fixes: 033ff90aa9c1 ("x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only") Reported-by: Teddy Astie <teddy.astie@vates.tech> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: cc6ba68edf6dcd18c3865e7d7c0f1ed822796426
master date: 2024-01-30 14:29:15 +0100
Roger Pau Monné [Tue, 30 Jan 2024 13:37:39 +0000 (14:37 +0100)]
pci: fail device assignment if phantom functions cannot be assigned
The current behavior is that no error is reported if (some) phantom functions
fail to be assigned during device add or assignment, so the operation succeeds
even if some phantom functions are not correctly setup.
This can lead to devices possibly being successfully assigned to a domU while
some of the device phantom functions are still assigned to dom0. Even when the
device is assigned to domIO before being assigned to a domU, phantom functions
might fail to be assigned to domIO, and also fail to be assigned to the domU,
leaving them assigned to dom0.
Since the device can generate requests using the IDs of those phantom
functions, given the scenario above a device in such state would be in control
of a domU, but still capable of generating transactions that use a context ID
targeting dom0 owned memory.
Modify device assign in order to attempt to deassign the device if phantom
functions failed to be assigned.
Note that device addition is not modified in the same way, as in that case the
device is assigned to a trusted domain, and hence partial assign can lead to
device malfunction but not a security issue.
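A self-contained model of the adjusted flow (all names hypothetical; the real
code walks devfns by the device's phantom stride):

    /*
     * Returns 0 on success; on any phantom-function failure the whole
     * device is deassigned, so no function is left on the previous owner.
     */
    static int assign_with_phantoms(unsigned int devfn, unsigned int stride,
                                    unsigned int nr_phantom,
                                    int (*assign)(unsigned int),
                                    void (*deassign_all)(void))
    {
        int rc = 0;

        for ( unsigned int i = 0; i <= nr_phantom; i++ )
            if ( (rc = assign(devfn + i * stride)) )
                break;

        if ( rc )
            deassign_all();  /* the fix: unwind instead of partial success */

        return rc;
    }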
This is XSA-449 / CVE-2023-46839
Fixes: 4e9950dc1bd2 ('IOMMU: add phantom function support') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: cb4ecb3cc17b02c2814bc817efd05f3f3ba33d1e
master date: 2024-01-30 14:28:01 +0100
Michal Orzel [Tue, 12 Dec 2023 13:51:20 +0000 (14:51 +0100)]
xen/arm: page: Avoid pointer overflow on cache clean & invalidate
On Arm32, after cleaning and invalidating the last dcache line of the top
domheap page i.e. VA = 0xfffff000 (as a result of flushing the page to
RAM), we end up adding the value of a dcache line size to the pointer
once again, which results in a pointer arithmetic overflow (with 64B line
size, operation 0xffffffc0 + 0x40 overflows to 0x0). Such behavior is
undefined and given the wide range of compiler versions we support, it is
difficult to determine what could happen in such a scenario.
Modify clean_and_invalidate_dcache_va_range() as well as
clean_dcache_va_range() and invalidate_dcache_va_range() due to similarity
of handling to prevent pointer arithmetic overflow. Modify the loops to
use an additional variable to store the index of the next cacheline.
Add an assert to prevent passing a region that wraps around which is
illegal and would end up in a page fault anyway (region 0-2MB is
unmapped). Lastly, return early if size passed is 0.
Note that on Arm64, we don't have this problem given that the max VA
space we support is 48-bits.
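A sketch of the overflow-safe loop shape (the cacheline size and the actual
maintenance instruction are placeholders):

    #define DCACHE_LINE_BYTES 64UL /* placeholder; read from CTR on real Arm */

    static void clean_dcache_va_range_sketch(const void *p, unsigned long size)
    {
        unsigned long start = (unsigned long)p;
        unsigned long idx = 0;

        if ( !size )
            return;

        /* Align the start down to a cacheline boundary. */
        size += start & (DCACHE_LINE_BYTES - 1);
        start &= ~(DCACHE_LINE_BYTES - 1);

        do {
            /* issue the clean/invalidate for the line at start + idx */
            idx += DCACHE_LINE_BYTES;
        } while ( idx < size );
    }

Because only the integer index is incremented, the address one line past the
final cacheline is never computed.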
Juergen Gross [Tue, 12 Dec 2023 13:50:10 +0000 (14:50 +0100)]
xen/sched: fix sched_move_domain()
Do cleanup in sched_move_domain() in a dedicated service function,
which is called either in error case with newly allocated data, or in
success case with the old data to be freed.
This will at once fix some subtle bugs which sneaked in due to
forgetting to overwrite some pointers in the error case.
Fixes: 70fadc41635b ("xen/cpupool: support moving domain between cpupools with different granularity") Reported-by: René Winther Højgaard <renewin@proton.me> Initial-fix-by: Jan Beulich <jbeulich@suse.com> Initial-fix-by: George Dunlap <george.dunlap@cloud.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: George Dunlap <george.dunlap@cloud.com>
master commit: 23792cc0f22cff4e106d838b83aa9ae1cb6ffaf4
master date: 2023-12-07 13:37:25 +0000
Julien Grall [Tue, 12 Dec 2023 13:49:55 +0000 (14:49 +0100)]
Only compile the hypervisor with -Wdeclaration-after-statement
Right now, all tools and the hypervisor are compiled with the option
-Wdeclaration-after-statement. While most of the code in the hypervisor
is controlled by us, for tools we may import external libraries.
The build will fail if one of them is using the construct we are
trying to prevent. This is the case when building against Python 3.12
and Yocto:
| In file included from /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/Python.h:44,
| from xen/lowlevel/xc/xc.c:8:
| /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/object.h: In function 'Py_SIZE':
| /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/object.h:233:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
| 233 | PyVarObject *var_ob = _PyVarObject_CAST(ob);
| | ^~~~~~~~~~~
| In file included from /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/Python.h:53:
| /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/cpython/longintrepr.h: In function '_PyLong_CompactValue':
| /srv/storage/alex/yocto/build-virt/tmp/work/core2-64-poky-linux/xen-tools/4.17+stable/recipe-sysroot/usr/include/python3.12/cpython/longintrepr.h:121:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
| 121 | Py_ssize_t sign = 1 - (op->long_value.lv_tag & _PyLong_SIGN_MASK);
| | ^~~~~~~~~~
| cc1: all warnings being treated as errors
Looking at the tools directory, a fair few directories already add
-Wno-declaration-after-statement to inhibit the default behavior.
We have always built the hypervisor with the flag, so for now remove
the flag only for anything but the hypervisor. We can decide at a later
time whether we want to relax it.
Also remove the -Wno-declaration-after-statement in some subdirectory
as the flag is now unnecessary.
Part of the commit message was taken from Alexander's first proposal:
Link: https://lore.kernel.org/xen-devel/20231128174729.3880113-1-alex@linutronix.de/ Reported-by: Alexander Kanavin <alex@linutronix.de> Acked-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Jason Andryuk <jandryuk@gmail.com> Signed-off-by: Julien Grall <jgrall@amazon.com>
xen/hypervisor: Don't use cc-option-add for -Wdeclaration-after-statement
Per Andrew's comment in [1] all the compilers we support should
recognize the flag.
Roger Pau Monné [Tue, 12 Dec 2023 13:47:02 +0000 (14:47 +0100)]
x86/x2apic: introduce a mixed physical/cluster mode
The current implementation of x2APIC requires to either use Cluster Logical or
Physical mode for all interrupts. However the selection of Physical vs Logical
is not done at APIC setup, an APIC can be addressed both in Physical or Logical
destination modes concurrently.
Introduce a new x2APIC mode called Mixed, which uses Logical Cluster mode for
IPIs, and Physical mode for external interrupts, thus attempting to use the
best method for each interrupt type.
Using Physical mode for external interrupts allows more vectors to be used, and
interrupt balancing to be more accurate.
Using Logical Cluster mode for IPIs allows fewer accesses to the ICR register
when sending those, as multiple CPUs can be targeted with a single ICR register
write.
A simple test calling flush_tlb_all() 10000 times on a tight loop on AMD EPYC
9754 with 512 CPUs gives the following figures in nanoseconds:
So Mixed has no difference when compared to Cluster mode, and Physical mode is
248% slower when compared to either Mixed or Cluster modes with a 95%
confidence.
Note that Xen uses Cluster mode by default, and hence is already using the
fastest way for IPI delivery at the cost of reducing the amount of vectors
available system-wide.
Make the newly introduced mode the default one.
Note the printing of the APIC addressing mode done in connect_bsp_APIC() has
been removed, as with the newly introduced mixed mode this would require more
fine grained printing, or else would be incorrect. The addressing mode can
already be derived from the APIC driver in use, which is printed by different
helpers.
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Henry Wang <Henry.Wang@arm.com>
master commit: e3c409d59ac87ccdf97b8c7708c81efa8069cb31
master date: 2023-11-07 09:59:48 +0000
If rangeset_new() fails, err would not be set to an appropriate error
code. Set it to -ENOMEM.
Fixes: 580c458699e3 ("xen/domain: Call arch_domain_create() as early as possible in domain_create()") Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ff1178062094837d55ef342070e58316c43a54c9
master date: 2023-12-05 10:00:51 +0100
Juergen Gross [Wed, 6 Dec 2023 09:49:29 +0000 (10:49 +0100)]
xen/sched: fix adding offline cpu to cpupool
Trying to add an offline cpu to a cpupool can crash the hypervisor,
as the probably non-existing percpu area of the cpu is accessed before
the availability of the cpu is tested. This can happen in case
the cpupool's granularity is "core" or "socket".
Fix that by testing the cpu to be online.
Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving cpus to/from cpupools") Reported-by: René Winther Højgaard <renewin@proton.me> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 06e8d65d33896aa90f5b6d9b2bce7f11433b33c9
master date: 2023-12-05 09:57:38 +0100
Jan Beulich [Wed, 6 Dec 2023 09:48:56 +0000 (10:48 +0100)]
x86emul: avoid triggering event related assertions
The assertion at the end of x86_emulate_wrapper() as well as the ones
in x86_emul_{hw_exception,pagefault}() can trigger if we ignore
X86EMUL_EXCEPTION coming back from certain hook functions. Squash
exceptions when merely probing MSRs, plus on SWAPGS'es "best effort"
error handling path.
In adjust_bnd() add another assertion after the read_xcr(0, ...)
invocation, paralleling the one in x86emul_get_fpu() - XCR0 reads should
never fault when XSAVE is (implicitly) known to be available.
Also update the respective comment in x86_emulate_wrapper().
Fixes: 14a6be89ec04 ("x86emul: correct EFLAGS.TF handling") Fixes: cb2626c75813 ("x86emul: conditionally clear BNDn for branches") Fixes: 6eb43fcf8a0b ("x86emul: support SWAPGS") Reported-by: AFL Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 787d11c5aaf4d3411d4658cff137cd49b0bd951b
master date: 2023-12-05 09:57:05 +0100
Both Intel and AMD manuals agree that in x2APIC mode, the APIC LDR and ID
registers are derivable from each other through a fixed formula.
Xen uses that formula, but applies it to vCPU IDs (which are sequential)
rather than x2APIC IDs (which are not, at the moment). As I understand it,
this is an attempt to tightly pack vCPUs into clusters so each cluster has
16 vCPUs rather than 8, but this is a spec violation.
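For reference, the fixed relationship both vendors' manuals describe is
LDR[31:16] = x2APIC_ID[19:4] (the cluster) and LDR[15:0] = 1 << x2APIC_ID[3:0]
(one bit per CPU within a 16-CPU cluster); a one-line rendering:

    #include <stdint.h>

    /* Derive the x2APIC LDR from the x2APIC ID per the fixed formula. */
    static uint32_t x2apic_ldr_from_id(uint32_t id)
    {
        return ((id >> 4) << 16) | (UINT32_C(1) << (id & 0xf));
    }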
This patch fixes the implementation so we follow the x2APIC spec for new
VMs, while preserving the behaviour (buggy or fixed) for migrated-in VMs.
While touching that area, remove the existing printk statement in
vlapic_load_fixup() (as the checks it performed didn't make sense in x2APIC
mode and wouldn't affect the outcome) and put another printk as an else
branch so we get warnings when trying to load nonsensical LDR values we don't
know about.
Fixes: f9e0cccf7b35 ("x86/HVM: fix ID handling of x2APIC emulation") Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 90309854fd2440fb08b4c808f47d7670ba0d250d
master date: 2023-11-29 10:05:55 +0100
Roger Pau Monné [Wed, 6 Dec 2023 09:46:47 +0000 (10:46 +0100)]
livepatch: do not use .livepatch.funcs section to store internal state
Currently the livepatch logic inside of Xen will use fields of struct
livepatch_func in order to cache internal state of patched functions. Note
this is a field that is part of the payload, and is loaded as an ELF section
(.livepatch.funcs), taking into account the SHF_* flags in the section
header.
The flags for the .livepatch.funcs section, as set by livepatch-build-tools,
are SHF_ALLOC, which leads to its contents (the array of livepatch_func
structures) being placed in read-only memory:
This previously went unnoticed, as all writes to the fields of livepatch_func
happen in the critical region that had WP disabled in CR0. After 8676092a0f16
however WP is no longer toggled in CR0 for patch application, and only the
hypervisor .text mappings are made write-accessible. That leads to the
following page fault when attempting to apply a livepatch:
----[ Xen-4.19-unstable x86_64 debug=y Tainted: C ]----
CPU: 4
RIP: e008:[<ffff82d040221e81>] common/livepatch.c#apply_payload+0x45/0x1e1
[...]
Xen call trace:
[<ffff82d040221e81>] R common/livepatch.c#apply_payload+0x45/0x1e1
[<ffff82d0402235b2>] F check_for_livepatch_work+0x385/0xaa5
[<ffff82d04032508f>] F arch/x86/domain.c#idle_loop+0x92/0xee
****************************************
Panic on CPU 4:
FATAL PAGE FAULT
[error_code=0003]
Faulting linear address: ffff82d040625079
****************************************
Fix this by moving the internal Xen function patching state out of
livepatch_func into an area not allocated as part of the ELF payload. While
there also constify the array of livepatch_func structures in order to prevent
further surprises.
Note there's still one field (old_addr) that gets set during livepatch load. I
consider this fine since the field is read-only after load, and at the point
the field gets set the underlying mapping hasn't been made read-only yet.
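A sketch of the resulting data layout (field names hypothetical, following the
description):

    /*
     * Loaded from the ELF .livepatch.funcs section; now const, and mapped
     * read-only once the payload load completes.
     */
    struct livepatch_func {
        const char *name;
        void *new_addr;
        void *old_addr;  /* the one field still written during load */
        /* ... sizes, opaque padding ... */
    };

    /*
     * Xen-internal per-function patching state, allocated separately by
     * the hypervisor so it stays writable during patch application.
     */
    struct livepatch_fstate {
        unsigned int applied;          /* applied/reverted tracking */
        unsigned char insn_buffer[16]; /* saved bytes for revert */
    };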
Fixes: 8676092a0f16 ('x86/livepatch: Fix livepatch application when CET is active') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>
xen/livepatch: fix livepatch tests
The current set of in-tree livepatch tests in xen/test/livepatch started
failing after the constify of the payload funcs array, and the movement of the
status data into a separate array.
Fix the tests so they respect the constness of the funcs array and also make
use of the new location of the per-func state data.
Fixes: 82182ad7b46e ('livepatch: do not use .livepatch.funcs section to store internal state') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>
master commit: 82182ad7b46e0f7a3856bb12c7a9bf2e2a4570bc
master date: 2023-11-27 15:16:01 +0100
master commit: 902377b690f42ddf44ae91c4b0751d597f1cd694
master date: 2023-11-29 10:46:42 +0000
Frediano Ziglio [Wed, 6 Dec 2023 09:46:01 +0000 (10:46 +0100)]
x86/mem_sharing: Release domain if we are not able to enable memory sharing
In case it's not possible to enable memory sharing (mem_sharing_control
fails) we just return the error code without releasing the domain
acquired some lines above by rcu_lock_live_remote_domain_by_id().
Fixes: 72f8d45d69b8 ("x86/mem_sharing: enable mem_sharing on first memop") Signed-off-by: Frediano Ziglio <frediano.ziglio@cloud.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
master commit: fbcec32d6d3ea0ac329301925b317478316209ed
master date: 2023-11-27 12:06:13 +0000
Juergen Gross [Thu, 23 Nov 2023 11:24:12 +0000 (12:24 +0100)]
xen/sched: fix sched_move_domain()
When moving a domain out of a cpupool running with the credit2
scheduler and having multiple run-queues, the following ASSERT() can
be observed:
(XEN) Xen call trace:
(XEN) [<ffff82d04023a700>] R credit2.c#csched2_unit_remove+0xe3/0xe7
(XEN) [<ffff82d040246adb>] S sched_move_domain+0x2f3/0x5b1
(XEN) [<ffff82d040234cf7>] S cpupool.c#cpupool_move_domain_locked+0x1d/0x3b
(XEN) [<ffff82d040236025>] S cpupool_move_domain+0x24/0x35
(XEN) [<ffff82d040206513>] S domain_kill+0xa5/0x116
(XEN) [<ffff82d040232b12>] S do_domctl+0xe5f/0x1951
(XEN) [<ffff82d0402276ba>] S timer.c#timer_lock+0x69/0x143
(XEN) [<ffff82d0402dc71b>] S pv_hypercall+0x44e/0x4a9
(XEN) [<ffff82d0402012b7>] S lstar_enter+0x137/0x140
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion 'svc->rqd == c2rqd(sched_unit_master(unit))' failed at common/sched/credit2.c:1159
(XEN) ****************************************
This is happening as sched_move_domain() is setting a different cpu
for a scheduling unit without telling the scheduler. When this unit is
removed from the scheduler, the ASSERT() will trigger.
In non-debug builds the result is usually a clobbered pointer, leading
to another crash a short time later.
Fix that by swapping the two involved actions (setting another cpu and
removing the unit from the scheduler).
Roger Pau Monné [Tue, 14 Nov 2023 13:01:33 +0000 (14:01 +0100)]
x86/i8259: do not assume interrupts always target CPU0
Sporadically we have seen the following during AP bringup on AMD platforms
only:
microcode: CPU59 updated from revision 0x830107a to 0x830107a, date = 2023-05-17
microcode: CPU60 updated from revision 0x830104d to 0x830107a, date = 2023-05-17
CPU60: No irq handler for vector 27 (IRQ -2147483648)
microcode: CPU61 updated from revision 0x830107a to 0x830107a, date = 2023-05-17
This is similar to the issue raised on Linux commit 36e9e1eab777e, where they
observed i8259 (active) vectors getting delivered to CPUs different than 0.
On AMD or Hygon platforms adjust the target CPU mask of i8259 interrupt
descriptors to contain all possible CPUs, so that APs will reserve the vector
at startup if any legacy IRQ is still delivered through the i8259. Note that
if the IO-APIC takes over those interrupt descriptors the CPU mask will be
reset.
Spurious i8259 interrupt vectors however (IRQ7 and IRQ15) can be injected even
when all i8259 pins are masked, and hence would need to be handled on all CPUs.
Continue to reserve PIC vectors on CPU0 only, but do check for such spurious
interrupts on all CPUs if the vendor is AMD or Hygon. Note that once the
vectors get used by devices, detecting PIC spurious interrupts will no longer be
possible, however the device driver should be able to cope with spurious
interrupts. Such PIC spurious interrupts occurring when the vector is in use
by a local APIC routed source will lead to an extra EOI, which might
unintentionally clear a different vector from ISR. Note this is already the
current behavior, so assume it's infrequent enough to not cause real issues.
Finally, adjust the printed message to display the CPU where the spurious
interrupt has been received, so it looks like:
microcode: CPU1 updated from revision 0x830107a to 0x830107a, date = 2023-05-17
cpu1: spurious 8259A interrupt: IRQ7
microcode: CPU2 updated from revision 0x830104d to 0x830107a, date = 2023-05-17
Amends: 3fba06ba9f8b ('x86/IRQ: re-use legacy vector ranges on APs') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 87f37449d586b4d407b75235bb0a171e018e25ec
master date: 2023-11-02 10:50:59 +0100
Roger Pau Monné [Tue, 14 Nov 2023 13:01:07 +0000 (14:01 +0100)]
x86/x2apic: remove usage of ACPI_FADT_APIC_CLUSTER
The ACPI FADT APIC_CLUSTER flag mandates that when interrupt delivery is in
Logical mode, the APIC must be configured for the Cluster destination model.
However in apic_x2apic_probe() the flag is incorrectly used to gate whether
Physical mode can be used.
Since Xen, when in x2APIC mode, only uses Logical mode together with the
Cluster model, completely remove the check for ACPI_FADT_APIC_CLUSTER, as Xen
always fulfills the requirement signaled by the flag.
Fixes: eb40ae41b658 ('x86/Kconfig: add option for default x2APIC destination mode') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 26a449ce32cef33f2cb50602be19fcc0c4223ba9
master date: 2023-11-02 10:50:26 +0100
David Woodhouse [Tue, 14 Nov 2023 13:00:37 +0000 (14:00 +0100)]
x86/pv-shim: fix grant table operations for 32-bit guests
When switching to call the shim functions from the normal handlers, the
compat_grant_table_op() function was omitted, leaving it calling the
real grant table operations in !PV_SHIM_EXCLUSIVE builds. This leaves a
32-bit shim guest failing to set up its real grant table with the parent
hypervisor.
Fixes: e7db635f4428 ("x86/pv-shim: Don't modify the hypercall table") Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 93ec30bc545f15760039c23ee4b97b80c0b3b3b3
master date: 2023-10-31 16:10:14 +0000
Tamas K Lengyel [Tue, 14 Nov 2023 13:00:20 +0000 (14:00 +0100)]
x86/mem_sharing: add missing m2p entry when mapping shared_info page
When mapping in the shared_info page to a fork the m2p entry wasn't set
resulting in the shared_info being reset even when the fork reset was called
with only reset_state and not reset_memory. This results in an extra
unnecessary TLB flush.
Fixes: 1a0000ac775 ("mem_sharing: map shared_info page to same gfn during fork") Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 23eb39acf011ef9bbe02ed4619c55f208fbcd39b
master date: 2023-10-31 16:10:14 +0000
Jan Beulich [Tue, 14 Nov 2023 12:58:18 +0000 (13:58 +0100)]
x86: support data operand independent timing mode
[1] specifies a long list of instructions which are intended to exhibit
timing behavior independent of the data they operate on. On certain
hardware this independence is optional, controlled by a bit in a new
MSR. Provide a command line option to control the mode Xen and its
guests are to operate in, with a build time control over the default.
Longer term we may want to allow guests to control this.
Since Arm64 supposedly also has such a control, put command line option
and Kconfig control in common files.
Roger Pau Monné [Tue, 14 Nov 2023 12:56:39 +0000 (13:56 +0100)]
iommu/vt-d: fix SAGAW capability parsing
SAGAW is a bitmap field, with bits 1, 2 and 3 signaling support for 3, 4 and 5
level page tables respectively. According to the Intel VT-d specification, an
IOMMU can report multiple SAGAW bits being set.
Commit 859d11b27912 claims to replace the open-coded find_first_set_bit(), but
it's actually replacing an open-coded implementation that finds the last set
bit.
The change forces the used AGAW to the lowest supported by the IOMMU instead of
the highest one between 1 and 2.
Restore the previous SAGAW parsing by using fls() instead of
find_first_set_bit(), in order to get the highest (supported) AGAW to be used.
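A self-contained sketch of the corrected parsing (fls open-coded here; bit 0
masked as reserved):

    /*
     * SAGAW bits 1/2/3 advertise 3/4/5-level page-table support, matching
     * AGAW values 1/2/3.  Pick the highest supported one, i.e. the most
     * significant set bit, which is what fls() yields;
     * find_first_set_bit() returns the least significant, forcing the
     * lowest AGAW instead.
     */
    static unsigned int highest_agaw(unsigned int sagaw)
    {
        unsigned int agaw = 0;

        sagaw &= 0xe;  /* mask reserved bit 0 */
        while ( sagaw >>= 1 )
            agaw++;

        return agaw;   /* e.g. sagaw 0x6 -> agaw 2 (4-level) */
    }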
However there's a caveat related to the value the AW context entry field must
be set to when using passthrough mode:
"When the Translation-type (TT) field indicates pass-through processing (10b),
this field must be programmed to indicate the largest AGAW value supported by
hardware." [0]
Newer Intel IOMMU implementations support 5 level page tables for the IOMMU,
and signal such support in SAGAW bit 3.
Enabling 5 level paging support (AGAW 3) is too risky at this point in the Xen
4.18 release, so instead put a bodge to unconditionally disable passthrough
mode if SAGAW has any bits greater than 2 set. Ignore bit 0; it's reserved in
current specifications, but had a meaning in the past and is unlikely to be
reused in the future.
Note the message about unhandled SAGAW bits being set is printed
unconditionally, regardless of whether passthrough mode is enabled. This is
done in order to easily notice IOMMU implementations with not yet supported
SAGAW values.
Roger Pau Monné [Tue, 14 Nov 2023 12:56:13 +0000 (13:56 +0100)]
iommu: fix quarantine mode command line documentation
With the addition of per-device quarantine page tables the sink page is now
exclusive for each device, and thus writable. Update the documentation to
reflect the current implementation.
Fixes: 14dd241aad8a ('IOMMU/x86: use per-device page tables for quarantining') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 94a5127ebeb4a005f128150909ca78bfea50206a
master date: 2023-10-19 21:52:52 +0100
Roger Pau Monné [Tue, 14 Nov 2023 12:55:23 +0000 (13:55 +0100)]
x86/pvh: fix identity mapping of low 1MB
The mapping of memory regions below the 1MB mark was all done by the PVH dom0
builder code, causing the region to be avoided by the arch specific IOMMU
hardware domain initialization code. That led to the IOMMU being enabled
without reserved regions in the low 1MB identity mapped in the p2m for PVH
hardware domains. Firmware which happens to be missing RMRR/IVMD ranges
describing E820 reserved regions in the low 1MB would transiently trigger IOMMU
faults until the p2m is populated by the PVH dom0 builder:
Those errors have been observed on the osstest pinot{0,1} boxes (AMD Fam15h
Opteron(tm) Processor 3350 HE).
Rely on the IOMMU arch init code to create any identity mappings for reserved
regions in the low 1MB range (like it already does for reserved regions
elsewhere), and leave the mapping of any holes to be performed by the dom0
builder code.
Fixes: 6b4f6a31ace1 ('x86/PVH: de-duplicate mappings for first Mb of Dom0 memory') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 4bb882fe6e4430782101fe06379649df1bbd458a
master date: 2023-10-19 09:52:43 +0200
This is an AMD feature to reduce the IBRS handling overhead. Once enabled,
processes running at CPL=0 are automatically IBRS-protected even if
SPEC_CTRL.IBRS is not set. Furthermore, the RAS/RSB is cleared on VMEXIT.
The feature is exposed in CPUID and toggled in EFER.
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 8347d6bb29bfd0c3b5acdc078574a8643c5a5637
master date: 2023-05-30 18:24:07 +0100
tools/pygrub: Fix pygrub's --entry flag for python3
string.atoi() has been deprecated since Python 2.0, has a big scary warning
in the python2.7 docs and is absent from python3 altogether. int() does the
same thing and is compatible with both.
See https://docs.python.org/2/library/string.html#string.atoi:
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 40387f62061c4b9c780cda78b4ac0e29d478f648
master date: 2023-10-18 15:44:31 +0100
George Dunlap [Tue, 14 Nov 2023 12:52:27 +0000 (13:52 +0100)]
cxenstored: wait until after reset to notify dom0less domains
Commit fc2b57c9a ("xenstored: send an evtchn notification on
introduce_domain") introduced the sending of an event channel to the
guest when first introduced, so that dom0less domains waiting for the
connection would know that xenstore was ready to use.
Unfortunately, it was introduced in introduce_domain(), which 1) is
called by other functions, where such functionality is unneeded, and
2) after the main XS_INTRODUCE call, calls domain_conn_reset(). This
introduces a race condition, whereby if xenstored is delayed, a domain
can wake up, send messages to the buffer, only to have them deleted by
xenstore before finishing its processing of the XS_INTRODUCE message.
Move the connect-and-notify call into do_introduce() instead, after the
domain_conn_reset(); predicated on the state being in the
XENSTORE_RECONNECT state.
(We don't need to check for "restoring", since that value is always
passed as "false" from do_domain_introduce()).
Also take the opportunity to add a missing wmb barrier after resetting
the indexes of the ring in domain_conn_reset.
This change will also remove an extra event channel notification for
dom0 (because the notification is now done by do_introduce which is not
called for dom0.) The extra dom0 event channel notification was only
introduced by fc2b57c9a and was never present before. It is not needed
because dom0 is the one to tell xenstored the connection parameters, so
dom0 has to know that the ring page is setup correctly by the time
xenstored starts looking at it. It is dom0 that performs the ring page
init.
Michal Orzel [Tue, 14 Nov 2023 12:52:01 +0000 (13:52 +0100)]
x86: Clarify that only 5 hypercall parameters are supported
The x86 hypercall ABI really used to have 6-argument hypercalls. V4V, the
downstream predecessor to Argo, did take a 6th argument.
However, the 6th arg being %ebp in the 32bit ABI makes it unusable in
practice, because that's the frame pointer in builds with frame pointers
enabled. Therefore Argo was altered to being a 5-arg hypercall when it was
upstreamed.
c/s 2f531c122e95 ("x86: limit number of hypercall parameters to 5") removed
the ability for hypercalls to take 6 arguments.
Update the documentation to match reality.
Signed-off-by: Michal Orzel <michal.orzel@amd.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
master commit: c035151902689aa5a3765aeb16fa52755917b9ca
master date: 2023-10-10 10:03:49 +0100
Roger Pau Monné [Tue, 14 Nov 2023 12:50:35 +0000 (13:50 +0100)]
x86/amd: do not expose HWCR.TscFreqSel to guests
OpenBSD 7.3 will unconditionally access HWCR if the TSC is reported as
Invariant, and it will then attempt to also unconditionally access PSTATE0 if
HWCR.TscFreqSel is set (currently the case on Xen).
The motivation for exposing HWCR.TscFreqSel was to avoid warning messages from
Linux. It has been agreed that Linux should be changed instead to not
complain about missing HWCR.TscFreqSel when running virtualized.
The relation between HWCR.TscFreqSel and PSTATE0 is not clearly written down in
the PPR, but it's natural for OSes to attempt to fetch the P0 frequency if the
TSC is stated to increment at the P0 frequency.
Exposing PSTATEn (PSTATE0 at least) with all zeroes is not a suitable solution
because the PstateEn bit is read-write, and OSes could legitimately attempt to
set PstateEn=1 which Xen couldn't handle.
Furthermore, the TscFreqSel bit is model specific and was never safe to expose
like this in the first place. At a minimum it should have had a toolstack
adjustment to know not to migrate such a VM.
Therefore, simply remove the bit. Note the HWCR itself is an architectural
register, and does need to be accessible by the guest. Since HWCR contains
both architectural and non-architectural bits, going forward care must be taken
to assert the exposed value is correct on newer CPU families.
Reported-by: Solène Rapenne <solene@openbsd.org> Link: https://github.com/QubesOS/qubes-issues/issues/8502 Fixes: 14b95b3b8546 ('x86/AMD: expose HWCR.TscFreqSel to guests') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: e4ca4e261da3fdddd541c3a9842b1e9e2ad00525
master date: 2023-09-18 15:07:49 +0200
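A hedged sketch of the post-fix behaviour, with a simplified stand-in for
the real MSR read handler:

    #include <stdint.h>

    #define MSR_K8_HWCR        0xc0010015u
    #define HWCR_TSC_FREQ_SEL  (1ULL << 24)  /* model-specific; no longer leaked */

    /* Simplified: HWCR stays readable (it's architectural), but reads as
     * zero rather than advertising TscFreqSel and luring OSes into
     * probing PSTATE0. */
    static int guest_rdmsr_sketch(uint32_t msr, uint64_t *val)
    {
        switch ( msr )
        {
        case MSR_K8_HWCR:
            *val = 0;            /* previously included HWCR_TSC_FREQ_SEL */
            return 0;
        default:
            return -1;           /* the real handler raises #GP */
        }
    }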
Andrew Cooper [Thu, 26 Oct 2023 13:37:38 +0000 (14:37 +0100)]
x86/spec-ctrl: Remove conditional IRQs-on-ness for INT $0x80/0x82 paths
Before speculation defences, some paths in Xen could genuinely get away with
being IRQs-on at entry. But XPTI invalidated this property on most paths, and
attempting to maintain it on the remaining paths was a mistake.
Fast forward, and DO_SPEC_CTRL_COND_IBPB (protection for AMD BTC/SRSO) is not
IRQ-safe, yet was running with IRQs enabled in some cases. The other actions taken on
these paths happen to be IRQ-safe.
Make entry_int82() and int80_direct_trap() unconditionally Interrupt Gates
rather than Trap Gates. Remove the conditional re-adjustment of
int80_direct_trap() in smp_prepare_cpus(), and have entry_int82() explicitly
enable interrupts when safe to do so.
In smp_prepare_cpus(), with the conditional re-adjustment removed, the
clearing of pv_cr3 is the only remaining action gated on XPTI, and it is out
of place anyway, repeating work already done by smp_prepare_boot_cpu(). Drop
the entire if() condition to avoid leaving an incorrect vestigial remnant.
Also drop comments which make incorrect statements about when it's safe to
enable interrupts.
This is XSA-446 / CVE-2023-46836
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
(cherry picked from commit a48bb129f1b9ff55c22cf6d2b589247c8ba3b10e)
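A conceptual sketch, not Xen's actual asm, of why the gate type matters
(helper names hypothetical):

    /* With a Trap Gate, IF is preserved on entry, so the handler could
     * start with IRQs on.  With an Interrupt Gate, the CPU clears IF,
     * so the non-IRQ-safe speculation hygiene runs first. */
    void do_spec_ctrl_cond_ibpb(void);   /* hypothetical: not IRQ-safe */
    void local_irq_enable(void);
    void handle_hypercall(void);

    void entry_int82_sketch(void)
    {
        /* IRQs are guaranteed off here by the Interrupt Gate. */
        do_spec_ctrl_cond_ibpb();
        local_irq_enable();      /* explicit, once it is safe */
        handle_hypercall();
    }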
Roger Pau Monne [Wed, 11 Oct 2023 11:14:21 +0000 (13:14 +0200)]
iommu/amd-vi: use correct level for quarantine domain page tables
The current setup of the quarantine page tables assumes that the quarantine
domain (dom_io) has been initialized with an address width of
DEFAULT_DOMAIN_ADDRESS_WIDTH (48).
However, dom_io, being a PV domain, gets its AMD-Vi IOMMU page-table levels
based on the maximum (hot pluggable) RAM address, and hence on systems with no
RAM above the 512GB mark only 3 page-table levels are configured in the IOMMU.
On systems without RAM above the 512GB boundary amd_iommu_quarantine_init()
will set up page tables for the scratch page with 4 levels, while the IOMMU will
be configured to use 3 levels only. The page destined to be used as level 1,
and to contain a directory of PTEs, ends up being the address in a PTE itself,
and thus the level 1 page becomes the leaf page. Without the level mismatch it's
the level 0 page that would be the leaf page instead.
The level 1 page won't be used as such, and hence it's not possible to use it
to gain access to other memory on the system. However that page is not cleared
in amd_iommu_quarantine_init() as part of re-initialization of the device
quarantine page tables, and hence data on the level 1 page can be leaked
between device usages.
Fix this by making sure the paging levels set up by amd_iommu_quarantine_init()
match the number configured on the IOMMUs.
Note that IVMD regions are not affected by this issue, as those areas are
mapped taking the configured paging levels into account.
This is XSA-445 / CVE-2023-46835
Fixes: ea38867831da ('x86 / iommu: set up a scratch page in the quarantine domain') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit fe1e4668b373ec4c1e5602e75905a9fa8cc2be3f)
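A hedged sketch of the level computation the fix implies, where the
quarantine init must use the same result as the IOMMU configuration:

    /* Each page-table level resolves 9 bits of GFN above the 4K offset,
     * so a 48-bit address space needs 4 levels, but <512GB of RAM needs
     * only 3.  The quarantine page tables must use this same count. */
    static unsigned int paging_levels(unsigned long max_gfn)
    {
        unsigned int levels = 1;

        while ( max_gfn >> (9 * levels) )
            levels++;

        return levels;
    }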
Andrew Cooper [Tue, 26 Sep 2023 19:03:36 +0000 (20:03 +0100)]
x86/pv: Correct the auditing of guest breakpoint addresses
The use of access_ok() is buggy, because it permits access to the compat
translation area. 64bit PV guests don't use the XLAT area, but on AMD
hardware, the DBEXT feature allows a breakpoint to match up to a 4G aligned
region, allowing the breakpoint to reach outside of the XLAT area.
Prior to c/s cda16c1bb223 ("x86: mirror compat argument translation area for
32-bit PV"), the live GDT was within 4G of the XLAT area.
All together, this allowed a malicious 64bit PV guest on AMD hardware to place
a breakpoint over the live GDT, and trigger a #DB livelock (CVE-2015-8104).
Introduce breakpoint_addr_ok() and explain why __addr_ok() happens to be an
appropriate check in this case.
For Xen 4.14 and later, this is a latent bug because the XLAT area has moved
to be on its own with nothing interesting adjacent. For Xen 4.13 and older on
AMD hardware, this fixes a PV-trigger-able DoS.
This is part of XSA-444 / CVE-2023-34328.
Fixes: 65e355490817 ("x86/PV: support data breakpoint extension registers") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit dc9d9aa62ddeb14abd5672690d30789829f58f7e)
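A hedged sketch of the tightened check, relying on the lower-canonical-half
property for 48-bit linear addresses:

    #include <stdbool.h>
    #include <stdint.h>

    /* __addr_ok()-style: the address must be in the lower canonical half,
     * below everything Xen maps, rather than merely passing access_ok(),
     * which also admits the XLAT area. */
    static bool breakpoint_addr_ok_sketch(uint64_t addr)
    {
        return addr < (1ULL << 47);
    }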
Andrew Cooper [Tue, 26 Sep 2023 19:03:36 +0000 (20:03 +0100)]
x86/svm: Fix asymmetry with AMD DR MASK context switching
The handling of MSR_DR{0..3}_MASK is asymmetric between PV and HVM guests.
HVM guests context switch in based on the guest view of DBEXT, whereas PV
guests switch in based on the host capability. Both guest types leave the
context dirty for the next vCPU.
This leads to the following issue:
* PV or HVM vCPU has debugging active (%dr7 + mask)
* Switch out deactivates %dr7 but leaves other state stale in hardware
* HVM vCPU with debugging active but unable to see DBEXT is switched in
* Switch in loads %dr7 but leaves the mask MSRs alone
Now, the HVM vCPU is operating in the context of the prior vCPU's mask MSR,
and furthermore in a case where it genuinely expects there to be no masking
MSRs.
As a stopgap, adjust the HVM path to switch in/out the masks based on host
capabilities rather than guest visibility (i.e. like the PV path). Adjustment
of the intercepts still needs to be dependent on the guest visibility
of DBEXT.
This is part of XSA-444 / CVE-2023-34327
Fixes: c097f54912d3 ("x86/SVM: support data breakpoint extension registers") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit 5d54282f984bb9a7a65b3d12208584f9fdf1c8e1)
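A hedged sketch of the stopgap (MSR indices per the AMD architecture;
structures simplified):

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_AMD64_DR0_ADDRESS_MASK  0xc0011027u
    #define MSR_AMD64_DR1_ADDRESS_MASK  0xc0011019u  /* DR2/DR3 follow */

    extern bool host_has_dbext;              /* host capability, not guest view */
    void wrmsrl(uint32_t msr, uint64_t val); /* stand-in for the real helper */

    struct vcpu_msrs { uint64_t dr_mask[4]; };

    /* Load the masks whenever the *host* has DBEXT (like the PV path),
     * so no vCPU can inherit a stale mask from its predecessor. */
    static void svm_ctxt_switch_in_masks(const struct vcpu_msrs *msrs)
    {
        if ( !host_has_dbext )
            return;

        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, msrs->dr_mask[0]);
        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, msrs->dr_mask[1]);
        /* ... DR2/DR3 masks likewise ... */
    }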
libxl: limit bootloader execution in restricted mode
Introduce a timeout for bootloader execution when running in restricted mode.
Allow overriding the default timeout with an environment-provided value.
This is part of XSA-443 / CVE-2023-34325
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
(cherry picked from commit 9c114178ffd700112e91f5ec66cf5151b9c9a8cc)
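A minimal sketch of the override logic, assuming a hypothetical environment
variable name and default value:

    #include <stdlib.h>

    #define BOOTLOADER_TIMEOUT_DEFAULT 120  /* seconds; illustrative */

    static int bootloader_timeout(void)
    {
        /* The variable name here is an assumption for illustration. */
        const char *env = getenv("LIBXL_BOOTLOADER_TIMEOUT");

        return env ? atoi(env) : BOOTLOADER_TIMEOUT_DEFAULT;
    }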
libxl: add support for running bootloader in restricted mode
Much like the device model depriv mode, add the same kind of support for the
bootloader. Such a feature allows passing a UID as a parameter for the
bootloader to run as, together with the bootloader itself taking the necessary
actions to isolate.
Note that the user to run the bootloader as must have the right permissions to
access the guest disk image (in read mode only), and that the bootloader will
be run in non-interactive mode when restricted.
If enabled, bootloader restrict mode will attempt to re-use the user(s) from the
QEMU depriv implementation if no user is provided in the configuration file or
the environment. See docs/features/qemu-deprivilege.pandoc for more
information about how to setup those users.
Bootloader restrict mode is not enabled by default as it requires certain
setup to be done first (setup of the user(s) to use in restrict mode).
This is part of XSA-443 / CVE-2023-34325
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
(cherry picked from commit 1f762642d2cad1a40634e3280361928109d902f1)
Introduce a --runas=<uid> flag to deprivilege pygrub on Linux and *BSDs. It
also implicitly creates a chroot env where it drops a deprivileged forked
process. The chroot itself is cleaned up at the end.
If the --runas arg is present, then pygrub forks, leaving the child to
deprivilege itself, and waiting for it to complete. When the child exits,
the parent performs cleanup and exits with the same error code.
This is roughly what the child does:
1. Initialize libfsimage (this loads every .so in memory so the chroot
can avoid bind-mounting /{,usr}/lib*)
2. Create a temporary empty chroot directory
3. Mount tmpfs in it
4. Bind mount the disk inside, because libfsimage expects a path, not a
file descriptor.
5. Remount the root tmpfs to be stricter (ro,nosuid,nodev)
6. Set RLIMIT_FSIZE to a sensibly high amount (128 MiB)
7. Depriv gid, groups and uid
With this scheme in place, the "output" files are writable (up to
RLIMIT_FSIZE octets) and the exposed filesystem is immutable and contains
the only file we can't easily get rid of (the disk).
If running on Linux, the child process also unshares mount, IPC, and
network namespaces before dropping its privileges.
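A Linux-specific C sketch of the tail of the sequence above (steps 5-7 plus
the namespace unsharing); the path, uid and gid are placeholders:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mount.h>
    #include <sys/resource.h>
    #include <grp.h>
    #include <unistd.h>

    static int drop_privs(const char *root, uid_t uid, gid_t gid)
    {
        const struct rlimit fsz = { 128u << 20, 128u << 20 };  /* step 6 */

        /* Linux only: detach from mount/IPC/network namespaces. */
        if ( unshare(CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWNET) )
            return -1;

        /* Step 5: remount the chroot tmpfs stricter. */
        if ( mount("none", root, NULL,
                   MS_REMOUNT | MS_RDONLY | MS_NOSUID | MS_NODEV, NULL) )
            return -1;

        if ( chroot(root) || chdir("/") )
            return -1;

        if ( setrlimit(RLIMIT_FSIZE, &fsz) )
            return -1;

        /* Step 7: drop gid, supplementary groups, then uid; uid last. */
        if ( setgid(gid) || setgroups(0, NULL) || setuid(uid) )
            return -1;

        return 0;
    }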
tools/libfsimage: Export a new function to preload all plugins
This is work required in order to let pygrub operate in highly deprivileged
chroot mode. This patch adds a function that preloads every plugin, hence
ensuring that, on function exit, every shared library is loaded in memory.
The new "init" function is supposed to be used before depriv, but that's
fine because it's not acting on untrusted data.
This patch allows pygrub to get ahold of every RW file descriptor it needs
early on. A later patch will clamp the filesystem it can access so it can't
obtain any others.
There's a hypercall being issued in order to determine whether PV64 is
supported, but since Xen 4.3 that's always true, so it's not required.
Plus, this way we can avoid mapping the privcmd interface altogether in the
depriv pygrub.
This is part of XSA-443 / CVE-2023-34325
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit f4b504c6170c446e61055cbd388ae4e832a9deca)
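A hedged sketch of what such a preload helper can look like; the real
function name and plugin directory differ:

    #include <dirent.h>
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>

    /* Load every plugin .so now, resolving all symbols, so the chroot
     * set up later needs no access to /{,usr}/lib*. */
    static int preload_all_plugins(const char *dir)
    {
        DIR *d = opendir(dir);
        struct dirent *de;
        char path[4096];

        if ( !d )
            return -1;

        while ( (de = readdir(d)) != NULL )
        {
            if ( !strstr(de->d_name, ".so") )
                continue;

            snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
            if ( !dlopen(path, RTLD_NOW | RTLD_GLOBAL) )
                fprintf(stderr, "preload of %s failed: %s\n", path, dlerror());
        }

        closedir(d);
        return 0;
    }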
libfsimage/xfs: Sanity-check the superblock during mounts
Sanity-check the XFS superblock for well-formedness in the mount handler.
This forces pygrub to abort parsing a potentially malformed filesystem and
ensures the invariants assumed throughout the rest of the code hold.
Also, derive parameters from previously sanitized parameters where possible
(rather than reading them off the superblock).
The code doesn't try to avoid overflowing the end of the disk, because
that's an unlikely and benign error. Parameters used in calculations of
xfs_daddr_t (like the root inode index) aren't in critical need of being
sanitized.
The sanitization of agblklog is basically checking that no obvious
overflows happen on agblklog, and then ensuring agblocks is contained in
the range (2^(sb_agblklog-1), 2^sb_agblklog].
This is part of XSA-443 / CVE-2023-34325
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 620500dd1baf33347dfde5e7fde7cf7fe347da5c)
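A hedged sketch of that agblklog invariant:

    #include <stdbool.h>
    #include <stdint.h>

    /* agblocks must lie in (2^(agblklog-1), 2^agblklog]; reject degenerate
     * exponents up front so the shifts below cannot overflow. */
    static bool agblklog_ok(uint32_t agblocks, uint8_t agblklog)
    {
        if ( agblklog == 0 || agblklog >= 32 )
            return false;

        return agblocks > (UINT32_C(1) << (agblklog - 1)) &&
               agblocks <= (UINT32_C(1) << agblklog);
    }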
Roger Pau Monne [Tue, 13 Jun 2023 13:01:05 +0000 (15:01 +0200)]
iommu/amd-vi: flush IOMMU TLB when flushing the DTE
The caching invalidation guidelines from the AMD-Vi specification (48882—Rev
3.07-PUB—Oct 2022) seem to be misleading on some hardware, as devices will
malfunction (seeing stale DMA mappings) if some fields of the DTE are updated but
the IOMMU TLB is not flushed. This has been observed in practice on AMD
systems. Due to the lack of guidance from the currently published
specification this patch aims to increase the flushing done in order to prevent
device malfunction.
In order to fix this, issue an INVALIDATE_IOMMU_PAGES command from
amd_iommu_flush_device(), flushing all the address space. Note this requires
callers to be adjusted in order to pass the DomID on the DTE previous to the
modification.
Some call sites don't provide a valid DomID to amd_iommu_flush_device() in
order to avoid the flush. That's because the device had address translations
disabled and hence the previous DomID on the DTE is not valid. Note the
current logic relies on the entity disabling address translations to also flush
the TLB of the in use DomID.
Device I/O TLB flushing when ATS is enabled is not covered by the current
change, as ATS usage is not security supported.
This is XSA-442 / CVE-2023-34326
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 5fc98b97084a46884acef9320e643faf40d42212)
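A hedged sketch of the adjusted flush flow, with stand-ins for the real
command-queueing helpers:

    #include <stdint.h>

    #define DOMID_INVALID 0x7ff4u   /* sentinel: no valid previous DomID */

    void queue_invalidate_dte(uint16_t bdf);          /* stand-in */
    void queue_invalidate_all_pages(uint16_t domid);  /* INVALIDATE_IOMMU_PAGES */
    void flush_command_buffer(void);                  /* wait for completion */

    static void amd_iommu_flush_device_sketch(uint16_t bdf, uint16_t prev_domid)
    {
        queue_invalidate_dte(bdf);

        /* Flush whatever the old DomID may still have cached in the IOMMU
         * TLB; callers pass DOMID_INVALID when translation was disabled
         * beforehand and there is nothing stale to flush. */
        if ( prev_domid != DOMID_INVALID )
            queue_invalidate_all_pages(prev_domid);

        flush_command_buffer();
    }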
The function domain_entry_fix() will be initially called to check if the
quota is correct before attempting to commit any nodes. So it would be
possible for the accounting to be temporarily negative. This is the case
in the following sequence:
1) Create 50 nodes
2) Start two transactions
3) Delete all the nodes in each transaction
4) Commit the two transactions
Because the first transaction will have succeeded and updated the
accounting, there is no guarantee that 'd->nbentry + num' will still
be above 0. So the assert() would be triggered.
The assert() was introduced in dbef1f748289 ("tools/xenstore: simplify
and fix per domain node accounting") with the assumption that the
value can't be negative. As this is not true, revert to the original
check but restricted to the path where we don't update. Take the
opportunity to explain the rationale behind the check.
This is CVE-2023-34323 / XSA-440.
Fixes: dbef1f748289 ("tools/xenstore: simplify and fix per domain node accounting") Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Juergen Gross <jgross@suse.com>
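A hedged sketch of the restored check, with xenstored's accounting
structures simplified:

    #include <stdbool.h>

    struct domain_acct { int nbentry; int quota; };

    /* Quota is only enforced on the non-updating check path; a transient
     * negative total while two transactions commit deletions is legal. */
    static bool entry_fix_ok(struct domain_acct *d, int num, bool update)
    {
        int cnt = d->nbentry + num;

        if ( update )
        {
            d->nbentry = cnt;
            return true;
        }

        return cnt <= 0 || cnt <= d->quota;
    }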
Jan Beulich [Wed, 20 Sep 2023 09:33:26 +0000 (10:33 +0100)]
x86/shadow: defer releasing of PV's top-level shadow reference
sh_set_toplevel_shadow() re-pinning the top-level shadow we may be
running on is not enough (and at the same time unnecessary when the
shadow isn't what we're running on): That shadow becomes eligible for
blowing away (from e.g. shadow_prealloc()) immediately after the
paging lock was dropped. Yet it needs to remain valid until the actual
page table switch has occurred.
Propagate up the call chain the shadow entry that needs releasing
eventually, and carry out the release immediately after switching page
tables. Handle update_cr3() failures by switching to idle pagetables.
Note that various further uses of update_cr3() are HVM-only or only act
on paused vCPU-s, in which case sh_set_toplevel_shadow() will not defer
releasing of the reference.
While changing the update_cr3() hook, also convert the "do_locking"
parameter to boolean.
This is CVE-2023-34322 / XSA-438.
Reported-by: Tim Deegan <tim@xen.org> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: George Dunlap <george.dunlap@cloud.com>
(cherry picked from commit fb0ff49fe9f784bfee0370c2a3c5f20e39d7a1cb)
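A hedged sketch of the resulting pattern, with stand-ins for the real types
and helpers:

    struct page_info;
    struct vcpu;

    struct page_info *sh_set_toplevel_shadow(struct vcpu *v); /* old shadow */
    void write_cr3_for(struct vcpu *v);
    void put_shadow_ref(struct page_info *pg);

    static void switch_tables(struct vcpu *v)
    {
        /* Keep the old top-level shadow referenced across the switch... */
        struct page_info *old_shadow = sh_set_toplevel_shadow(v);

        write_cr3_for(v);   /* the actual page-table switch */

        /* ...and only now is it safe for it to be blown away. */
        if ( old_shadow )
            put_shadow_ref(old_shadow);
    }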
Andrew Cooper [Wed, 30 Aug 2023 19:24:25 +0000 (20:24 +0100)]
x86/spec-ctrl: Mitigate the Zen1 DIV leakage
In the Zen1 microarchitecture, there is one divider in the pipeline which
services uops from both threads. In the case of #DE, the latched result from
the previous DIV to execute will be forwarded speculatively.
This is an interesting covert channel that allows two threads to communicate
without any system calls. It also allows userspace to obtain the result of
the most recent DIV instruction executed (even speculatively) in the core,
which can be from a higher privilege context.
Scrub the result from the divider by executing a non-faulting divide. This
needs performing on the exit-to-guest paths, and ist_exit-to-Xen.
Alternatives in IST context are believed safe now that it's done in NMI
context.
This is XSA-439 / CVE-2023-20588.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit b5926c6ecf05c28ee99c6248c42d691ccbf0c315)
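Xen does this in asm on the exit paths; as a hedged C illustration of the
scrub itself:

    /* Execute a non-faulting divide (0:1 / 1) purely to overwrite the
     * result latched in the shared divider by the previous DIV. */
    static inline void scrub_div_latch(void)
    {
        unsigned int q = 1, r = 0;

        asm volatile ( "div %2"
                       : "+a" (q), "+d" (r)
                       : "r" (1u) );
    }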
Andrew Cooper [Fri, 15 Sep 2023 11:13:51 +0000 (12:13 +0100)]
x86/amd: Introduce is_zen{1,2}_uarch() predicates
We already have 3 cases using STIBP as a Zen1/2 heuristic, and are about to
introduce a 4th. Wrap the heuristic into a pair of predicates rather than
opencoding it, and the explanation of the heuristic, at each usage site.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit de1d265001397f308c5c3c5d3ffc30e7ef8c0705)
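As a hedged sketch of the wrapped heuristic (the feature check is a
stand-in; callers have already established Fam17h):

    #include <stdbool.h>

    bool boot_cpu_has_amd_stibp(void);   /* stand-in for the feature check */

    /* On Fam17h, STIBP's presence distinguishes Zen2 from Zen1. */
    #define is_zen1_uarch() (!boot_cpu_has_amd_stibp())
    #define is_zen2_uarch()  (boot_cpu_has_amd_stibp())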
Andrew Cooper [Wed, 13 Sep 2023 12:53:33 +0000 (13:53 +0100)]
x86/spec-ctrl: Issue VERW during IST exit to Xen
There is a corner case where e.g. an NMI hitting an exit-to-guest path after
SPEC_CTRL_EXIT_TO_* would have run the entire NMI handler *after* the VERW
flush to scrub potentially sensitive data from uarch buffers.
In order to compensate, issue VERW when exiting to Xen from an IST entry.
SPEC_CTRL_EXIT_TO_XEN already has two reads of spec_ctrl_flags off the stack,
and we're about to add a third. Load the field into %ebx, and list the
register as clobbered.
%r12 has been arranged to be the ist_exit signal, so add this as an input
dependency and use it to identify when to issue a VERW.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 3ee6066bcd737756b0990d417d94eddc0b0d2585)
Andrew Cooper [Wed, 13 Sep 2023 11:20:12 +0000 (12:20 +0100)]
x86/entry: Track the IST-ness of an entry for the exit paths
Use %r12 to hold an ist_exit boolean. This register is zero elsewhere in the
entry/exit asm, so it only needs setting in the IST path.
As this is subtle and fragile, add check_ist_exit() to be used in debugging
builds to cross-check that the ist_exit boolean matches the entry vector.
Write check_ist_exit() in C, because it's debug-only and the logic is more
complicated than I care to maintain in asm.
For now, we only need to use this signal in the exit-to-Xen path, but some
exit-to-guest paths happen in IST context too. Check the correctness in all
exit paths to avoid the logic bit-rotting.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 21bdc25b05a0f8ab6bc73520a9ca01327360732c)
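A hedged sketch of such a cross-check; the vector numbers are architectural,
while the exact IST set is an assumption here:

    #include <stdbool.h>

    #define X86_EXC_DB   1
    #define X86_EXC_NMI  2
    #define X86_EXC_DF   8
    #define X86_EXC_MC  18

    void bug(void);   /* stand-in for BUG() */

    /* Debug-build check: the ist_exit flag carried in %r12 must agree
     * with whether the entry vector used an IST stack. */
    static void check_ist_exit_sketch(unsigned int vector, bool ist_exit)
    {
        bool ist_entry = (vector == X86_EXC_DB || vector == X86_EXC_NMI ||
                          vector == X86_EXC_DF || vector == X86_EXC_MC);

        if ( ist_entry != ist_exit )
            bug();
    }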
x86/entry: Partially revert IST-exit checks
The patch adding check_ist_exit() didn't account for the fact that
reset_stack_and_jump() is not an ABI-preserving boundary. The IST-ness in
%r12 doesn't survive into the next context, and is a stale value there.
There's no straightforward way to reconstruct the IST-exit-ness on the
exit-to-guest path after a context switch. For now, we only need IST-exit on
the return-to-Xen path.
Fixes: 21bdc25b05a0 ("x86/entry: Track the IST-ness of an entry for the exit paths") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 9b57c800b79b96769ea3dcd6468578fa664d19f9)
Andrew Cooper [Wed, 13 Sep 2023 12:48:16 +0000 (13:48 +0100)]
x86/entry: Adjust restore_all_xen to hold stack_end in %r14
All other SPEC_CTRL_{ENTRY,EXIT}_* helpers hold stack_end in %r14. Adjust it
for consistency.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 7aa28849a1155d856e214e9a80a7e65fffdc3e58)
Andrew Cooper [Wed, 30 Aug 2023 19:11:50 +0000 (20:11 +0100)]
x86/spec-ctrl: Improve all SPEC_CTRL_{ENTER,EXIT}_* comments
... to better explain how they're used.
Doing so highlights that SPEC_CTRL_EXIT_TO_XEN is missing a VERW flush for the
corner case when e.g. an NMI hits late in an exit-to-guest path.
Leave a TODO, which will be addressed in subsequent patches which arrange for
VERW flushing to be safe within SPEC_CTRL_EXIT_TO_XEN.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 45f00557350dc7d0756551069803fc49c29184ca)
Andrew Cooper [Fri, 1 Sep 2023 10:38:44 +0000 (11:38 +0100)]
x86/spec-ctrl: Turn the remaining SPEC_CTRL_{ENTRY,EXIT}_* into asm macros
These have grown more complex over time, with some already having been
converted.
Provide full Requires/Clobbers comments, otherwise missing at this level of
indirection.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 7125429aafb9e3c9c88fc93001fc2300e0ac2cc8)
Andrew Cooper [Tue, 12 Sep 2023 16:03:16 +0000 (17:03 +0100)]
x86/spec-ctrl: Fold DO_SPEC_CTRL_EXIT_TO_XEN into its single user
With the SPEC_CTRL_EXIT_TO_XEN{,_IST} confusion fixed, it's now obvious that
there's only a single EXIT_TO_XEN path. Fold DO_SPEC_CTRL_EXIT_TO_XEN into
SPEC_CTRL_EXIT_TO_XEN to simplify further fixes.
When merging labels, switch the name to .L\@_skip_sc_msr as "skip" on its own
is going to be too generic shortly.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 694bb0f280fd08a4377e36e32b84b5062def4de2)
Andrew Cooper [Tue, 12 Sep 2023 14:06:49 +0000 (15:06 +0100)]
x86/spec-ctrl: Fix confusion between SPEC_CTRL_EXIT_TO_XEN{,_IST}
c/s 3fffaf9c13e9 ("x86/entry: Avoid using alternatives in NMI/#MC paths")
dropped the only user, leaving behind the (incorrect) implication that Xen had
split exit paths.
Delete the unused SPEC_CTRL_EXIT_TO_XEN and rename SPEC_CTRL_EXIT_TO_XEN_IST
to SPEC_CTRL_EXIT_TO_XEN for consistency.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 1c18d73774533a55ba9d1cbee8bdace03efdb5e7)
Jan Beulich [Wed, 23 Aug 2023 07:26:36 +0000 (09:26 +0200)]
x86/AMD: extend Zenbleed check to models "good" ucode isn't known for
Reportedly the AMD Custom APU 0405 found on SteamDeck, models 0x90 and
0x91 (quoting the respective Linux commit), is similarly affected. Put
another instance of our Zen1 vs Zen2 distinction checks in
amd_check_zenbleed(), forcing use of the chickenbit irrespective of
ucode version (building upon real hardware never surfacing a version of
0xffffffff).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit 145a69c0944ac70cfcf9d247c85dee9e99d9d302)
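A hedged sketch of the shape of the check, with the known-good revision
table elided:

    #include <stdbool.h>
    #include <stdint.h>

    uint32_t known_good_ucode_rev(unsigned int model);   /* table elided */

    static bool needs_zenbleed_chickenbit(unsigned int model,
                                          uint32_t ucode_rev)
    {
        switch ( model )
        {
        case 0x90: case 0x91:
            /* Custom APU 0405: no "good" ucode known, always use the
             * chickenbit.  (Real hardware never reports 0xffffffff.) */
            return true;
        default:
            return ucode_rev < known_good_ucode_rev(model);
        }
    }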
xen/arm: page: Handle cache flush of an element at the top of the address space
The region that needs to be cleaned/invalidated may be at the top
of the address space. This means that 'end' (i.e. 'p + size') will
be 0 and therefore nothing will be cleaned/invalidated as the check
in the loop will always be false.
On Arm64, we only support up to 48-bit virtual address space, so this is
not a concern there. However, for 32-bit,
the mapcache is using the last 2GB of the address space. Therefore
we may not clean/invalidate properly some pages. This could lead
to memory corruption or data leakage (the scrubbed value may
still sit in the cache while the guest could read the memory directly
and therefore read the old content).
Rework invalidate_dcache_va_range(), clean_dcache_va_range(),
clean_and_invalidate_dcache_va_range() to handle a cache flush
with an element at the top of the address space.
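A hedged C rendition of an overflow-safe loop shape (the real code operates
per cacheline in asm; the line size is illustrative):

    #define CACHE_LINE 64ul   /* illustrative */

    void clean_line(const void *va);   /* stand-in for the cache-op asm */

    static void clean_dcache_va_range_sketch(const void *p, unsigned long size)
    {
        unsigned long va = (unsigned long)p & ~(CACHE_LINE - 1);
        unsigned long n  = ((unsigned long)p - va + size + CACHE_LINE - 1)
                           / CACHE_LINE;

        /* Count lines instead of comparing against 'p + size', which is 0
         * for a region ending at the top of the address space. */
        for ( ; n; n--, va += CACHE_LINE )
            clean_line((const void *)va);
    }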
x86/irq: fix reporting of spurious i8259 interrupts
The return value of bogus_8259A_irq() is wrong: the function will
return `true` when the IRQ is real and `false` when it's a spurious
IRQ. This causes the "No irq handler for vector ..." message in
do_IRQ() to be printed for spurious i8259 interrupts which is not
intended (and not helpful).
Fix by inverting the return value of bogus_8259A_irq().
Fixes: 132906348a14 ('x86/i8259: Handle bogus spurious interrupts more quietly') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 709f6c8ce6422475c372e67507606170a31ccb65
master date: 2023-08-30 10:03:53 +0200
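The fix is a one-line inversion; a hedged sketch, with the inner helper
name an assumption:

    #include <stdbool.h>

    bool i8259_irq_is_real(unsigned int irq);   /* hypothetical helper */

    /* Now true exactly when the IRQ is spurious, matching the name. */
    static bool bogus_8259A_irq(unsigned int irq)
    {
        return !i8259_irq_is_real(irq);
    }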
Andrew Cooper [Tue, 5 Sep 2023 06:53:31 +0000 (08:53 +0200)]
x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering guest"
At the time of XSA-170, the x86 instruction emulator was genuinely broken. It
would load arbitrary values into %rip, and putting a check here probably was
the best stopgap security fix. It should have been reverted following c/s
81d3a0b26c1 ("x86emul: limit-check branch targets"), which corrected the emulator
behaviour.
However, everyone involved in XSA-170, myself included, failed to read the SDM
correctly. On the subject of %rip consistency checks, the SDM stated:
If the processor supports N < 64 linear-address bits, bits 63:N must be
identical
A non-canonical %rip (and SSP more recently) is an explicitly legal state in
x86, and the VMEntry consistency checks are intentionally off-by-one from a
regular canonical check.
The consequence of this bug is that Xen will currently take a legal x86 state
which would successfully VMEnter, and corrupt it into having non-architectural
behaviour.
Furthermore, in the time this bugfix has been pending in public, I
successfully persuaded Intel to clarify the SDM, adding the following
clarification:
The guest RIP value is not required to be canonical; the value of bit N-1
may differ from that of bit N.
Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 10c83bb0f5d158d101d983883741b76f927e54a3
master date: 2023-08-23 18:44:59 +0100
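The off-by-one relationship between the two checks, as hedged C predicates
for N = 48:

    #include <stdbool.h>
    #include <stdint.h>

    #define VADDR_BITS 48

    /* Canonical: bits 63:47 must all equal bit 47. */
    static bool is_canonical(uint64_t addr)
    {
        int64_t top = (int64_t)addr >> (VADDR_BITS - 1);
        return top == 0 || top == -1;
    }

    /* VMEntry consistency: only bits 63:48 must be identical, so a
     * non-canonical %rip with bit 47 differing is legal to load. */
    static bool vmentry_rip_ok(uint64_t rip)
    {
        int64_t top = (int64_t)rip >> VADDR_BITS;
        return top == 0 || top == -1;
    }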