Jan Beulich [Thu, 15 Apr 2021 11:43:51 +0000 (13:43 +0200)]
x86: avoid building COMPAT code when !HVM && !PV32
It was probably a mistake to, over time, drop various CONFIG_COMPAT
conditionals from x86-specific code, as we now have a build
configuration again where we'd prefer this to be unset. Arrange for
CONFIG_COMPAT to actually be off in this case, dealing with fallout.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Jan Beulich [Thu, 15 Apr 2021 11:35:32 +0000 (13:35 +0200)]
x86: slim down hypercall handling when !PV32
In such a build, various compat handlers aren't needed. Don't
reference them from the hypercall table, and compile out those which
aren't needed for HVM. Also compile out switch_compat(), which has no
purpose in such a build.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Jan Beulich [Thu, 15 Apr 2021 11:34:29 +0000 (13:34 +0200)]
x86: don't build unused entry code when !PV32
Except for the initial part of cstar_enter, compat/entry.S is all dead
code in this case. Further, along the lines of the PV conditionals we
already have in entry.S, make code PV32-conditional there too (to a
fair part because this code actually references compat/entry.S).
This has the side effect of moving the tail part (now at compat_syscall)
of the code out of .text.entry (in line with e.g. compat_sysenter).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Andrew Cooper [Fri, 2 Apr 2021 13:10:25 +0000 (14:10 +0100)]
x86/cpuid: Advertise no-lmsl unilaterally to hvm guests
While part of the original AMD64 spec, Long Mode Segment Limit was a feature
not picked up by Intel, and therefore didn't see much adoption in software.
AMD have finally dropped the feature from hardware, and allocated a CPUID bit
to indicate its absence.
Xen has never supported the feature for guests, even when running on capable
hardware, so advertise the feature's absence unilaterally.
There is nothing specifically wrong with exposing this bit to PV guests, but
the PV ABI doesn't include a working concept of MSR_EFER in the first place,
so exposing it to PV guests would be out-of-place.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 13 Apr 2021 08:18:08 +0000 (10:18 +0200)]
x86/EPT: minor local variable adjustment in ept_set_entry()
Not having direct_mmio (used only once anyway) as a local variable gets
the epte_get_entry_emt() invocation here in better sync with the other
ones. While at it also reduce ipat's scope.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Tue, 13 Apr 2021 08:14:23 +0000 (10:14 +0200)]
VT-d: improve save/restore of registers across S3
The static allocation of the save space is not only very inefficient
(most of the array slots won't ever get used), but is also the sole
reason for a build-time upper bound on the number of IOMMUs. Introduce
a structure containing just the one needed field we can't (easily)
restore from other in-memory state, and allocate the respective
array dynamically.
Take the opportunity to make the FEUADDR write dependent upon
x2apic_enabled, as is already the case in dma_msi_set_affinity().
Also alter properties of nr_iommus: static, unsigned, and __initdata.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Mon, 12 Apr 2021 10:37:19 +0000 (12:37 +0200)]
x86/shadow: adjust is_pv_*() checks
To cover for "x86: correct is_pv_domain() when !CONFIG_PV" (or any other
change along those lines) we should prefer is_hvm_*(), as it may become
a build time constant while is_pv_*() generally won't.
Also when a domain pointer is in scope, prefer is_*_domain() over
is_*_vcpu().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:34:04 +0000 (12:34 +0200)]
x86/shadow: only 4-level guest code needs building when !HVM
In order to limit #ifdef-ary, provide "stub" #define-s for
SH_type_{l1,fl1,l2}_{32,pae}_shadow and SHF_{L1,FL1,L2}_{32,PAE}.
The change in shadow_vcpu_init() is necessary to cover for "x86: correct
is_pv_domain() when !CONFIG_PV" (or any other change along those lines)
- we should only rely on is_hvm_*() to become a build time constant.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:32:50 +0000 (12:32 +0200)]
x86/shadow: SH_type_l2h_shadow is PV-only
..., i.e. being used only with 4 guest paging levels. Drop its L2/PAE
alias and adjust / drop conditionals. Use >= 4 where touching them
anyway, in preparation for 5-level paging.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:32:18 +0000 (12:32 +0200)]
x86/shadow: don't open-code SHF_* shorthands
Use SHF_L1_ANY, SHF_32, SHF_PAE, as well as SHF_64, and introduce
SHF_FL1_ANY.
Note that in shadow_audit_tables() this has the effect of no longer
(I assume mistakenly, or else I don't see why the respective callback
table entry isn't NULL) excluding SHF_L2H_64.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:31:19 +0000 (12:31 +0200)]
x86/shadow: move shadow_set_l<N>e() to their own source file
The few GUEST_PAGING_LEVELS dependencies (of shadow_set_l2e() only) can
be easily expressed by function parameters; I suppose the extra indirect
call is acceptable for the increasingly little used 32-bit non-PAE case.
This way shadow_set_l[12]e(), each of which compiles to almost 1k of
code, need building just once.
The implication is the need for some "relaxation" in types.h: The
underlying PTE types don't vary anymore (and aren't expected to down the
road), so they as well as some basic helpers can be exposed even in the
new, artificial GUEST_PAGING_LEVELS == 0 case.
Almost pure code movement - exceptions are the conversion of
"#if GUEST_PAGING_LEVELS == 2" to runtime conditionals and style
corrections (including to avoid open-coding mfn_to_maddr() and
PAGE_OFFSET()).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:30:13 +0000 (12:30 +0200)]
x86/shadow: polish shadow_write_entries()
First of all, avoid the initial dummy write: Try to write the actual
new value instead, and start the loop from 1 if this was successful.
Further, drop safe_write_entry() and use write_atomic() instead. This
eliminates the need for the BUILD_BUG_ON() there at the same time.
Then
- use const and unsigned,
- drop a redundant NULL check,
- don't open-code PAGE_OFFSET() and IS_ALIGNED(),
- adjust comment style.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:26:54 +0000 (12:26 +0200)]
gunzip: drop INIT{,DATA} and STATIC
There's no need for the extra abstraction.
Requested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Jan Beulich [Mon, 12 Apr 2021 10:26:18 +0000 (12:26 +0200)]
libxenguest: simplify kernel decompression
In all cases the kernel build makes available the uncompressed size in
the final 4 bytes of the bzImage payload. Utilize this to avoid
repeated realloc()ing of the output buffer.
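The trailing-length convention can be illustrated with a minimal sketch; the helper name is invented for illustration and is not libxenguest's actual API:

```c
#include <stddef.h>
#include <stdint.h>

/* The kernel build stores the uncompressed size, little-endian, in the
 * final 4 bytes of the bzImage payload.  Reading it up front lets the
 * output buffer be allocated once, instead of being grown by repeated
 * realloc() calls.  (Illustrative helper, not libxenguest's API.) */
static uint32_t bzimage_output_size(const uint8_t *payload, size_t size)
{
    const uint8_t *p = payload + size - 4;

    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```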
As a side effect this also addresses the previous mistaken return of 0
(success) from xc_try_{bzip2,lzma,xz}_decode() in case
xc_dom_register_external() would have failed.
As another side effect this also addresses the first error path of
_xc_try_lzma_decode() previously bypassing lzma_end().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
Andrew Cooper [Sat, 16 Jan 2021 16:09:10 +0000 (16:09 +0000)]
xen/xsm: Improve alloc/free of evtchn buckets
Currently, flask_alloc_security_evtchn() is called in loops of
64 (EVTCHNS_PER_BUCKET), which for non-dummy implementations is a function
pointer call even in the no-op case; the non-no-op case only sets a single
constant and doesn't actually fail.
Spectre v2 protections have made function pointer calls far more expensive, and
64 back-to-back calls is a waste. Rework the APIs to pass the size of the
bucket instead, and call them once.
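A minimal model of the reworked hook shape, with invented names (not Xen's actual XSM API), shows why one call per bucket suffices:

```c
/* Hypothetical sketch: the old API was one indirect call per event
 * channel, i.e. 64 calls per bucket; the reworked API takes the bucket
 * size and is invoked once.  Names are illustrative, not Xen's. */
#define EVTCHNS_PER_BUCKET 64

struct evtchn { unsigned int security_bits; };

static unsigned int nr_hook_calls;

/* Reworked hook: a single call covers the whole bucket. */
static int xsm_evtchn_alloc(struct evtchn chn[], unsigned int nr)
{
    nr_hook_calls++;
    for ( unsigned int i = 0; i < nr; i++ )
        chn[i].security_bits = 0;  /* no-op-ish case: a constant, cannot fail */
    return 0;
}
```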
No practical change, but {alloc,free}_evtchn_bucket() should be rather more
efficient now.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>
xen/gunzip: Fix build with clang after 33bc2a8495f7
The compilation will fail when building Xen with clang and
CONFIG_DEBUG=y:
make[4]: Leaving directory '/oss/xen/xen/common/libelf'
INIT_O gunzip.init.o
Error: size of gunzip.o:.text is 0x00000019
This is because the function init_allocator() will not be inlined
and is not part of the init section.
Fix it by marking init_allocator() with INIT.
Fixes: 33bc2a8495f7 ("xen/gunzip: Allow perform_gunzip() to be called multiple times") Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Julien Grall <jgrall@amazon.com> Acked-by: Jan Beulich <jbeulich@suse.com>
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
To avoid the need to add a cast, do away with the only forward-declared
struct hypfs_dyndata.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Juergen Gross <jgross@suse.com>
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Drop hvm_irq_size(), which exists for just this purpose.
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Boris Ostrovsky [Fri, 9 Apr 2021 07:22:04 +0000 (09:22 +0200)]
x86/vpt: simplify locking argument to write_{,un}lock
Make pt_adjust_vcpu() call write_{,un}lock with less indirection, like
create_periodic_time() already does.
Requested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Boris Ostrovsky [Fri, 9 Apr 2021 07:21:27 +0000 (09:21 +0200)]
x86/vpt: do not take pt_migrate rwlock in some cases
Commit 8e76aef72820 ("x86/vpt: fix race when migrating timers between
vCPUs") addressed XSA-336 by introducing a per-domain rwlock that was
intended to protect periodic timer during VCPU migration. Since such
migration is an infrequent event no performance impact was expected.
Unfortunately this turned out not to be the case: on a fairly large
guest (92 VCPUs) we've observed as much as 40% TPCC performance
regression with some guest kernels. Further investigation pointed to the
pt_migrate read lock taken in pt_update_irq() as the largest contributor
to this regression. With a large number of vCPUs and a large number of
VMEXITs (from which pt_update_irq() is always called), the update of an
atomic in read_lock() is thought to be the main cause.
Stephen Brennan analyzed the locking pattern and classified lock users as
follows:
1. Functions which read (maybe write) all periodic_time instances attached
to a particular vCPU. These are functions which use pt_vcpu_lock() such
as pt_restore_timer(), pt_save_timer(), etc.
2. Functions which want to modify a particular periodic_time object.
These functions lock whichever vCPU the periodic_time is attached to, but
since the vCPU could be modified without holding any lock, they are
vulnerable to XSA-336. Functions in this group use pt_lock(), such as
pt_timer_fn() or destroy_periodic_time().
3. Functions which not only want to modify the periodic_time, but also
would like to modify the =vcpu= fields. These are create_periodic_time()
or pt_adjust_vcpu(). They create XSA-336 conditions for group 2, but we
can't simply hold 2 vcpu locks due to the deadlock risk.
Roger then pointed out that group 1 functions don't really need to hold
the pt_migrate rwlock and that instead groups 2 and 3 should hold per-vcpu
lock whenever they modify per-vcpu timer lists.
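The per-vCPU locking rule for groups 2 and 3 can be modelled with a minimal user-space sketch (names and types are illustrative, not Xen's): lock whichever vCPU the timer is attached to, then re-check the attachment, since it may have changed while waiting for the lock.

```c
#include <pthread.h>

/* Minimal model of the group-2 pattern: acquire the per-vCPU lock of
 * the timer's current vCPU, and retry if the attachment changed in the
 * meantime.  With this, group-1 readers no longer need the pt_migrate
 * rwlock at all. */
struct vcpu { pthread_mutex_t pt_lock; };
struct periodic_time { struct vcpu *vcpu; };

static struct vcpu *pt_lock(struct periodic_time *pt)
{
    for ( ;; )
    {
        struct vcpu *v = pt->vcpu;

        pthread_mutex_lock(&v->pt_lock);
        if ( v == pt->vcpu )
            return v;                       /* still attached: safe */
        pthread_mutex_unlock(&v->pt_lock);  /* moved meanwhile: retry */
    }
}
```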
Suggested-by: Stephen Brennan <stephen.s.brennan@oracle.com> Suggested-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Stephen Brennan <stephen.s.brennan@oracle.com>
The for loop in unmap_domain_pirq() is unnecessarily complicated, with
several places where the index is incremented and different exit
conditions spread throughout the loop body.
Simplify it by looping over each possible PIRQ using the for loop
syntax, and remove all possible in-loop exit points.
No functional change intended.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 9 Apr 2021 07:20:15 +0000 (09:20 +0200)]
x86/shadow: encode full GFN in magic MMIO entries
Since we don't need to encode all of the PTE flags, we have enough bits
in the shadow entry to store the full GFN. Limit use of literal numbers
a little and instead derive some of the involved values. Sanity-check
the result via BUILD_BUG_ON()s.
This then allows dropping from sh_l1e_mmio() again the guarding against
too large GFNs. It needs replacing by an L1TF safety check though, which
in turn requires exposing cpu_has_bug_l1tf.
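The encode/decode idea can be sketched as follows; the constants here are invented for illustration and are not Xen's actual shadow-entry layout:

```c
#include <stdint.h>

/* Illustrative layout only: a fixed "magic" marker occupies the top PTE
 * bits, leaving enough low bits to hold the full GFN.  The fit of the
 * two fields is checked at build time (the array type below fails to
 * compile if marker and GFN bits overlap, mimicking BUILD_BUG_ON()). */
#define MAGIC_SHIFT  52
#define MAGIC_MARK   (0x7ffULL << MAGIC_SHIFT)
#define GFN_MASK     ((1ULL << MAGIC_SHIFT) - 1)

typedef char build_check[(MAGIC_MARK & GFN_MASK) == 0 ? 1 : -1];

static uint64_t sh_mmio_encode(uint64_t gfn)
{
    return MAGIC_MARK | (gfn & GFN_MASK);
}

static uint64_t sh_mmio_decode(uint64_t sl1e)
{
    return sl1e & GFN_MASK;
}
```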
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 9 Apr 2021 07:19:18 +0000 (09:19 +0200)]
x86/PV32: avoid TLB flushing after mod_l3_entry()
32-bit guests may not depend upon the side effect of using ordinary
4-level paging when running on a 64-bit hypervisor. For L3 entry updates
to take effect, they have to use a CR3 reload. Therefore there's no need
to issue a paging structure invalidating TLB flush in this case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:18:51 +0000 (09:18 +0200)]
x86/PV: restrict TLB flushing after mod_l[234]_entry()
Just like we avoid to invoke remote root pt flushes when all uses of an
L4 table can be accounted for locally, the same can be done for all of
L[234] for the linear pt flush when the table is a "free floating" one,
i.e. it is pinned but not hooked up anywhere. While this situation
doesn't occur very often, it can be observed.
Since this breaks one of the implications of the XSA-286 fix, drop the
flush_root_pt_local variable again and set ->root_pgt_changed directly,
just like it was before that change.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:18:17 +0000 (09:18 +0200)]
x86/PV: _PAGE_RW changes may take fast path of mod_l[234]_entry()
The only time _PAGE_RW matters when validating an L2 or higher entry is
when a linear page table is tried to be installed (see the comment ahead
of define_get_linear_pagetable()). Therefore when we disallow such at
build time, we can allow _PAGE_RW changes to take the fast paths there.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:17:04 +0000 (09:17 +0200)]
x86: limit amount of INT3 in IND_THUNK_*
There's no point in having every replacement variant also specify the
INT3 - just have it once in the base macro. When patching, NOPs will get
inserted, which are fine to speculate through (until reaching the INT3).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:16:22 +0000 (09:16 +0200)]
x86: guard against straight-line speculation past RET
Under certain conditions CPUs can speculate into the instruction stream
past a RET instruction. Guard against this just like 3b7dab93f240
("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
did - by inserting an "INT $3" insn. It's merely the mechanics of how to
achieve this that differ: A set of macros gets introduced to post-
process RET insns issued by the compiler (or living in assembly files).
Unfortunately for clang this requires further features their built-in
assembler doesn't support: We need to be able to override insn mnemonics
produced by the compiler (which may be impossible, if internally
assembly mnemonics never get generated).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:15:38 +0000 (09:15 +0200)]
x86/PV: make post-migration page state consistent
When a page table page gets de-validated, its type reference count drops
to zero (and PGT_validated gets cleared), but its type remains intact.
XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
such pages. An intermediate write to such a page via e.g.
MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
return. In libxc the decision which pages to normalize / localize
depends solely on the type returned from the domctl. As a result without
further precautions the guest won't be able to tell whether such a page
has had its (apparent) PTE entries transitioned to the new MFNs.
Add a check of PGT_validated, thus consistently avoiding normalization /
localization in the tool stack.
Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead of
open-coding it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:14:58 +0000 (09:14 +0200)]
libxg: don't use max policy in xc_cpuid_xend_policy()
Using max undermines the separation between default and max. For
example, turning off AVX512F on an MPX-capable system silently turns on
MPX, despite this not being part of the default policy anymore. Since
the information is used only for determining what to convert 'x' to (but
not to e.g. validate '1' settings), the effect of this change is
identical for guests with (suitable) "cpuid=" settings to that of the
changes separating default from max and then converting (e.g.) MPX from
being part of default to only being part of max for guests without
(affected) "cpuid=" settings.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:12:51 +0000 (09:12 +0200)]
x86: refine guest_mode()
The 2nd of the assertions as well as the macro's return value have been
assuming we're on the primary stack. While for most IST exceptions we
switch back to the main one when user mode was interrupted, for #DF we
intentionally never do, and hence a #DF actually triggering on a user
mode insn (which then is still a Xen bug) would in turn trigger this
assertion, rather than cleanly logging state.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Julien Grall [Thu, 21 Jan 2021 11:12:00 +0000 (11:12 +0000)]
xen/page_alloc: Don't hold the heap_lock when clearing PGC_need_scrub
Currently, the heap_lock is held when clearing PGC_need_scrub in
alloc_heap_pages(). However, this is unnecessary because the only caller
(mark_page_offline()) that can concurrently modify the count_info is
using cmpxchg() in a loop.
Therefore, rework the code to avoid holding the heap_lock and use
test_and_clear_bit() instead.
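A user-space model (names illustrative, not Xen's) shows why the lock is unnecessary: both parties update count_info with atomic read-modify-write operations, so clearing one flag with an atomic AND (the effect of test_and_clear_bit()) cannot lose either side's update against a concurrent cmpxchg() loop.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define PGC_need_scrub  (1UL << 0)

/* Model of test_and_clear_bit(): atomically clear the flag and report
 * whether it was previously set, leaving all other bits intact. */
static bool clear_need_scrub(_Atomic unsigned long *count_info)
{
    unsigned long old = atomic_fetch_and(count_info, ~PGC_need_scrub);

    return old & PGC_need_scrub;
}
```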
Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 7 Apr 2021 10:24:45 +0000 (12:24 +0200)]
fix for_each_cpu() again for NR_CPUS=1
Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
caused quite a bit of fallout with gcc10, e.g. (there are at least two
more similar ones, and I didn't bother trying to find them all):
In file included from .../xen/include/xen/config.h:13,
from <command-line>:
core_parking.c: In function ‘core_parking_power’:
.../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
.../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
141 | (typeof(ptr)) (__ptr + (off)); })
| ^~~
core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
| ^~~~~~~
In file included from .../xen/include/xen/percpu.h:4,
from .../xen/include/asm/msr.h:7,
from .../xen/include/asm/time.h:5,
from .../xen/include/xen/time.h:76,
from .../xen/include/xen/spinlock.h:4,
from .../xen/include/xen/cpu.h:5,
from core_parking.c:19:
.../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
6 | extern unsigned long __per_cpu_offset[NR_CPUS];
| ^~~~~~~~~~~~~~~~
One of the further errors even went as far as claiming that an array
index (range) of [0, 0] was outside the bounds of a [1] array, so
something fishy is pretty clearly going on there.
The compiler apparently wants to be able to see that the loop isn't
really a loop in order to avoid triggering such warnings, yet what
exactly makes it consider the loop exit condition constant and within
the [0, 1] range isn't obvious - using ((mask)->bits[0] & 1) instead of
cpumask_test_cpu() for example did _not_ help.
Re-instate a special form of for_each_cpu(), experimentally "proven" to
avoid the diagnostics.
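The shape of the special form can be sketched like this (simplified from Xen's macro; the non-trivial branch is elided to the cpumask-based idiom):

```c
/* For NR_CPUS == 1 the loop degenerates into a single iteration over
 * CPU 0, so the compiler can see that per-CPU array accesses never go
 * out of bounds of a one-element array.  (Sketch only -- the generic
 * branch is a simplification of the real cpumask iteration.) */
#define NR_CPUS 1

#if NR_CPUS == 1
# define for_each_cpu(cpu, mask) for ( (cpu) = 0; (cpu) < 1; ++(cpu) )
#else
# define for_each_cpu(cpu, mask)          \
    for ( (cpu) = cpumask_first(mask);    \
          (cpu) < NR_CPUS;                \
          (cpu) = cpumask_next(cpu, mask) )
#endif
```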
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
xen/x86: shadow: The return type of sh_audit_flags() should be const
The function sh_audit_flags() returns pointers to literal strings. They
should not be modified, so the return type is now const and this is
propagated to the callers.
Take the opportunity to fix the coding style in the declaration of
sh_audit_flags.
Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
Julien Grall [Wed, 3 Mar 2021 19:27:56 +0000 (19:27 +0000)]
xen/gunzip: Allow perform_gunzip() to be called multiple times
Currently perform_gunzip() can only be called once because the internal
state (e.g. the allocator) is not fully re-initialized. This works fine
if you are only booting dom0, but it will break when booting multiple
domains via dom0less with compressed kernel images.
This can be resolved by re-initializing bytes_out, malloc_ptr,
malloc_count every time perform_gunzip() is called.
Note the latter is only re-initialized for hardening purposes, as there
is no guarantee that every malloc() is followed by a free() (it should
be, in theory!).
Take the opportunity to check the return of alloc_heap_pages() to return
an error rather than dereferencing a NULL pointer later on failure.
Reported-by: Charles Chiou <cchiou@ambarella.com> Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
George Dunlap [Thu, 1 Apr 2021 13:34:04 +0000 (14:34 +0100)]
CHANGELOG.md: irq-max-guests
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
--- CC: Igor Druzhinin <igor.druzhinin@citrix.com> CC: Jan Beulich <jbeulich@suse.com> CC: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Thu, 1 Apr 2021 13:18:36 +0000 (14:18 +0100)]
CHANGELOG.md: Various new entries, mostly x86
...Grouped mostly by submitter / maintainer
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
--- CC: Ian Jackson <ian.jackson@citrix.com> CC: Andrew Cooper <andrew.cooper3@citrix.com> CC: Jan Beulich <jbeulich@suse.com> CC: Roger Pau Monne <roger.pau@citrix.com>
George Dunlap [Thu, 1 Apr 2021 13:06:36 +0000 (14:06 +0100)]
CHANGELOG.md: Some additional affordances in various xl subcommands
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
--- CC: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Thu, 1 Apr 2021 12:54:10 +0000 (13:54 +0100)]
CHANGELOG.md: xl PCI configuration doc, xenstore MTU entries
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
--- CC: Paul Durrant <paul@xen.org> CC: Ian Jackson <ian.jackson@citrix.com> CC: Wei Liu <wl@xen.org>
George Dunlap [Thu, 1 Apr 2021 12:51:48 +0000 (13:51 +0100)]
CHANGELOG.md: Mention XEN_SCRIPT_DIR
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Ian Jackson <iwj@xenproject.org> Release-acked-by: Ian Jackson <iwj@xenproject.org>
---
CC: Olaf Hering <olaf@aepfle.de> CC: Ian Jackson <iwj@xenproject.org>
Jan Beulich [Tue, 6 Apr 2021 14:17:42 +0000 (16:17 +0200)]
common: map_vcpu_info() cosmetics
Use ENXIO instead of EINVAL to cover the two cases of the address not
satisfying the requirements. This will make an issue here better stand
out at the call site.
Also add a missing compat-mode related size check: If the sizes
differed, other code in the function would need changing. Accompany this
by a change to the initial sizeof() expression, tying it to the type of
the variable we're actually after (matching e.g. the alignof() added by
XSA-327).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Julien Grall <jgrall@amazon.com>
This patch fixes the stream match conflict issue when two devices have
the same stream ID.
The only differences from the original Linux patch are as follows:
1. A spinlock is used in place of a mutex when attaching a device to the
SMMU via the arm_smmu_master_alloc_smes(..) function call. Replacing the
mutex with a spinlock is fine here, as we are configuring the hardware
via registers and it is very fast.
2. The iommu_group_alloc(..) call in arm_smmu_add_device(..) is moved
from the start of the function to the end.
Original commit message:
iommu/arm-smmu: Intelligent SMR allocation
Stream Match Registers are one of the more awkward parts of the SMMUv2
architecture; there are typically never enough to assign one to each
stream ID in the system, and configuring them such that a single ID
matches multiple entries is catastrophically bad - at best, every
transaction raises a global fault; at worst, they go *somewhere*.
To address the former issue, we can mask ID bits such that a single
register may be used to match multiple IDs belonging to the same device
or group, but doing so also heightens the risk of the latter problem
(which can be nasty to debug).
Tackle both problems at once by replacing the simple bitmap allocator
with something much cleverer. Now that we have convenient in-memory
representations of the stream mapping table, it becomes straightforward
to properly validate new SMR entries against the current state, opening
the door to arbitrary masking and SMR sharing.
Another feature which falls out of this is that with IDs shared by
separate devices being automatically accounted for, simply associating a
group pointer with the S2CR offers appropriate group allocation almost
for free, so hook that up in the process.
This patch is the preparatory work to fix the stream match conflict
when two devices have the same stream-id.
Original commit message:
iommu/arm-smmu: Add a stream map entry iterator
We iterate over the SMEs associated with a master config quite a lot in
various places, and are about to do so even more. Let's wrap the idiom
in a handy iterator macro before the repetition gets out of hand.
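The wrapped idiom looks roughly like this sketch; the struct fields and macro name follow the commit's description, but the layout is invented for illustration and is not the driver's actual code:

```c
/* Walk the stream-map-entry indices recorded in a master's config.
 * Like the real iterator, the index expression is evaluated as part of
 * the loop condition so the body sees each smendx value in turn. */
struct master_cfg {
    unsigned int num_streamids;
    int smendx[8];          /* indices into the SMMU's SMR/S2CR tables */
};

#define for_each_cfg_sme(cfg, i, idx)                                      \
    for ( (i) = 0;                                                         \
          (idx) = (cfg)->smendx[(i)], (i) < (cfg)->num_streamids;          \
          ++(i) )
```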
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Rahul Singh <rahul.singh@arm.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Julien Grall <jgrall@amazon.com>
This patch is the preparatory work to fix the stream match conflict
when two devices have the same stream-id.
Original commit message:
iommu/arm-smmu: Keep track of S2CR state
Making S2CRs first-class citizens within the driver with a high-level
representation of their state offers a neat solution to a few problems:
Firstly, the information about which context a device's stream IDs are
associated with is already present by necessity in the S2CR. With that
state easily accessible we can refer directly to it and obviate the need
to track an IOMMU domain in each device's archdata (its earlier purpose
of enforcing correct attachment of multi-device groups now being handled
by the IOMMU core itself).
Secondly, the core API now deprecates explicit domain detach and expects
domain attach to move devices smoothly from one domain to another; for
SMMUv2, this notion maps directly to simply rewriting the S2CRs assigned
to the device. By giving the driver a suitable abstraction of those
S2CRs to work with, we can massively reduce the overhead of the current
heavy-handed "detach, free resources, reallocate resources, attach"
approach.
Thirdly, making the software state hardware-shaped and attached to the
SMMU instance once again makes suspend/resume of this register group
that much simpler to implement in future.
This patch is the preparatory work to fix the stream match conflict
when two devices have the same stream-id.
Original commit message:
iommu/arm-smmu: Consolidate stream map entry state
In order to consider SMR masking, we really want to be able to validate
ID/mask pairs against existing SMR contents to prevent stream match
conflicts, which at best would cause transactions to fault unexpectedly,
and at worst lead to silent unpredictable behaviour. With our SMMU
instance data holding only an allocator bitmap, and the SMR values
themselves scattered across master configs hanging off devices which we
may have no way of finding, there's essentially no way short of digging
everything back out of the hardware. Similarly, the thought of power
management ops to support suspend/resume faces the exact same problem.
By massaging the software state into a closer shape to the underlying
hardware, everything comes together quite nicely; the allocator and the
high-level view of the data become a single centralised state which we
can easily keep track of, and to which any updates can be validated in
full before being synchronised to the hardware itself.
This patch is the preparatory work to fix the stream match conflict
when two devices have the same stream-id.
Original commit message:
iommu/arm-smmu: Handle stream IDs more dynamically
Rather than assuming fixed worst-case values for stream IDs and SMR
masks, keep track of whatever implemented bits the hardware actually
reports. This also obviates the slightly questionable validation of SMR
fields in isolation - rather than aborting the whole SMMU probe for a
hardware configuration which is still architecturally valid, we can
simply refuse masters later if they try to claim an unrepresentable ID
or mask (which almost certainly implies a DT error anyway).
Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Rahul Singh <rahul.singh@arm.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com> Acked-by: Julien Grall <jgrall@amazon.com>
Michal Orzel [Mon, 22 Mar 2021 08:17:15 +0000 (09:17 +0100)]
arm: Add Kconfig entry to select CONFIG_DTB_FILE
Currently, in order to link an existing DTB into the Xen image, we need
to either specify the option CONFIG_DTB_FILE on the command line or
manually add it to .config.
Add a Kconfig entry, CONFIG_DTB_FILE, to be able to provide the path to
the DTB we want to embed into the Xen image. If no path is provided, the
DTB will not be embedded.
Remove the line: AFLAGS-y += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
as it is not needed since Kconfig will define it in a header
with all the other config options.
Move the definition of _sdtb into dtb.S to prevent defining it when
there is no reference to it, or when someone protects _sdtb with
#ifdef rather than with .ifnes; in the latter case we would otherwise
get a compiler error.
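The resulting Kconfig entry looks roughly like this (the prompt and help wording here are approximate, not a verbatim copy of the tree):

```kconfig
config DTB_FILE
	string "Absolute path to device tree blob"
	depends on HAS_DEVICE_TREE
	help
	  When using a device tree, provide the absolute path to a device
	  tree blob to be linked into the Xen image. Leave empty to not
	  embed any DTB.
```

Because Kconfig emits the value into the generated autoconf header, the old AFLAGS-y define becomes redundant.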
xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped
Introduce two feature flags to tell the domain whether it is
direct-mapped or not. It allows the guest kernel to make informed
decisions on things such as swiotlb-xen enablement.
The introduction of both flags (XENFEAT_direct_mapped and
XENFEAT_not_direct_mapped) allows the guest kernel to avoid any
guesswork if one of the two is present, or to fall back to the current
checks if neither is present.
XENFEAT_direct_mapped is always set for non-auto-translated guests.
For auto-translated guests, only Dom0 on ARM is direct-mapped. Also,
see is_domain_direct_mapped() which refers to auto-translated guests:
xen/include/asm-arm/domain.h:is_domain_direct_mapped
xen/include/asm-x86/domain.h:is_domain_direct_mapped
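The decision logic the two flags enable on the guest side can be sketched like this. Everything here is a stand-in: the flag values, the feature store, and the legacy heuristic are invented for illustration and do not reflect the real numbering or the kernel's actual checks.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the three-way decision: trust either flag if
 * present, otherwise fall back to the pre-existing guesswork.
 */
enum { XENFEAT_direct_mapped = 1, XENFEAT_not_direct_mapped = 2 };

static bool features[8];                        /* mock feature store */
static bool xen_feature(int f) { return features[f]; }
static bool legacy_guess_direct_mapped(void) { return false; } /* stand-in */

static bool guest_is_direct_mapped(void)
{
    if (xen_feature(XENFEAT_direct_mapped))
        return true;
    if (xen_feature(XENFEAT_not_direct_mapped))
        return false;
    /* Neither flag present: older Xen, use the current checks. */
    return legacy_guess_direct_mapped();
}
```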
Bertrand Marquis [Mon, 15 Mar 2021 10:38:30 +0000 (10:38 +0000)]
xen/arm: Use register_t type of cpuinfo entries
All CPU identification registers that we store in the cpuinfo
structure are 64bit on arm64 and 32bit on arm32, so storing the values
as 32bit on arm64 discards the higher bits, which might contain
information in the future.
This patch changes the types in cpuinfo to register_t (which is 32bit
on arm32 and 64bit on arm64) and adds the necessary padding inside the
unions.
For consistency, uint64_t entries are also changed to register_t on
64bit systems.
It also fixes all prints that use the bit values from cpuinfo directly
to use PRIregister and adapts the printed value to include all bits
available on the architecture.
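A simplified illustration (not Xen's actual headers) of why a register_t field plus a matching PRIregister format preserves all implemented bits. This models the arm32 side; on arm64 register_t would be uint64_t and PRIregister a 16-digit format.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* arm32 flavour of the pair; the real definitions are per-architecture. */
typedef uint32_t register_t;
#define PRIregister "08"PRIx32

struct cpuinfo_sketch {
    union {
        register_t bits;  /* full-width raw ID register value */
        /* named bitfields sit alongside this in the real struct */
    } midr;
};

static void format_midr(const struct cpuinfo_sketch *c, char *buf, size_t len)
{
    /* Prints every bit the architecture implements, zero-padded. */
    snprintf(buf, len, "0x%"PRIregister, c->midr.bits);
}
```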
Norbert Manthey [Fri, 26 Feb 2021 14:41:39 +0000 (15:41 +0100)]
xenstore: handle daemon creation errors
In rare cases, the path to the daemon socket cannot be created as it is
longer than PATH_MAX. Instead of failing with a NULL pointer dereference,
terminate the application with an error message.
This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.
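The shape of the fix can be sketched as below: build the socket path with a bounded snprintf and fail loudly instead of dereferencing a NULL pointer when the result would not fit. The function name and error handling are illustrative, not the daemon's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical sketch: detect an over-long socket path up front. The
 * caller would terminate the application with an error message rather
 * than continue with an unusable (NULL) path.
 */
static bool make_socket_path(char *buf, size_t buflen,
                             const char *rundir, const char *name)
{
    int n = snprintf(buf, buflen, "%s/%s", rundir, name);
    if (n < 0 || (size_t)n >= buflen) {
        fprintf(stderr, "socket path too long\n");
        return false;
    }
    return true;
}
```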
Norbert Manthey [Fri, 26 Feb 2021 14:41:38 +0000 (15:41 +0100)]
xenstore_client: handle memory on error
In case a command fails, also free the memory. As this is for the CLI
client, currently the leaked memory is freed right after receiving the
error, as the application terminates next.
Similarly, if the allocation fails, do not use the NULL pointer
afterwards, but instead error out.
This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.
Julien Grall [Sat, 20 Feb 2021 17:54:13 +0000 (17:54 +0000)]
xen/arm: mm: flush_page_to_ram() only need to clean to PoC
At the moment, flush_page_to_ram() both cleans and invalidates the
page to PoC.
The goal of flush_page_to_ram() is to prevent corruption when the
guest has disabled the cache (so a cache line may still be dirty) and
would otherwise read stale content.
Per this definition, invalidating the line is not necessary. In fact,
it may be counter-productive, as the line may be (speculatively)
accessed again shortly afterwards, which would then incur an expensive
access to memory. More generally, we should avoid interfering too much
with the cache.
Therefore, flush_page_to_ram() is updated to only clean the page to
PoC.
The performance impact of this change will depend on your
workload/processor.
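A toy model to make the distinction concrete (this is not cache-maintenance code; on arm64 the real operations are "dc cvac" for clean versus "dc civac" for clean & invalidate):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy cache-line model: "clean" writes a dirty line back but keeps it
 * resident, so a later access still hits; "clean & invalidate"
 * additionally evicts it, forcing the next access out to memory.
 */
struct cache_line { bool valid, dirty; };

static void clean(struct cache_line *l)
{
    l->dirty = false;               /* written back, still cached */
}

static void clean_invalidate(struct cache_line *l)
{
    l->dirty = false;
    l->valid = false;               /* evicted: next access misses */
}
```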
Jan Beulich [Thu, 1 Apr 2021 14:44:24 +0000 (16:44 +0200)]
x86/EFI: drop stale section special casing when generating base relocs
As of commit a6066af5b142 ("xen/init: Annotate all command line
parameter infrastructure as const") .init.setup has been part of .init.
As of commit 544ad7f5caf5 ("xen/init: Move initcall infrastructure into
.init.data") .initcall* have been part of .init. Hence neither can be
encountered as a stand-alone section in the final binaries anymore.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Thu, 1 Apr 2021 14:43:50 +0000 (16:43 +0200)]
x86/ucode: log blob date also for AMD
Like Intel, AMD also records the date in their blobs. The field was
merely misnamed as "data_code" so far; this was perhaps meant to be
"date_code". Split it into individual fields, just like we did for Intel
some time ago, and extend the message logged after a successful update.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86/vioapic: issue EOI to dpci when switching pin to edge trigger mode
When an IO-APIC pin is switched from level to edge trigger mode the
IRR bit is cleared, so it can be used as a way to EOI an interrupt at
the IO-APIC level.
Such an EOI however does not get forwarded to the dpci code like it's
done for a local APIC initiated EOI. This change adds the code to
notify dpci of such an EOI, so that dpci and the interrupt controller
stay in sync.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/vioapic: top word redir entry writes don't trigger interrupts
Top word writes just update the destination of the interrupt; since
there's no change to the masking or the trigger mode, no guest
interrupt injection can result from such a write. Add an assert to
that effect.
Requested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
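The invariant behind the assert can be sketched as follows. In a 64-bit IO-APIC redirection entry the destination lives in the top 32-bit word, while the mask bit (16) and trigger-mode bit (15) live in the bottom word, so a write that only replaces the top word can never change masking or triggering. The helper below is illustrative, not the vioapic code itself.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RTE_MASKED (1ULL << 16)   /* mask bit, bottom word */
#define RTE_TRIG   (1ULL << 15)   /* trigger-mode bit, bottom word */

/* Does replacing only the top 32 bits alter mask or trigger mode? */
static bool top_word_write_affects_mask_or_trig(uint64_t old_rte,
                                                uint32_t new_top)
{
    uint64_t new_rte = (old_rte & 0xffffffffULL) |
                       ((uint64_t)new_top << 32);
    return ((new_rte ^ old_rte) & (RTE_MASKED | RTE_TRIG)) != 0;
}
```

By construction the function returns false for every input, which is exactly what the new assert relies on.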
George Dunlap [Wed, 24 Mar 2021 17:24:31 +0000 (17:24 +0000)]
CHANGELOG.md: Make PV shim smaller by factoring out HVM-specific shadow code
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Wed, 24 Mar 2021 13:24:45 +0000 (13:24 +0000)]
CHANGELOG.md: Add entries for emulation
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Wed, 24 Mar 2021 16:20:28 +0000 (16:20 +0000)]
CHANGELOG.md: Add entries for CI loop
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Tue, 23 Mar 2021 17:06:20 +0000 (17:06 +0000)]
CHANGELOG.md: NetBSD lib/gnttab support
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Tue, 23 Mar 2021 16:58:42 +0000 (16:58 +0000)]
CHANGELOG.md: Add dom0/domU zstd compression support
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Tue, 23 Mar 2021 16:52:25 +0000 (16:52 +0000)]
CHANGELOG.md: Add named PCI devices
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
George Dunlap [Tue, 23 Mar 2021 13:55:57 +0000 (13:55 +0000)]
Intel Processor Trace Support: Add CHANGELOG.md and SUPPORT.md entries
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Ian Jackson <ian.jackson@citrix.com>
Jan Beulich [Tue, 30 Mar 2021 13:32:59 +0000 (15:32 +0200)]
x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()
15 apparently once used to be the last valid type to request a callback
for, and the dimension of the respective array. The arrays meanwhile are
larger than this (in a benign way, i.e. no caller ever sets a mask bit
higher than 15), dimensioned by SH_type_unused. Have the ASSERT()s
follow suit and add build time checks at the call sites.
Sadly at least some Clang versions aren't as flexible with
_Static_assert() as gcc is - they demand a truly integer constant
expression, while gcc also permits constant variables.
Also adjust a comment naming the wrong of the two functions.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
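The portability point can be illustrated as below. An ARRAY_SIZE() expression over the callback table is a true integer constant expression, so both gcc and clang accept it in _Static_assert; a const variable is not, and clang rejects it. The table name and size here are stand-ins for the shadow code's own.

```c
#include <assert.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

typedef void (*hash_callback_t)(void);
/* Stand-in table, dimensioned by a hypothetical SH_type_unused of 24. */
static hash_callback_t callbacks[24];

/* Portable: ARRAY_SIZE(...) is an integer constant expression. */
_Static_assert(ARRAY_SIZE(callbacks) == 24, "callback table size");

/*
 * Not portable (clang rejects it as not an integer constant
 * expression, while gcc is more permissive):
 *
 *   static const unsigned int n = ARRAY_SIZE(callbacks);
 *   _Static_assert(n == 24, "callback table size");
 */
```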
Jan Beulich [Tue, 30 Mar 2021 12:40:24 +0000 (14:40 +0200)]
VT-d: restore flush hooks when disabling qinval
Leaving the hooks untouched is at best a latent risk: There may well be
cases where some flush is needed, which then needs carrying out the
"register" way.
Switch from u<N> to uint<N>_t while needing to touch the function
headers anyway.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Tue, 30 Mar 2021 12:39:54 +0000 (14:39 +0200)]
VT-d: re-order register restoring in vtd_resume()
For one, FECTL must be written last - the interrupt shouldn't be unmasked
without first having written the data and address needed to actually
deliver it. In the common case (when dma_msi_set_affinity() doesn't end
up bailing early) this happens from init_vtd_hw(), but for this to
actually have the intended effect we shouldn't subsequently overwrite
what was written there - this is only benign when old and new settings
match. Instead we should restore the registers ahead of calling
init_vtd_hw(), just for the unlikely case of dma_msi_set_affinity()
bailing early.
In the moved code drop some stray casts as well.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
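The ordering requirement can be sketched with a mock register file: restore the fault-event data and address registers before FECTL, so unmasking can never happen while stale routing information is in place. This is only a model; real code issues ordered volatile MMIO writes through the driver's accessors, which is what actually prevents the compiler from reordering them.

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the VT-d fault-event register block; names follow the spec. */
struct mock_iommu_regs {
    uint32_t fedata, feaddr, feuaddr, fectl;
};

static void restore_fault_event_regs(struct mock_iommu_regs *r,
                                     uint32_t fedata, uint32_t feaddr,
                                     uint32_t feuaddr, uint32_t fectl)
{
    r->fedata  = fedata;   /* message data first */
    r->feaddr  = feaddr;   /* then the delivery address */
    r->feuaddr = feuaddr;
    r->fectl   = fectl;    /* last: this write may unmask the interrupt */
}
```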
Jan Beulich [Tue, 30 Mar 2021 12:38:45 +0000 (14:38 +0200)]
x86: fix build when NR_CPUS == 1
In this case the compiler is recognizing that no valid array indexes
remain (in x2apic_cluster()'s access to per_cpu(cpu_2_logical_apicid,
...)), but oddly enough isn't really consistent about the checking it
does (see the code comment).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Using RING_GET_RESPONSE() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected). Safe usage of a response
generally requires taking a local copy.
Provide a RING_COPY_RESPONSE() macro to use instead of
RING_GET_RESPONSE() and an open-coded memcpy(). This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.
Use a volatile source to prevent the compiler from reordering or
omitting the copy.
This generalizes the similar RING_COPY_REQUEST() macro added in 3f20b8def0.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Juergen Gross <jgross@suse.com>
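The essence of the macro can be sketched as below: the response is copied out of the shared ring through a volatile lvalue, so the compiler must perform exactly one read of the other-end-writable memory and can neither re-read nor elide it. The structures here are simplified stand-ins for the real ring layout.

```c
#include <assert.h>

struct resp { int id; int status; };

#define RING_SIZE 8  /* power of two, as in the real rings */

struct shared_ring { struct resp ring[RING_SIZE]; };

/*
 * Sketch of RING_COPY_RESPONSE(): take a local copy via a volatile
 * source so later uses of *dst cannot be "optimized" into fresh reads
 * of the shared (and possibly attacker-modified) ring memory.
 */
#define RING_COPY_RESPONSE_SKETCH(sring, idx, dst) do {                 \
    volatile const struct resp *src_ =                                  \
        &(sring)->ring[(idx) & (RING_SIZE - 1)];                        \
    *(dst) = *src_;  /* single copy from a volatile source */           \
} while (0)
```

All validation should then be done on the local copy, never on the ring entry itself.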
Jan Beulich [Tue, 30 Mar 2021 12:33:48 +0000 (14:33 +0200)]
xen/decompress: make helper symbols static
The individual decompression CUs need to only surface their top level
functions to other code. Arrange for everything else to be static, to
make sure no undue uses of that code exist or will appear without
explicitly noticing. (In some cases this also results in code size
reduction, but since this is all init-only code this probably doesn't
matter very much.)
In the LZO case also take the opportunity and convert u8 to uint8_t
where lines get touched anyway.
The downside is that the top level functions will now be non-static
in stubdom builds of libxenguest, but I think that's acceptable. This
does require declaring them first, though, as the compiler warns about
the lack of declarations.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wl@xen.org>