Hongyan Xia [Tue, 5 Feb 2019 17:04:56 +0000 (17:04 +0000)]
x86/mm: drop old page table APIs
Two sets of old APIs, alloc/free_xen_pagetable() and lXe_to_lYe(), are
now dropped to avoid the dependency on the direct map.
There are two special cases which have not yet been rewritten to the new
APIs and thus need special treatment:
rpt in smpboot.c cannot use ephemeral mappings yet. The problem is that
rpt is read and written in context switch code, but the mapping
infrastructure is NOT context-switch-safe, meaning we cannot map rpt in
one domain and unmap in another. Until the mapping infrastructure
supports context switches, rpt has to remain globally mapped.
Also, lXe_to_lYe() during Xen image relocation cannot be converted into
map/unmap pairs. We cannot hold on to mappings while the mapping
infrastructure is being relocated! It is enough to remove the direct map
in the second e820 pass, so we still use the direct map (<4GiB) in Xen
relocation (which is during the first e820 pass).
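As an illustration of the conversion pattern used throughout this series (a
minimal sketch, not one of the actual hunks), a walk that previously relied on
the direct map, e.g.

    l3_pgentry_t *l3t = l4e_to_l3e(*pl4e);          /* direct-map based */

becomes an ephemeral map/unmap pair:

    l3_pgentry_t *l3t = map_l3t_from_l4e(*pl4e);    /* per-CPU mapping */
    /* ... read or update l3t[...] ... */
    unmap_domain_page(l3t);                         /* must be paired */

The map_l3t_from_l4e()/unmap_domain_page() names reflect the new helpers used
elsewhere in the series; the two special cases above are exactly the places
where such pairing is not (yet) possible.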
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Wei Liu [Mon, 4 Feb 2019 17:57:33 +0000 (17:57 +0000)]
x86/smpboot: switch clone_mapping() to new APIs
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v10:
- switch to unmap_domain_page() for pl3e in the middle because it is
guaranteed to be overwritten later.
Changed in v7:
- change patch title
- remove initialiser of pl3e.
- combine the initialisation of pl3e into a single assignment.
- use the new alloc_map_clear() helper.
- use the normal map_domain_page() in the error path.
Wei Liu [Mon, 4 Feb 2019 17:48:45 +0000 (17:48 +0000)]
x86/smpboot: add exit path for clone_mapping()
We will soon need to clean up page table mappings in the exit path.
No functional change.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v7:
- edit commit message.
- begin with rc = 0 and set it to -ENOMEM ahead of if().
Wei Liu [Mon, 4 Feb 2019 17:00:59 +0000 (17:00 +0000)]
efi: switch to new APIs in EFI code
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v7:
- add blank line after declaration.
- rename efi_l4_pgtable into efi_l4t.
- pass the mapped efi_l4t to copy_mapping() instead of map it again.
- use the alloc_map_clear_xen_pt() API.
- unmap pl3e, pl2e, l1t earlier.
Wei Liu [Mon, 4 Feb 2019 16:01:03 +0000 (16:01 +0000)]
efi: use new page table APIs in copy_mapping
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v8:
- remove redundant commit message.
- unmap l3src based on va instead of mfn.
- re-structure if condition around l3dst.
Changed in v7:
- hoist l3 variables out of the loop to avoid repetitive mappings.
Wei Liu [Thu, 31 Jan 2019 18:49:36 +0000 (18:49 +0000)]
x86_64/mm: switch to new APIs in setup_m2p_table
While doing so, avoid repetitive mapping of l2_ro_mpt by keeping it
across loop iterations, and only unmap and re-map it when crossing a 1G
boundary.
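Roughly, the caching pattern looks like this (an illustrative sketch, not the
actual hunk; UNMAP_DOMAIN_PAGE() both unmaps and NULLs the pointer and
tolerates NULL):

    l2_pgentry_t *l2_ro_mpt = NULL;
    unsigned int prev_slot = ~0U;

    for ( ; va < end; va += stride )
    {
        unsigned int slot = l3_table_offset(va);

        if ( slot != prev_slot )                 /* crossed a 1G boundary */
        {
            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
            l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[slot]);
            prev_slot = slot;
        }

        /* ... write the M2P L2 entry covering va ... */
    }

    UNMAP_DOMAIN_PAGE(l2_ro_mpt);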
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v8:
- re-structure if condition around l2_ro_mpt.
- reword the commit message.
Changed in v7:
- avoid repetitive mapping of l2_ro_mpt.
- edit commit message.
- switch to alloc_map_clear_xen_pt().
Wei Liu [Tue, 29 Jan 2019 14:03:48 +0000 (14:03 +0000)]
x86/mm: switch to new APIs in modify_xen_mappings
Page tables allocated in that function should be mapped and unmapped
now.
Note that pl2e may now be mapped and unmapped in different iterations, so
we need to add clean-ups for that.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changed in v7:
- use normal unmap in the error path.
Wei Liu [Tue, 29 Jan 2019 13:56:43 +0000 (13:56 +0000)]
x86/mm: switch to new APIs in map_pages_to_xen
Page tables allocated in that function should be mapped and unmapped
now.
Take the opportunity to avoid a potential double map in
map_pages_to_xen() by initialising pl1e to NULL and only mapping it if it
was not mapped earlier.
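The pattern is roughly (a simplified sketch; the placeholder condition and
surrounding code stand in for the real logic in map_pages_to_xen()):

    l1_pgentry_t *pl1e = NULL;

    if ( need_new_l1_table )                  /* hypothetical placeholder */
        pl1e = virt_to_xen_l1e(virt);         /* already returns a mapped pointer */

    if ( !pl1e )                              /* not mapped by the path above */
        pl1e = map_l1t_from_l2e(*pl2e) + l1_table_offset(virt);

    /* ... update *pl1e ... */

    unmap_domain_page(pl1e);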
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
Changed in v10:
- avoid a potential double map.
- drop RoB due to this change.
Wei Liu [Tue, 29 Jan 2019 12:42:23 +0000 (12:42 +0000)]
x86/mm: rewrite virt_to_xen_l*e
Rewrite those functions to use the new APIs. Modify their callers to unmap
the pointer returned. Since alloc_xen_pagetable_new() is almost never
useful unless accompanied by page clearing and a mapping, introduce a
helper alloc_map_clear_xen_pt() for this sequence.
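A minimal sketch of such a helper (illustrative, not the exact committed
code; the real helper also has to report the MFN of the new page table back
to the caller, hence the pmfn parameter shown here):

    void *alloc_map_clear_xen_pt(mfn_t *pmfn)
    {
        mfn_t mfn = alloc_xen_pagetable_new();
        void *ptr;

        if ( mfn_eq(mfn, INVALID_MFN) )
            return NULL;

        ptr = map_domain_page(mfn);
        clear_page(ptr);

        if ( pmfn )
            *pmfn = mfn;

        return ptr;
    }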
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
Changed in v10:
- remove stale include.
- s/alloc_map_clear_xen_pt/alloc_mapped_pagetable/g.
- fix mis-hunks.
Changed in v9:
- use domain_page_map_to_mfn() around the L3 table locking logic.
- remove vmap_to_mfn() changes since we now use xen_map_to_mfn().
Changed in v8:
- s/virtual address/linear address/.
- BUG_ON() on NULL return in vmap_to_mfn().
Changed in v7:
- remove a comment.
- use l1e_get_mfn() instead of converting things back and forth.
- add alloc_map_clear_xen_pt().
- unmap before the next mapping to reduce mapcache pressure.
- use normal unmap calls instead of the macro in error paths because
unmap can handle NULL now.
xen/arm: smmuv1: Revert associating the group pointer with the S2CR
Revert the code that associates the group pointer with the S2CR, as this
code causes an issue when the SMMU device has more than one master
device with the same stream-id. This issue was introduced by commit
0435784cc75d ("xen/arm: smmuv1: Intelligent SMR allocation").
Reverting the code will not impact the use of the SMMU if two devices use the
same stream-id, but each device will be in a separate group. This is the same
behaviour as before the code was merged.
Julien Grall [Tue, 16 Jun 2020 15:33:12 +0000 (16:33 +0100)]
xen/arm64: Place a speculation barrier following a ret instruction
Some CPUs can speculate past a RET instruction and potentially perform
speculative accesses to memory before processing the return.
There is no known gadget available after the RET instruction today.
However some of the registers (such as in check_pending_guest_serror())
may contain a value provided by the guest.
In order to harden the code, it would be better to add a speculation
barrier after each RET instruction. The performance impact is meant to
be negligible as the speculation barrier is not meant to be
architecturally executed.
Rather than manually inserting a speculation barrier, use a macro
which overrides the mnemonic RET and replaces it with RET + SB. We need to
use the opcode for RET to prevent any macro recursion.
This patch is only covering the assembly code. C code would need to be
covered separately using the compiler support.
Note that the definition of the sb macro needs to be moved earlier in
asm-arm/macros.h so it can be used by the new macro.
This is part of the work to mitigate straight-line speculation.
The current interrupt pass-through code will set up a timer for each
interrupt injected to the guest that requires an EOI from the guest.
Such a timer performs two actions if the guest doesn't EOI the
interrupt within a given period of time. The first one is deasserting
the virtual line, the second is performing an EOI of the physical
interrupt source if one is required.
The deasserting of the guest virtual line is wrong, since it messes
with the interrupt status of the guest. This seems to have been done
in order to compensate for missing deasserts when certain interrupt
controller actions are performed. The original motivation of the
introduction of the timer was to fix issues when a GSI was shared
between different guests. We believe that other changes in the
interrupt handling code (ie: proper propagation of EOI related actions
to dpci) will have fixed such errors now.
Performing an EOI of the physical interrupt source is redundant, since
there's already a timer that takes care of this for all interrupts,
not just the HVM dpci ones, see irq_guest_action_t struct eoi_timer
field.
Since neither of the actions performed by the dpci timer is required,
remove it altogether.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
When pins are cleared from either ISR or IRR as part of the
initialization sequence forward the clearing of those pins to the dpci
EOI handler, as it is equivalent to an EOI. Not doing so can bring the
interrupt controller state out of sync with the dpci handling logic,
that expects a notification when a pin has been EOI'ed.
Fixes: 7b3cb5e5416 ('IRQ injection changes for HVM PCI passthru.') Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/vpic: don't trigger unmask event until end of init
Wait until the end of the init sequence to trigger the unmask event.
Note that it will be triggered unconditionally, but that's harmless if
no unmask actually happened.
While there, change the variable type to bool.
Suggested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
x86/vpic: force int output to low when in init mode
When the PIC is in the init sequence, prevent interrupt delivery. The
state of the registers is in the process of being set during the init
phase, so it makes sense to prevent any int line changes during that
process.
Suggested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Mon, 19 Apr 2021 13:29:39 +0000 (15:29 +0200)]
x86/CPUID: add further "fast repeated string ops" feature flags
Like ERMS this can always be exposed to guests, but I guess once we
introduce full validation we want to make sure we don't reject incoming
policies with any of these set when in the raw/host policies they're
clear.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Mon, 19 Apr 2021 13:26:22 +0000 (15:26 +0200)]
x86/shadow: adjust callback arrays
Some of them have entries with stale comments. Rather than correcting
these comments, re-arrange how these arrays get populated: Use dedicated
element initializers, serving the purpose of what the comments did so
far. This then also makes these arrays independent of the actual
ordering of the individual SH_type_*.
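The mechanism meant here is C's designated array initializers; a generic,
self-contained illustration (not the actual shadow tables, which index by
SH_type_* and point at the per-level shadow callbacks):

    typedef void (*hash_callback_t)(void);

    enum { SH_type_a, SH_type_b, SH_type_c, SH_type_max };

    static void cb_a(void) { }
    static void cb_b(void) { }

    /* Each entry is placed by its index rather than by position, so the
     * array no longer depends on the ordering of the enumeration; unnamed
     * slots are implicitly NULL. */
    static hash_callback_t const callbacks[SH_type_max] = {
        [SH_type_a] = cb_a,
        [SH_type_b] = cb_b,
    };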
While tightening respective ASSERT()s in hash_{vcpu,domain}_foreach(),
also tighten related ones in shadow_hash_{insert,delete}().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
Andrew Cooper [Fri, 16 Apr 2021 15:56:57 +0000 (16:56 +0100)]
tools: Drop XGETTEXT from Tools.mk.in
This hunk was missing from the work to drop gettext as a build dependency.
Fixes: e21a6a4f96 ("tools: Drop gettext as a build dependency") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Julien Grall <jgrall@amazon.com>
xen/arm: guest_walk: Only generate necessary offsets/masks
At the moment, we are computing offsets/masks for each level and
granularity. This is a bit of a waste given that we only need to
know the offsets/masks for the granularity used by the guest.
All the LPAE information can easily be inferred with just the
page shift for a given granularity and the level.
So rather than providing a set of helpers per granularity, we can
provide a single set that takes the granularity and the level in
parameters.
With the new helpers in place, we can rework guest_walk_ld() to
only compute necessary information.
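For illustration, the sort of generic helper this allows (a sketch with
hypothetical names, not the actual Xen code): with gs the page shift of the
granularity (12/14/16 for 4KB/16KB/64KB) and levels numbered 0 (top) to 3
(leaf), each level resolves gs - 3 bits, so:

    #include <stdint.h>

    /* Number of address bits resolved below the given lookup level. */
    static unsigned int lpae_level_shift(unsigned int gs, unsigned int level)
    {
        return gs + (3 - level) * (gs - 3);
    }

    /* Table index of address 'addr' at the given level. */
    static unsigned int lpae_table_offset(uint64_t addr, unsigned int gs,
                                          unsigned int level)
    {
        return (addr >> lpae_level_shift(gs, level)) & ((1U << (gs - 3)) - 1);
    }

E.g. for a 4KB granule (gs = 12) this yields the familiar shifts 39/30/21/12
for levels 0 to 3.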
Julien Grall [Sat, 23 Jan 2021 17:48:45 +0000 (17:48 +0000)]
xen/arm: Include asm/asm-offsets.h and asm/macros.h on every assembly files
In a follow-up patch we may want to automatically replace some
mnemonics (such as ret) with a different sequence.
To ensure all the assembly files will include asm/macros.h, it is best to
include it automatically in every assembly file. This can be done via
config.h.
It was necessary to include a few more headers as dependency:
- <asm/asm_defns.h> to define sizeof_*
- <xen/page-size.h> which is already a latent issue given STACK_ORDER
rely on PAGE_SIZE.
Unfortunately the build system will use -D__ASSEMBLY__ when generating
the linker script. A new option -D__LINKER__ is introduced and used for
the linker script to avoid including headers (such as asm/macros.h) that
may not be compatible with the syntax.
Lastly, take the opportunity to remove both asm/asm-offsets.h and
asm/macros.h from the various assembly files as they are now
automagically included.
Andrew Cooper [Thu, 29 Oct 2020 19:53:28 +0000 (19:53 +0000)]
x86/pv: Improve dom0_update_physmap() with CONFIG_SPECULATIVE_HARDEN_BRANCH
dom0_update_physmap() is mostly called in two tight loops, where the lfences
hidden in is_pv_32bit_domain() have a substantial impact.
None of the boot time construction needs protection against malicious
speculation, so use a local variable and calculate is_pv_32bit_domain() just
once.
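The shape of the change is roughly (an illustrative sketch, not the actual
diff):

    /* Before: the predicate (and the lfence hidden in it) runs per iteration. */
    for ( i = 0; i < count; i++ )
        if ( is_pv_32bit_domain(d) )
            /* ... compat layout ... */;

    /* After: evaluate once; boot-time construction needs no hardening. */
    bool compat = is_pv_32bit_domain(d);

    for ( i = 0; i < count; i++ )
        if ( compat )
            /* ... compat layout ... */;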
Reformat some of the code for legibility now that the volume has reduced,
and remove some gratuitous negations.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 16 Apr 2021 12:44:01 +0000 (14:44 +0200)]
string: drop redundant declarations
These standard functions shouldn't need custom declarations. The only
case where redundancy might be needed is if there were inline functions
there. But we don't have any here (anymore). Prune the per-arch headers
of duplicate declarations while moving the asm/string.h inclusion past
the declarations.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Fri, 16 Apr 2021 12:43:10 +0000 (14:43 +0200)]
lib: move 64-bit div/mod compiler helpers
These were built for 32-bit architectures only (the same code could,
with some tweaking, sensibly be used to provide TI-mode helpers on
64-bit arch-es) - retain this property, while still avoiding to have
a CU without any contents at all. For this, Arm's CONFIG_64BIT gets
generalized.
Note that we imply "32-bit arch" to be the same as BITS_PER_LONG == 32,
i.e. we aren't (not just here) prepared to have a 64-bit arch with
BITS_PER_LONG == 32. Yet even if we supported such, likely the compiler
would get away there without invoking these helpers, so the code would
remain unused in practice.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Jan Beulich [Fri, 16 Apr 2021 12:41:48 +0000 (14:41 +0200)]
lib: move muldiv64()
Make this a separate archive member under lib/. While doing so, don't
move latently broken x86 assembly though: Fix the constraints, such
that properly extending inputs to 64-bit won't just be a side effect of
needing to copy registers, and such that we won't fail to clobber %rdx.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 16 Apr 2021 12:37:36 +0000 (14:37 +0200)]
bunzip: replace INIT
While tools/libs/guest/xg_private.h has its own (non-conflicting for our
purposes) __init, which hence needs to be #undef-ed, there's no other
need for this abstraction.
Requested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <jgrall@amazon.com>
Commit e1de4c196a2e ("x86/timer: Fix boot on Intel systems using ITSSPRC
static PIT clock gating") was reported to cause boot failures on certain
AMD Ryzen systems.
Refine the fix to do nothing in the default case, and only attempt to
configure legacy replacement mode if IRQ0 is found to not be working. If
legacy replacement mode doesn't help, undo it before falling back to other IRQ
routing configurations.
In addition, introduce an "hpet" command line option so this heuristic
can be overridden. Since it makes little sense to introduce just
"hpet=legacy-replacement", also allow for a boolean argument as well as
"broadcast" to replace the separate "hpetbroadcast" option.
Reported-by: Frédéric Pierret <frederic.pierret@qubes-os.org> Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Frédéric Pierret <frederic.pierret@qubes-os.org>
Andrew Cooper [Wed, 24 Mar 2021 14:33:04 +0000 (14:33 +0000)]
x86/hpet: Factor hpet_enable_legacy_replacement_mode() out of hpet_setup()
... in preparation to introduce a second caller.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Frédéric Pierret <frederic.pierret@qubes-os.org>
It was committed despite multiple objections. The agreed upon fix is a
different variation of the same original patch, and the delta between the two
is far from clear.
By reverting this commit first, the fixes are clear and coherent as individual
patches, and in the appropriate form for backport to the older trees.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
xen/arm: Prevent Dom0 from being loaded when using dom0less
This patch prevents dom0 from being loaded: when the dom0 kernel is not
found and at least one domU is present, building dom0 is skipped and we
go forward to build the domUs.
xen/arm: Clarify how the domid is decided in create_domUs()
This patch adds a comment in create_domUs() right before the call to
domain_create() to explain the importance of the pre-increment operator
on the variable max_init_domid: it ensures that domid 0 is allocated
only during start_xen(), by create_dom0(), and not on any other possible
code path leading to domain_create().
xen/arm: Reinforce use of is_hardware_domain
There are a few places on Arm where we use pretty much an open-coded
version of is_hardware_domain(). The main difference is that the helper
will also block speculation (not yet implemented on Arm).
The existing users are not on hot paths, so blocking speculation
would not hurt when it is implemented. So remove the open-coded
versions within the Arm codebase.
Jan Beulich [Thu, 15 Apr 2021 11:43:51 +0000 (13:43 +0200)]
x86: avoid building COMPAT code when !HVM && !PV32
It was probably a mistake to, over time, drop various CONFIG_COMPAT
conditionals from x86-specific code, as we now have a build
configuration again where we'd prefer this to be unset. Arrange for
CONFIG_COMPAT to actually be off in this case, dealing with fallout.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Jan Beulich [Thu, 15 Apr 2021 11:35:32 +0000 (13:35 +0200)]
x86: slim down hypercall handling when !PV32
In such a build various of the compat handlers aren't needed. Don't
reference them from the hypercall table, and compile out those which
aren't needed for HVM. Also compile out switch_compat(), which has no
purpose in such a build.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Jan Beulich [Thu, 15 Apr 2021 11:34:29 +0000 (13:34 +0200)]
x86: don't build unused entry code when !PV32
Except for the initial part of cstar_enter, compat/entry.S is all dead
code in this case. Further, along the lines of the PV conditionals we
already have in entry.S, make code PV32-conditional there too (to a
fair part because this code actually references compat/entry.S).
This has the side effect of moving the tail part (now at compat_syscall)
of the code out of .text.entry (in line with e.g. compat_sysenter).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org>
Andrew Cooper [Fri, 2 Apr 2021 13:10:25 +0000 (14:10 +0100)]
x86/cpuid: Advertise no-lmsl unilaterally to hvm guests
While part of the original AMD64 spec, Long Mode Segment Limit was a feature
not picked up by Intel, and therefore didn't see much adoption in software.
AMD have finally dropped the feature from hardware, and allocated a CPUID bit
to indicate its absence.
Xen has never supported the feature for guests, even when running on capable
hardware, so advertise the feature's absence unilaterally.
There is nothing specifically wrong with exposing this bit to PV guests, but
the PV ABI doesn't include a working concept of MSR_EFER in the first place,
so exposing it to PV guests would be out-of-place.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 13 Apr 2021 08:18:08 +0000 (10:18 +0200)]
x86/EPT: minor local variable adjustment in ept_set_entry()
Not having direct_mmio (used only once anyway) as a local variable gets
the epte_get_entry_emt() invocation here in better sync with the other
ones. While at it also reduce ipat's scope.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Tue, 13 Apr 2021 08:14:23 +0000 (10:14 +0200)]
VT-d: improve save/restore of registers across S3
The static allocation of the save space is not only very inefficient
(most of the array slots won't ever get used), but is also the sole
reason for a build-time upper bound on the number of IOMMUs. Introduce
a structure containing just the one needed field we can't (easily)
restore from other in-memory state, and allocate the respective
array dynamically.
Take the opportunity and make the FEUADDR write dependent upon
x2apic_enabled, as is already the case in dma_msi_set_affinity().
Also alter properties of nr_iommus: static, unsigned, and __initdata.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Mon, 12 Apr 2021 10:37:19 +0000 (12:37 +0200)]
x86/shadow: adjust is_pv_*() checks
To cover for "x86: correct is_pv_domain() when !CONFIG_PV" (or any other
change along those lines) we should prefer is_hvm_*(), as it may become
a build time constant while is_pv_*() generally won't.
Also when a domain pointer is in scope, prefer is_*_domain() over
is_*_vcpu().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:34:04 +0000 (12:34 +0200)]
x86/shadow: only 4-level guest code needs building when !HVM
In order to limit #ifdef-ary, provide "stub" #define-s for
SH_type_{l1,fl1,l2}_{32,pae}_shadow and SHF_{L1,FL1,L2}_{32,PAE}.
The change in shadow_vcpu_init() is necessary to cover for "x86: correct
is_pv_domain() when !CONFIG_PV" (or any other change along those lines)
- we should only rely on is_hvm_*() to become a build time constant.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:32:50 +0000 (12:32 +0200)]
x86/shadow: SH_type_l2h_shadow is PV-only
..., i.e. being used only with 4 guest paging levels. Drop its L2/PAE
alias and adjust / drop conditionals. Use >= 4 where touching them
anyway, in preparation for 5-level paging.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:32:18 +0000 (12:32 +0200)]
x86/shadow: don't open-code SHF_* shorthands
Use SHF_L1_ANY, SHF_32, SHF_PAE, as well as SHF_64, and introduce
SHF_FL1_ANY.
Note that in shadow_audit_tables() this has the effect of no longer
(I assume mistakenly, or else I don't see why the respective callback
table entry isn't NULL) excluding SHF_L2H_64.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:31:19 +0000 (12:31 +0200)]
x86/shadow: move shadow_set_l<N>e() to their own source file
The few GUEST_PAGING_LEVELS dependencies (of shadow_set_l2e() only) can
be easily expressed by function parameters; I suppose the extra indirect
call is acceptable for the increasingly little used 32-bit non-PAE case.
This way shadow_set_l[12]e(), each of which compiles to almost 1k of
code, need building just once.
The implication is the need for some "relaxation" in types.h: The
underlying PTE types don't vary anymore (and aren't expected to down the
road), so they as well as some basic helpers can be exposed even in the
new, artificial GUEST_PAGING_LEVELS == 0 case.
Almost pure code movement - exceptions are the conversion of
"#if GUEST_PAGING_LEVELS == 2" to runtime conditionals and style
corrections (including to avoid open-coding mfn_to_maddr() and
PAGE_OFFSET()).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:30:13 +0000 (12:30 +0200)]
x86/shadow: polish shadow_write_entries()
First of all, avoid the initial dummy write: Try to write the actual
new value instead, and start the loop from 1 if this was successful.
Further, drop safe_write_entry() and use write_atomic() instead. This
eliminates the need for the BUILD_BUG_ON() there at the same time.
Then
- use const and unsigned,
- drop a redundant NULL check,
- don't open-code PAGE_OFFSET() and IS_ALIGNED(),
- adjust comment style.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
Jan Beulich [Mon, 12 Apr 2021 10:26:54 +0000 (12:26 +0200)]
gunzip: drop INIT{,DATA} and STATIC
There's no need for the extra abstraction.
Requested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Jan Beulich [Mon, 12 Apr 2021 10:26:18 +0000 (12:26 +0200)]
libxenguest: simplify kernel decompression
In all cases the kernel build makes available the uncompressed size in
the final 4 bytes of the bzImage payload. Utilize this to avoid
repeated realloc()ing of the output buffer.
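A sketch of how that size can be read off the payload (illustrative; the
actual libxenguest plumbing and error handling are omitted, and the caller
must have checked size >= 4):

    #include <stddef.h>
    #include <stdint.h>

    /* The bzImage payload ends with the uncompressed size, as a
     * little-endian 32-bit value in its final 4 bytes. */
    static uint32_t payload_output_size(const unsigned char *blob, size_t size)
    {
        const unsigned char *p = blob + size - 4;

        return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }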
As a side effect this also addresses the previous mistaken return of 0
(success) from xc_try_{bzip2,lzma,xz}_decode() in case
xc_dom_register_external() would have failed.
As another side effect this also addresses the first error path of
_xc_try_lzma_decode() previously bypassing lzma_end().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org>
Andrew Cooper [Sat, 16 Jan 2021 16:09:10 +0000 (16:09 +0000)]
xen/xsm: Improve alloc/free of evtchn buckets
Currently, flask_alloc_security_evtchn() is called in loops of
64 (EVTCHNS_PER_BUCKET), which for non-dummy implementations is a function
pointer call even in the no-op case. The non no-op case only sets a single
constant, and doesn't actually fail.
Spectre v2 protections has made function pointer calls far more expensive, and
64 back-to-back calls is a waste. Rework the APIs to pass the size of the
bucket instead, and call them once.
No practical change, but {alloc,free}_evtchn_bucket() should be rather more
efficient now.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>
xen/gunzip: Fix build with clang after 33bc2a8495f7
The compilation will fail when building Xen with clang and
CONFIG_DEBUG=y:
make[4]: Leaving directory '/oss/xen/xen/common/libelf'
INIT_O gunzip.init.o
Error: size of gunzip.o:.text is 0x00000019
This is because the function init_allocator() will not be inlined
and is not part of the init section.
Fix it by marking init_allocator() with INIT.
Fixes: 33bc2a8495f7 ("xen/gunzip: Allow perform_gunzip() to be called multiple times") Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Julien Grall <jgrall@amazon.com> Acked-by: Jan Beulich <jbeulich@suse.com>
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
To avoid the need to add a cast, do away with the only forward-declared
struct hypfs_dyndata.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Juergen Gross <jgross@suse.com>
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Drop hvm_irq_size(), which exists for just this purpose.
There is a difference in generated code: xzalloc_bytes() forces
SMP_CACHE_BYTES alignment. I think we not only don't need this here, but
actually don't want it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Boris Ostrovsky [Fri, 9 Apr 2021 07:22:04 +0000 (09:22 +0200)]
x86/vpt: simplify locking argument to write_{,un}lock
Make pt_adjust_vcpu() call write_{,un}lock with less indirection, like
create_periodic_time() already does.
Requested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Boris Ostrovsky [Fri, 9 Apr 2021 07:21:27 +0000 (09:21 +0200)]
x86/vpt: do not take pt_migrate rwlock in some cases
Commit 8e76aef72820 ("x86/vpt: fix race when migrating timers between
vCPUs") addressed XSA-336 by introducing a per-domain rwlock that was
intended to protect periodic timer during VCPU migration. Since such
migration is an infrequent event no performance impact was expected.
Unfortunately this turned out not to be the case: on a fairly large
guest (92 VCPUs) we've observed as much as 40% TPCC performance
regression with some guest kernels. Further investigation pointed to
pt_migrate read lock taken in pt_update_irq() as the largest contributor
to this regression. With a large number of VCPUs and a large number of VMEXITs
(from where pt_update_irq() is always called), the update of an atomic in
read_lock() is thought to be the main cause.
Stephen Brennan analyzed locking pattern and classified lock users as
follows:
1. Functions which read (maybe write) all periodic_time instances attached
to a particular vCPU. These are functions which use pt_vcpu_lock() such
as pt_restore_timer(), pt_save_timer(), etc.
2. Functions which want to modify a particular periodic_time object.
These functions lock whichever vCPU the periodic_time is attached to, but
since the vCPU could be modified without holding any lock, they are
vulnerable to XSA-336. Functions in this group use pt_lock(), such as
pt_timer_fn() or destroy_periodic_time().
3. Functions which not only want to modify the periodic_time, but also
would like to modify the =vcpu= fields. These are create_periodic_time()
or pt_adjust_vcpu(). They create XSA-336 conditions for group 2, but we
can't simply hold 2 vcpu locks due to the deadlock risk.
Roger then pointed out that group 1 functions don't really need to hold
the pt_migrate rwlock and that instead groups 2 and 3 should hold the per-vcpu
lock whenever they modify per-vcpu timer lists.
Suggested-by: Stephen Brennan <stephen.s.brennan@oracle.com> Suggested-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Stephen Brennan <stephen.s.brennan@oracle.com>
The for loop in unmap_domain_pirq is unnecessarily complicated, with
several places where the index is incremented, and also different
exit conditions spread across the loop body.
Simplify it by looping over each possible PIRQ using the for loop
syntax, and remove all possible in-loop exit points.
No functional change intended.
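The intended structure is roughly (illustrative only, not the actual hunk):

    for ( i = 0; i < nr; i++ )
    {
        struct pirq *info = pirq_info(d, pirq + i);

        if ( !info )
            continue;               /* nothing to tear down for this slot */

        /* ... unmap / clean up this PIRQ, recording (not returning) errors ... */
    }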
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Fri, 9 Apr 2021 07:20:15 +0000 (09:20 +0200)]
x86/shadow: encode full GFN in magic MMIO entries
Since we don't need to encode all of the PTE flags, we have enough bits
in the shadow entry to store the full GFN. Limit use of literal numbers
a little and instead derive some of the involved values. Sanity-check
the result via BUILD_BUG_ON()s.
This then allows dropping from sh_l1e_mmio() again the guarding against
too large GFNs. It needs replacing by an L1TF safety check though, which
in turn requires exposing cpu_has_bug_l1tf.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 9 Apr 2021 07:19:18 +0000 (09:19 +0200)]
x86/PV32: avoid TLB flushing after mod_l3_entry()
32-bit guests may not depend upon the side effect of using ordinary
4-level paging when running on a 64-bit hypervisor. For L3 entry updates
to take effect, they have to use a CR3 reload. Therefore there's no need
to issue a paging structure invalidating TLB flush in this case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:18:51 +0000 (09:18 +0200)]
x86/PV: restrict TLB flushing after mod_l[234]_entry()
Just like we avoid to invoke remote root pt flushes when all uses of an
L4 table can be accounted for locally, the same can be done for all of
L[234] for the linear pt flush when the table is a "free floating" one,
i.e. it is pinned but not hooked up anywhere. While this situation
doesn't occur very often, it can be observed.
Since this breaks one of the implications of the XSA-286 fix, drop the
flush_root_pt_local variable again and set ->root_pgt_changed directly,
just like it was before that change.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:18:17 +0000 (09:18 +0200)]
x86/PV: _PAGE_RW changes may take fast path of mod_l[234]_entry()
The only time _PAGE_RW matters when validating an L2 or higher entry is
when a linear page table is tried to be installed (see the comment ahead
of define_get_linear_pagetable()). Therefore when we disallow such at
build time, we can allow _PAGE_RW changes to take the fast paths there.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:17:04 +0000 (09:17 +0200)]
x86: limit amount of INT3 in IND_THUNK_*
There's no point having every replacement variant to also specify the
INT3 - just have it once in the base macro. When patching, NOPs will get
inserted, which are fine to speculate through (until reaching the INT3).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:16:22 +0000 (09:16 +0200)]
x86: guard against straight-line speculation past RET
Under certain conditions CPUs can speculate into the instruction stream
past a RET instruction. Guard against this just like 3b7dab93f240
("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
did - by inserting an "INT $3" insn. It's merely the mechanics of how to
achieve this that differ: A set of macros gets introduced to post-
process RET insns issued by the compiler (or living in assembly files).
Unfortunately for clang this requires further features their built-in
assembler doesn't support: We need to be able to override insn mnemonics
produced by the compiler (which may be impossible, if internally
assembly mnemonics never get generated).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:15:38 +0000 (09:15 +0200)]
x86/PV: make post-migration page state consistent
When a page table page gets de-validated, its type reference count drops
to zero (and PGT_validated gets cleared), but its type remains intact.
XEN_DOMCTL_getpageframeinfo3, therefore, so far reported prior usage for
such pages. An intermediate write to such a page via e.g.
MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
return. In libxc the decision which pages to normalize / localize
depends solely on the type returned from the domctl. As a result without
further precautions the guest won't be able to tell whether such a page
has had its (apparent) PTE entries transitioned to the new MFNs.
Add a check of PGT_validated, thus consistently avoiding normalization /
localization in the tool stack.
Also use XEN_DOMCTL_PFINFO_NOTAB in the variable's initializer instead
of open-coding it.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:14:58 +0000 (09:14 +0200)]
libxg: don't use max policy in xc_cpuid_xend_policy()
Using max undermines the separation between default and max. For
example, turning off AVX512F on an MPX-capable system silently turns on
MPX, despite this not being part of the default policy anymore. Since
the information is used only for determining what to convert 'x' to (but
not to e.g. validate '1' settings), the effect of this change is
identical for guests with (suitable) "cpuid=" settings to that of the
changes separating default from max and then converting (e.g.) MPX from
being part of default to only being part of max for guests without
(affected) "cpuid=" settings.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 9 Apr 2021 07:12:51 +0000 (09:12 +0200)]
x86: refine guest_mode()
The 2nd of the assertions as well as the macro's return value have been
assuming we're on the primary stack. While for most IST exceptions we
switch back to the main one when user mode was interrupted, for #DF we
intentionally never do, and hence a #DF actually triggering on a user
mode insn (which then is still a Xen bug) would in turn trigger this
assertion, rather than cleanly logging state.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Julien Grall [Thu, 21 Jan 2021 11:12:00 +0000 (11:12 +0000)]
xen/page_alloc: Don't hold the heap_lock when clearing PGC_need_scrub
Currently, the heap_lock is held when clearing PGC_need_scrub in
alloc_heap_pages(). However, this is unnecessary because the only caller
(mark_page_offline()) that can concurrently modify the count_info is
using cmpxchg() in a loop.
Therefore, rework the code to avoid holding the heap_lock and use
test_and_clear_bit() instead.
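A simplified sketch of the resulting pattern (not the exact hunk;
_PGC_need_scrub stands for the bit number corresponding to PGC_need_scrub):

    /* Before: the flag was cleared while still holding heap_lock. */
    pg->count_info &= ~PGC_need_scrub;

    /* After: done outside the locked region with an atomic bit operation,
     * which is safe because the only concurrent modifier of count_info here
     * (mark_page_offline()) uses a cmpxchg() loop. */
    if ( test_and_clear_bit(_PGC_need_scrub, &pg->count_info) )
        /* ... adjust the scrub accounting ... */;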
Signed-off-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Wed, 7 Apr 2021 10:24:45 +0000 (12:24 +0200)]
fix for_each_cpu() again for NR_CPUS=1
Unfortunately aa50f45332f1 ("xen: fix for_each_cpu when NR_CPUS=1") has
caused quite a bit of fallout with gcc10, e.g. (there are at least two
more similar ones, and I didn't bother trying to find them all):
In file included from .../xen/include/xen/config.h:13,
from <command-line>:
core_parking.c: In function ‘core_parking_power’:
.../xen/include/asm/percpu.h:12:51: error: array subscript 1 is above array bounds of ‘long unsigned int[1]’ [-Werror=array-bounds]
12 | (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
.../xen/include/xen/compiler.h:141:29: note: in definition of macro ‘RELOC_HIDE’
141 | (typeof(ptr)) (__ptr + (off)); })
| ^~~
core_parking.c:133:39: note: in expansion of macro ‘per_cpu’
133 | core_tmp = cpumask_weight(per_cpu(cpu_core_mask, cpu));
| ^~~~~~~
In file included from .../xen/include/xen/percpu.h:4,
from .../xen/include/asm/msr.h:7,
from .../xen/include/asm/time.h:5,
from .../xen/include/xen/time.h:76,
from .../xen/include/xen/spinlock.h:4,
from .../xen/include/xen/cpu.h:5,
from core_parking.c:19:
.../xen/include/asm/percpu.h:6:22: note: while referencing ‘__per_cpu_offset’
6 | extern unsigned long __per_cpu_offset[NR_CPUS];
| ^~~~~~~~~~~~~~~~
One of the further errors even went as far as claiming that an array
index (range) of [0, 0] was outside the bounds of a [1] array, so
something fishy is pretty clearly going on there.
The compiler apparently wants to be able to see that the loop isn't
really a loop in order to avoid triggering such warnings, yet what
exactly makes it consider the loop exit condition constant and within
the [0, 1] range isn't obvious - using ((mask)->bits[0] & 1) instead of
cpumask_test_cpu() for example did _not_ help.
Re-instate a special form of for_each_cpu(), experimentally "proven" to
avoid the diagnostics.
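For reference, a sketch of the kind of special-casing this amounts to
(illustrative; the committed expression may differ in detail):

    #if NR_CPUS > 1
    #define for_each_cpu(cpu, mask)                 \
        for ( (cpu) = cpumask_first(mask);          \
              (cpu) < nr_cpu_ids;                   \
              (cpu) = cpumask_next(cpu, mask) )
    #else
    /* With a single possible CPU the "loop" runs exactly once with cpu == 0,
     * so the compiler can see that no out-of-bounds index is possible. */
    #define for_each_cpu(cpu, mask) for ( (cpu) = 0; (cpu) < 1; ++(cpu) )
    #endif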
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>