xenbits.xensource.com Git - xen.git/log
6 years ago  x86/shadow: don't enable shadow mode with too small a shadow allocation
Jan Beulich [Fri, 1 Feb 2019 11:03:26 +0000 (12:03 +0100)]
x86/shadow: don't enable shadow mode with too small a shadow allocation

We've had more than one report of host crashes after failed migration,
and in at least one case there was a hint that the shadow allocation
pool had been shrunk too far. Instead of just checking the pool for being
empty, check whether the pool is smaller than what
shadow_set_allocation() would minimally bump it to if it were invoked in
the first place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
master commit: 2634b997afabfdc5a972e07e536dfbc6febb4385
master date: 2018-11-30 12:10:39 +0100
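
A minimal sketch of the strengthened check (sh_min_allocation() is an assumed
helper name encapsulating that minimum; the actual patch may differ):

    /* Refuse to enable shadow mode when the pool is below the minimum
     * that shadow_set_allocation() would establish anyway. */
    if ( d->arch.paging.shadow.total_pages < sh_min_allocation(d) )
    {
        rv = -ENOMEM;
        goto out_locked;
    }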

6 years ago  ns16550/PCI: fix skipping of devices
Jan Beulich [Fri, 1 Feb 2019 11:02:45 +0000 (12:02 +0100)]
ns16550/PCI: fix skipping of devices

Selecting between single/multiple BAR mode should happen after checking
whether to skip the present device, or else multi-BAR devices won't be
skipped correctly, due to port_idx getting set to zero in that case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
master commit: c34fe0468acc61aca62422483c37a1309708f1cb
master date: 2018-11-30 12:07:33 +0100

6 years ago  x86/soft-reset: Drop gfn reference after calling get_gfn_query()
Andrew Cooper [Fri, 1 Feb 2019 11:02:15 +0000 (12:02 +0100)]
x86/soft-reset: Drop gfn reference after calling get_gfn_query()

get_gfn_query() internally takes the p2m lock, and this error path leaves it
locked.

This wasn't included in XSA-277 because the error path can only be triggered
by a carefully timed physmap operation concurrent with the domain being paused
and the toolstack issuing DOMCTL_soft_reset.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: e7969e917cef276318f722a607985a2e896aeb94
master date: 2018-11-22 17:58:46 +0000
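
For illustration, the balanced pattern looks roughly like this (get_gfn_query()
takes the p2m lock, put_gfn() drops it; the error handling is schematic):

    mfn = get_gfn_query(d, gfn, &p2mt);
    if ( !mfn_valid(mfn) )
    {
        put_gfn(d, gfn);   /* the drop that was missing on this error path */
        return -EINVAL;
    }
    /* ... use the mapping ... */
    put_gfn(d, gfn);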

6 years ago  x86/mem-sharing: Don't leave the altp2m lock held when nominating a page
Andrew Cooper [Fri, 1 Feb 2019 11:01:49 +0000 (12:01 +0100)]
x86/mem-sharing: Don't leave the altp2m lock held when nominating a page

get_gfn_type_access() internally takes the p2m lock, and nothing ever unlocks
it.  Switch to using the unlocked accessor instead.

This wasn't included in XSA-277 because neither mem-sharing nor altp2m are
supported.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d6e02850d3b45c9658457214a749cc48097bdef4
master date: 2018-11-22 17:58:46 +0000

6 years ago  x86/HVM: __hvm_copy() should not write to p2m_ioreq_server pages
Jan Beulich [Fri, 1 Feb 2019 11:00:45 +0000 (12:00 +0100)]
x86/HVM: __hvm_copy() should not write to p2m_ioreq_server pages

Commit 3bdec530a5 ("x86/HVM: split page straddling emulated accesses in
more cases") introduced a hvm_copy_to_guest_linear() attempt before
falling back to hvmemul_linear_mmio_write(). This is wrong for the
p2m_ioreq_server special case. That change widened a pre-existing issue
though: Other writes to such pages also need to be failed (or forced
through emulation), in particular hypercall buffer writes.

Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d7bff2bc003cd5fd8c618b70c62b8fcfd9cd187e
master date: 2018-11-15 16:42:25 +0100
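
Illustratively, the copy path needs a check of roughly this shape (the 4.9-era
HVMCOPY_* return value name is assumed here):

    /* Refuse direct writes to p2m_ioreq_server memory, so such accesses
     * are forced through (or failed by) emulation instead. */
    if ( writing && p2mt == p2m_ioreq_server )
    {
        put_page(page);
        return HVMCOPY_bad_gfn_to_mfn;
    }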

6 years ago  VMX: fix vmx_handle_eoi()
Jan Beulich [Fri, 1 Feb 2019 10:59:52 +0000 (11:59 +0100)]
VMX: fix vmx_handle_eoi()

In commit 303066fdb1e ("VMX: fix interaction of APIC-V and Viridian
emulation") I screwed up: Instead of clearing SVI, other ISR bits
should be taken into account.

Introduce a new helper set_svi(), split out of vmx_process_isr(), and
use it also from vmx_handle_eoi().

Following the problems in vmx_intr_assist() (see the still present big
block of debugging code there) also warn (once) if EOI'd vector and
original SVI don't match.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 45cb9a4123b5550eb1f84846fe5482acae1c13a3
master date: 2018-11-02 12:15:33 +0100
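
A hedged sketch of the split-out helper (SVI lives in bits 15:8 of the VMCS
guest interrupt status field; the mask handling shown is illustrative):

    static void set_svi(int isr_vector)
    {
        unsigned long status;

        __vmread(GUEST_INTR_STATUS, &status);
        status &= ~0xff00UL;                  /* clear the old SVI */
        status |= (isr_vector & 0xff) << 8;   /* highest in-service vector */
        __vmwrite(GUEST_INTR_STATUS, status);
    }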

6 years ago  xen/arm: vgic-v3: Don't create empty re-distributor regions
Julien Grall [Mon, 28 Jan 2019 22:54:52 +0000 (14:54 -0800)]
xen/arm: vgic-v3: Don't create empty re-distributor regions

At the moment, Xen is assuming the hardware domain will have the same
number of re-distributor regions as the host. However, as the
number of CPUs or the stride (e.g. on GICv4) may be different, we end up
exposing regions which do not contain any re-distributors.

When booting, Linux will go through all the re-distributor regions to
check whether a property (e.g. vLPIs) is available across all the
re-distributors. This will result in a data abort on empty regions
because there is no underlying re-distributor.

So we need to limit the number of regions exposed to the hardware
domain. The code is reworked to expose only the minimum number of regions
required by the hardware domain. It is assumed the regions will be
populated starting from the first one.

Lastly, rename vgic_v3_rdist_count to reflect the value returned by the
helper.

Reported-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit 54ec59f6b0b363c34cf1864d5214a05e35ea75ee)

6 years ago  xen/arm: vgic-v3: Delay the initialization of the domain information
Julien Grall [Mon, 1 Oct 2018 16:42:26 +0000 (17:42 +0100)]
xen/arm: vgic-v3: Delay the initialization of the domain information

A follow-up patch will need to know the number of vCPUs when
initializing the vGICv3 domain structure. However, this information is
not available at domain creation. It is only known once
XEN_DOMCTL_max_vcpus is called for that domain.

In order to have the max vCPU count available, delay the domain part of the
vGIC v3 initialization until the first vCPU of the domain is initialized.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Acked-but-disliked-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit 703d9d5ec13a0f487e7415174ba54e0e3ca158db)

6 years ago  xen/arm: check for multiboot nodes only under /chosen
Stefano Stabellini [Tue, 13 Nov 2018 16:45:49 +0000 (08:45 -0800)]
xen/arm: check for multiboot nodes only under /chosen

Make sure to look for multiboot compatible nodes only under
/chosen, not under any other path (depth <= 3).

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
[julien: Use sizeof(path) instead of len ]
Reviewed-by: Julien Grall <julien.grall@arm.com>
(cherry picked from commit c32e3689c546305d4eae53e6ccf9c8b4e048c7df)

6 years ago  xen/arm: gic: Ensure ordering between read of INTACK and shared data
Julien Grall [Tue, 23 Oct 2018 18:17:07 +0000 (19:17 +0100)]
xen/arm: gic: Ensure ordering between read of INTACK and shared data

When an IPI is generated by a CPU, the pattern looks roughly like:

  <write shared data>
  dsb(sy);
  <write to GIC to signal SGI>

On the receiving CPU we rely on the fact that, once we've taken the
interrupt, then the freshly written shared data must be visible to us.
Put another way, the CPU isn't going to speculate taking an interrupt.

Unfortunately, this assumption turns out to be broken.

Consider that CPUx wants to send an IPI to CPUy, which will cause CPUy
to read some shared_data. Before CPUx has done anything, a random
peripheral raises an IRQ to the GIC and the IRQ line on CPUy is raised.
CPUy then takes the IRQ and starts executing the entry code, heading
towards gic_handle_irq. Furthermore, let's assume that a bunch of the
previous interrupts handled by CPUy were SGIs, so the branch predictor
kicks in and speculates that irqnr will be <16 and we're likely to
head into handle_IPI. The prefetcher then grabs a speculative copy of
shared_data which contains a stale value.

Meanwhile, CPUx gets round to updating shared_data and asking the GIC
to send an SGI to CPUy. Internally, the GIC decides that the SGI is
more important than the peripheral interrupt (which hasn't yet been
ACKed) but doesn't need to do anything to CPUy, because the IRQ line
is already raised.

CPUy then reads the ACK register on the GIC, sees the SGI value which
confirms the branch prediction and we end up with a stale shared_data
value.

This patch fixes the problem by adding an smp_rmb() to the IPI entry
code in do_SGI.

At the same time document the write barrier.

Based on Linux commit f86c4fbd930ff6fecf3d8a1c313182bd0f49f496
"irqchip/gic: Ensure ordering between read of INTACK and shared data".

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov<andrii_anisov@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit 555e5f1bd26c4c1995357e9671b3e42a68d5ce8f)
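
The resulting barrier pairing, roughly (helper names are illustrative):

    /* sender (CPUx) */
    shared_data = value;
    dsb(sy);                /* make the data visible before raising the SGI */
    send_SGI(cpuy);         /* write to GICD_SGIR */

    /* receiver (CPUy), in do_SGI */
    sgi = read_IAR();       /* INTACK: acknowledge the interrupt */
    smp_rmb();              /* the added barrier: order the INTACK read
                             * before the shared_data read */
    consume(shared_data);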

6 years ago  xen/arm: gic: Ensure we have an ISB between ack and do_IRQ()
Julien Grall [Tue, 23 Oct 2018 18:17:06 +0000 (19:17 +0100)]
xen/arm: gic: Ensure we have an ISB between ack and do_IRQ()

Devices that expose their interrupt status registers via system
registers (e.g. Statistical profiling, CPU PMU, DynamIQ PMU, arch timer,
vgic (although unused by Linux), ...) rely on a context synchronising
operation on the CPU to ensure that the updated status register is
visible to the CPU when handling the interrupt. This usually happens as
a result of taking the IRQ exception in the first place, but there are
two race scenarios where this isn't the case.

For example, let's say we have two peripherals (X and Y), where Y uses a
system register for its interrupt status.

Case 1:
1. CPU takes an IRQ exception as a result of X raising an interrupt
2. Y then raises its interrupt line, but the update to its system
   register is not yet visible to the CPU
3. The GIC decides to expose Y's interrupt number first in the Ack
   register
4. The CPU runs the IRQ handler for Y, but the status register is stale

Case 2:
1. CPU takes an IRQ exception as a result of X raising an interrupt
2. CPU reads the interrupt number for X from the Ack register and runs
   its IRQ handler
3. Y raises its interrupt line and the Ack register is updated, but
   again, the update to its system register is not yet visible to the
   CPU.
4. Since the GIC drivers poll the Ack register, we read Y's interrupt
   number and run its handler without a context synchronisation
   operation, therefore seeing the stale register value.

In either case, we run the risk of missing an IRQ. This patch solves the
problem by ensuring that we execute an ISB in the GIC drivers prior
to invoking the interrupt handler.

Based on Linux commit 39a06b67c2c1256bcf2361a1f67d2529f70ab206
"irqchip/gic: Ensure we have an ISB between ack and ->handle_irq".

Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Andrii Anisov<andrii_anisov@epam.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit 177afec4556c676e5a1a958d1626226fbca2a696)
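
Sketch of the dispatch path after the change (identifiers illustrative):

    irq = read_IAR();          /* ack: the GIC returns the pending IRQ number */
    isb();                     /* context synchronisation, so system-register
                                * state updated by the device is visible */
    do_IRQ(regs, irq, is_fiq);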

6 years ago  VMX: allow migration of guests with SSBD enabled
Jan Beulich [Fri, 23 Nov 2018 10:50:17 +0000 (11:50 +0100)]
VMX: allow migration of guests with SSBD enabled

The backport of cd53023df9 ("x86/msr: Virtualise MSR_SPEC_CTRL.SSBD for
guests to use") did not mirror the PV side change into the HVM (VMX-
specific) code path.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

6 years ago  x86/dom0: Fix shadowing of PV guests with 2M superpages
Andrew Cooper [Tue, 20 Nov 2018 14:52:13 +0000 (15:52 +0100)]
x86/dom0: Fix shadowing of PV guests with 2M superpages

This is a straight backport of c/s 28d9a9a2d41759b9e5163037b759ac557aea767c
but with a different justification.

Dom0 may have superpages (e.g. initial P2M), and may be shadowed
(e.g. PV-L1TF).  Because of this incorrect check, when PV superpages are
disallowed (which is the security supported configuration), attempting to
shadow the P2M with its superpages still intact will fail.  A #PF will be
handed back to the kernel, rather than the superpage being splintered and
shadowed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

6 years ago  x86/dom0: Avoid using 1G superpages if shadowing may be necessary
Andrew Cooper [Tue, 20 Nov 2018 14:51:36 +0000 (15:51 +0100)]
x86/dom0: Avoid using 1G superpages if shadowing may be necessary

The shadow code doesn't support 1G superpages, and will hand #PF[RSVD] back to
guests.

For dom0s with 512GB of RAM or more (and subject to the P2M alignment), Xen's
domain builder might use 1G superpages.

Avoid using 1G superpages (falling back to 2M superpages instead) if there is
a reasonable chance that we may have to shadow dom0.  This assumes that there
are no circumstances where we will activate logdirty mode on dom0.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 96f6ee15ad7ca96472779fc5c083b4149495c584
master date: 2018-11-12 11:26:04 +0000

6 years ago  x86/shadow: shrink struct page_info's shadow_flags to 16 bits
Jan Beulich [Tue, 20 Nov 2018 14:50:57 +0000 (15:50 +0100)]
x86/shadow: shrink struct page_info's shadow_flags to 16 bits

This is to avoid it overlapping the linear_pt_count field needed for PV
domains. Introduce a separate, HVM-only pagetable_dying field to replace
the sole one left in the upper 16 bits.

Note that the accesses to ->shadow_flags in shadow_{pro,de}mote() get
switched to non-atomic, non-bitops operations, as {test,set,clear}_bit()
are not allowed on uint16_t fields and hence their use would have
required ugly casts. This is fine because all updates of the field ought
to occur with the paging lock held, and other updates of it use |= and
&= as well (i.e. using atomic operations here didn't really guard
against potentially racing updates elsewhere).

This is part of XSA-280.

Reported-by: Prgmr.com Security <security@prgmr.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
master commit: 789589968ed90e82a832dbc60e958c76b787be7e
master date: 2018-11-20 14:59:54 +0100

6 years ago  x86/shadow: move OOS flag bit positions
Jan Beulich [Tue, 20 Nov 2018 14:50:13 +0000 (15:50 +0100)]
x86/shadow: move OOS flag bit positions

In preparation of reducing struct page_info's shadow_flags field to 16
bits, lower the bit positions used for SHF_out_of_sync and
SHF_oos_may_write.

Instead of also adjusting the open coded use in _get_page_type(),
introduce shadow_prepare_page_type_change() to contain knowledge of the
bit positions to shadow code.

This is part of XSA-280.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
master commit: d68e1070c3e8f4af7a31040f08bdd98e6d6eac1d
master date: 2018-11-20 14:59:13 +0100

6 years ago  x86/mm: Don't perform flush after failing to update a guest's L1e
Andrew Cooper [Tue, 20 Nov 2018 14:49:39 +0000 (15:49 +0100)]
x86/mm: Don't perform flush after failing to update a guest's L1e

If the L1e update hasn't occurred, the flush cannot do anything useful.  This
skips the potentially expensive vcpumask_to_pcpumask() conversion, and
broadcast TLB shootdown.

More importantly however, we might be in the error path due to a bad va
parameter from the guest, and this should not propagate into the TLB flushing
logic.  The INVPCID instruction for example raises #GP for a non-canonical
address.

This is XSA-279.

Reported-by: Matthew Daley <mattd@bugfuzz.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 6c8d50288722672ecc8e19b0741a31b521d01706
master date: 2018-11-20 14:58:41 +0100

6 years ago  AMD/IOMMU: suppress PTE merging after initial table creation
Jan Beulich [Tue, 20 Nov 2018 14:49:01 +0000 (15:49 +0100)]
AMD/IOMMU: suppress PTE merging after initial table creation

The logic is not fit for this purpose, so simply disable its use until
it can be fixed / replaced. Note that this re-enables merging for the
table creation case, which was disabled as a (perhaps unintended) side
effect of the earlier "amd/iommu: fix flush checks". It relies on no
page getting mapped more than once (with different properties) in this
process, as that would still be beyond what the merging logic can cope
with. But arch_iommu_populate_page_table() guarantees this afaict.

This is part of XSA-275.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 937ef32565fa3a81fdb37b9dd5aa99a1b87afa75
master date: 2018-11-20 14:55:14 +0100

6 years ago  amd/iommu: fix flush checks
Roger Pau Monné [Tue, 20 Nov 2018 14:48:22 +0000 (15:48 +0100)]
amd/iommu: fix flush checks

Flush checking for AMD IOMMU didn't check whether the previous entry
was present, or whether the flags (writable/readable) changed in order
to decide whether a flush should be executed.

Fix this by taking the writable/readable/next-level fields into account,
together with the present bit.

Along these lines the flushing in amd_iommu_map_page() must not be
omitted for PV domains. The comment there was simply wrong: Mappings may
very well change, both their addresses and their permissions. Ultimately
this should honor iommu_dont_flush_iotlb, but to achieve this
amd_iommu_ops first needs to gain an .iotlb_flush hook.

Also make clear_iommu_pte_present() static, to demonstrate there's no
caller omitting the (subsequent) flush.

This is part of XSA-275.

Reported-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 1a7ffe466cd057daaef245b0a1ab6b82588e4c01
master date: 2018-11-20 14:52:12 +0100

6 years ago  stubdom/vtpm: fix memcmp in TPM_ChangeAuthAsymFinish
Olaf Hering [Mon, 18 Jun 2018 12:55:36 +0000 (14:55 +0200)]
stubdom/vtpm: fix memcmp in TPM_ChangeAuthAsymFinish

gcc8 spotted this error:
error: 'memcmp' reading 20 bytes from a region of size 8 [-Werror=stringop-overflow=]

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
(cherry picked from commit 22bf5be3237cb482a2ffd772ffd20ce37285eebf)
(cherry picked from commit dea9fc0e02d92f5e6d46680aa0a52fa758eca9c4)
(cherry picked from commit e907460fd61c350487ffee5d8aa375bef56bc81c)
Conflicts:
stubdom/Makefile
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

6 years ago  x86: work around HLE host lockup erratum
Jan Beulich [Wed, 7 Nov 2018 08:48:06 +0000 (09:48 +0100)]
x86: work around HLE host lockup erratum

XACQUIRE prefixed accesses to the 4Mb range of memory starting at 1Gb
are liable to lock up the processor. Disallow use of this memory range.

Unfortunately the available Core Gen7 and Gen8 spec updates are pretty
old, so I can only guess that they're similarly affected when Core Gen6
is and the Xeon counterparts are, too.

This is part of XSA-282.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: cc76410d20aff2cc07b268b0713dc1d2740c6e12
master date: 2018-11-07 09:33:24 +0100

6 years ago  x86: extend get_platform_badpages() interface
Jan Beulich [Wed, 7 Nov 2018 08:47:13 +0000 (09:47 +0100)]
x86: extend get_platform_badpages() interface

Use a structure so that, along with an address (now a frame number), an order
can also be specified.

This is part of XSA-282.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 8617e69fb8307b372eeff41d55ec966dbeba36eb
master date: 2018-11-07 09:32:08 +0100
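
A sketch of the extended interface, and of how the HLE workaround above would
use it (field names assumed):

    struct platform_bad_page {
        unsigned long mfn;     /* first frame of the bad range */
        unsigned int order;    /* range covers 2^order frames */
    };

    /* 4Mb at 1Gb: frame 1Gb >> PAGE_SHIFT = 0x40000; 4Mb = 1024 pages,
     * i.e. order 10. */
    { .mfn = 0x40000, .order = 10 },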

6 years ago  tools/dombuilder: Initialise vcpu debug registers correctly
Andrew Cooper [Mon, 5 Nov 2018 14:22:40 +0000 (15:22 +0100)]
tools/dombuilder: Initialise vcpu debug registers correctly

In particular, initialising %dr6 with the value 0 is buggy, because on
hardware supporting Transactional Memory, it will cause the sticky RTM bit to
be asserted, even though a debug exception from a transaction hasn't actually
been observed.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
master commit: 46029da12e5efeca6d957e5793bd34f2965fa0a1
master date: 2018-10-24 14:43:05 +0100

6 years ago  x86/domain: Initialise vcpu debug registers correctly
Andrew Cooper [Mon, 5 Nov 2018 14:22:07 +0000 (15:22 +0100)]
x86/domain: Initialise vcpu debug registers correctly

In particular, initialising %dr6 with the value 0 is buggy, because on
hardware supporting Transactional Memory, it will cause the sticky RTM bit to
be asserted, even though a debug exception from a transaction hasn't actually
been observed.

Introduce arch_vcpu_regs_init() to set various architectural defaults, and
reuse this in the hvm_vcpu_reset_state() path.

Architecturally, %edx's init state contains the processor's model information,
and 0xf looks to be a remnant of the old Intel processors.  We clearly have no
software which cares, seeing as it is wrong for the last decade's worth of
Intel hardware and for all other vendors, so let's use the value 0 for
simplicity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
x86/domain: Fix build with GCC 4.3.x

GCC 4.3.x can't initialise the user_regs structure like this.

Reported-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: dfba4d2e91f63a8f40493c4fc2db03fd8287f6cb
master date: 2018-10-24 14:43:05 +0100
master commit: 0a1fa635029d100d4b6b7eddb31d49603217cab7
master date: 2018-10-30 13:26:21 +0000

6 years ago  x86/boot: Initialise the debug registers correctly
Andrew Cooper [Mon, 5 Nov 2018 14:21:19 +0000 (15:21 +0100)]
x86/boot: Initialise the debug registers correctly

In particular, initialising %dr6 with the value 0 is buggy, because on
hardware supporting Transactional Memory, it will cause the sticky RTM bit to
be asserted, even though a debug exception from a transaction hasn't actually
been observed.

Move X86_DR6_DEFAULT into x86-defns.h along with the other architectural
register constants, and introduce a new X86_DR7_DEFAULT.  Use the existing
write_debugreg() helper, rather than opencoded inline assembly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 721da6d41a70fe08b3fcd9c31a62f6709a54c6ba
master date: 2018-10-24 14:43:05 +0100
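
For reference, the architectural reset values (DR6's RTM bit 16 is inverted,
so it reads as 1 when no transactional debug event is pending, which is why
initialising the register to 0 is wrong):

    #define X86_DR6_DEFAULT 0xffff0ff0UL  /* reserved-as-1 bits set, RTM = 1 */
    #define X86_DR7_DEFAULT 0x00000400UL  /* only the always-1 bit 10 set */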

6 years ago  x86/boot: enable NMIs after traps init
Sergey Dyasli [Mon, 5 Nov 2018 14:20:45 +0000 (15:20 +0100)]
x86/boot: enable NMIs after traps init

In certain scenarios, NMIs might be disabled during the Xen boot process.
Such a situation will cause alternative_instructions() to:

    panic("Timed out waiting for alternatives self-NMI to hit\n");

This bug was originally seen when using Tboot to boot Xen 4.11.

To prevent this from happening, enable NMIs during cpu_init() and
during __start_xen() for the BSP.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 072e054359a4d4a4f6c3fa09585667472c4f0f1d
master date: 2018-10-23 12:33:54 +0100

6 years ago  vtd: add missing check for shared EPT...
Paul Durrant [Mon, 5 Nov 2018 14:20:18 +0000 (15:20 +0100)]
vtd: add missing check for shared EPT...

...in intel_iommu_unmap_page().

This patch also includes some non-functional modifications in
intel_iommu_map_page().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: e30c47cd8be8ba73cfc1ec7b1ebd036464708a24
master date: 2018-10-04 14:53:57 +0200

6 years ago  x86: fix "xpti=" and "pv-l1tf=" yet again
Jan Beulich [Mon, 5 Nov 2018 14:19:54 +0000 (15:19 +0100)]
x86: fix "xpti=" and "pv-l1tf=" yet again

While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") indeed fixed "xpti=dom0", it broke "xpti=no-dom0", in that
this then became equivalent to "xpti=no". In particular, the presence
of "xpti=" alone on the command line means nothing as to which default
is to be overridden; "xpti=no-dom0", for example, ought to have no
effect for DomU-s, as this is distinct from both "xpti=no-dom0,domu"
and "xpti=no-dom0,no-domu".

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 8743d2dea539617e237c77556a91dc357098a8af
master date: 2018-10-04 14:49:56 +0200
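
Concretely, the intended semantics after the fix:

    xpti=no-dom0            # disable XPTI for dom0; DomU default unchanged
    xpti=no-dom0,domu       # disable for dom0, enable for DomU-s
    xpti=no-dom0,no-domu    # disable for both (same effect as xpti=no)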

6 years ago  x86: split opt_pv_l1tf
Jan Beulich [Mon, 5 Nov 2018 14:18:54 +0000 (15:18 +0100)]
x86: split opt_pv_l1tf

Use separate tracking variables for the hardware domain and DomU-s.

No functional change intended, but adjust the comment in
init_speculation_mitigations() to match prior as well as resulting code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 0b89643ef6ef14e2c2b731ca675d23e405ed69b1
master date: 2018-10-04 14:49:19 +0200

6 years ago  x86: split opt_xpti
Jan Beulich [Mon, 5 Nov 2018 14:18:25 +0000 (15:18 +0100)]
x86: split opt_xpti

Use separate tracking variables for the hardware domain and DomU-s.

No functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 51e0cb45932d80d4eeb59994ee2c3f3c597b0212
master date: 2018-10-04 14:48:18 +0200

6 years ago  x86: silence false log messages for plain "xpti" / "pv-l1tf"
Jan Beulich [Mon, 5 Nov 2018 14:17:37 +0000 (15:17 +0100)]
x86: silence false log messages for plain "xpti" / "pv-l1tf"

While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") claimed to have got rid of the 'parameter "xpti" has invalid
value "", rc=-22!' log message for "xpti" alone on the command line,
this wasn't the case (the option took effect nevertheless).

Fix this there as well as for plain "pv-l1tf".

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 2fb57e4beefeda923446b73f88b392e59b07d847
master date: 2018-09-28 17:12:14 +0200

6 years ago  x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled
Andrew Cooper [Wed, 10 Oct 2018 09:17:15 +0000 (09:17 +0000)]
x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled

c/s ac6a4500b "vvmx: set vmxon_region_pa of vcpu out of VMX operation to an
invalid address" was a real bugfix as described, but has a very subtle bug
which results in all VT-x instructions being usable by a guest.

The toolstack constructs a guest by issuing:

  XEN_DOMCTL_createdomain
  XEN_DOMCTL_max_vcpus

and optionally later, HVMOP_set_param to enable nested virt.

As a result, the call to nvmx_vcpu_initialise() in hvm_vcpu_initialise()
(which is what makes the above patch look correct during review) is actually
dead code.  In practice, nvmx_vcpu_initialise() first gets called when nested
virt is enabled, which is typically never.

As a result, the zeroed memory of struct vcpu causes nvmx_vcpu_in_vmx() to
return true before nested virt is enabled for the guest.

Fixing the order of initialisation is a work in progress for other reasons,
but not viable for security backports.

A compounding factor is that the vmexit handlers for all instructions, other
than VMXON, pass 0 into vmx_inst_check_privilege()'s vmxop_check parameter,
which skips the CR4.VMXE check.  (This is one of many reasons why nested virt
isn't a supported feature yet.)

However, the overall result is that when nested virt is not enabled by the
toolstack (i.e. the default configuration for all production guests), the VT-x
instructions (other than VMXON) are actually usable, and Xen very quickly
falls over the fact that the nvmx structure is uninitialised.

In order to fail safe in the supported case, re-implement all the VT-x
instruction handling using a single function with a common prologue, covering
all the checks which should cause #UD or #GP faults.  This deliberately
doesn't use any state from the nvmx structure, in case there are other lurking
issues.

This is XSA-278

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>
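
A hedged sketch of that common prologue's shape (helper and field names here
are assumptions for illustration, not the patch's actual identifiers):

    /* Checks shared by every VT-x instruction intercept (sketch). */
    static bool vmx_insn_checks_ok(struct vcpu *v)
    {
        if ( !nestedhvm_enabled(v->domain) ||
             !(v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) )
        {
            hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
            return false;                          /* #UD */
        }
        if ( hvm_get_cpl(v) != 0 )
        {
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
            return false;                          /* #GP(0) */
        }
        return true;
    }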

6 years ago  stubdom/grub.patches: Drop docs changes, for licensing reasons
Ian Jackson [Tue, 18 Sep 2018 10:25:20 +0000 (11:25 +0100)]
stubdom/grub.patches: Drop docs changes, for licensing reasons

The patch file 00cvs is an import of a new upstream version of
grub1 from upstream CVS.

Unfortunately, in the period covered by the update, upstream changed
the documentation licence from a simple permissive licence, to the GNU
"Free Documentation Licence" with Front and Back Cover Texts.

The Debian Project is of the view that use of the Front and Back Cover
Texts feature of the GFDL makes the resulting document not Free
Software, because of the mandatory redistribution of these immutable
texts.  (Personally, I agree.)

This is awkward because Debian do not want to ship non-free content.
So the Debian maintainers need to launder the upstream source code, to
remove the troublesome files.  This is an extra step when
incorporating new upstream versions.  It's particularly annoying for
security response, which often involves rebasing onto a new upstream
release.

grub1 is obsolete and the last change to Xen's PV grub1 stubdom code
was in 2016.  Furthermore, the grub1 documentation is not built and
installed by the Xen pv-grub stubdom Makefiles.

Therefore, remove all docs changes from stubdom/grub.patches.  This
means that there are now no longer any GFDL-licenced grub docs in
xen.git.

There is no user impact, and Debian is helped.  This change would
complicate any attempts to update to a new version of upstream grub1,
but it seems unlikely that such a thing will ever happen.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Juergen Gross <jgross@suse.com>
CC: pkg-xen-devel@lists.alioth.debian.org
Acked-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
(cherry picked from commit c62c53d61477dfeb63a47b0673c389082112babc)
(cherry picked from commit 94fba9f438a2c36ad9bf3a481a6013ddc7cf8cd9)
(cherry picked from commit ed024ef538cd10ec33c9edacd5e5f2016a5964d2)

6 years ago  tools/tests: fix an xs-test.c issue
Wei Liu [Mon, 20 Aug 2018 08:38:18 +0000 (09:38 +0100)]
tools/tests: fix an xs-test.c issue

The ret variable can be used uninitialised when iters is 0. Initialise
ret at the beginning to fix this issue.

Reported-by: Steven Haigh <netwiz@crc.id.au>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(cherry picked from commit 3a2b8525b883baa87fe89b3da58f5c09fa599b99)
(cherry picked from commit 33664f9a05401fac8f2c0be0bb7ee8a1851e4dcf)
(cherry picked from commit 788948bebcecca69bfac47e5514f2dc351dabad9)

6 years ago  x86/boot: Allocate one extra module slot for Xen image placement
Daniel Kiper [Mon, 8 Oct 2018 12:47:52 +0000 (14:47 +0200)]
x86/boot: Allocate one extra module slot for Xen image placement

Commit 9589927 (x86/mb2: avoid Xen image when looking for
module/crashkernel position) fixed relocation issues for the
Multiboot2 protocol. Unfortunately, it missed allocating a
module slot for Xen image placement in the early boot path.
So, let's fix it right now.

Reported-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 4c5f9dbebc0bd2afee1ecd936c74ffe65756950f
master date: 2018-09-27 11:17:47 +0100

6 years ago  x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries
Paul Durrant [Mon, 8 Oct 2018 12:47:18 +0000 (14:47 +0200)]
x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries

When emulating a rep I/O operation it is possible that the ioreq will
describe a single operation that spans multiple GFNs. This is fine as long
as all those GFNs fall within an MMIO region covered by a single device
model, but unfortunately the higher levels of the emulation code do not
guarantee that. This is something that should almost certainly be fixed,
but in the meantime this patch makes sure that MMIO is truncated at GFN
boundaries and hence the appropriate device model is re-evaluated for each
target GFN.

NOTE: This patch does not deal with the case of a single MMIO operation
      spanning a GFN boundary. That is more complex to deal with and is
      deferred to a subsequent patch.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Convert calculations to be 32-bit only.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 7626edeaca972e3e823535dcc44338f6b2f0b21f
master date: 2018-08-16 09:27:30 +0200
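
The truncation amounts to clamping the rep count so the accessed range stays
within the current GFN, roughly (names illustrative; note the 32-bit math
mentioned above):

    uint32_t off   = addr & ~PAGE_MASK;     /* offset into the current page */
    uint32_t space = PAGE_SIZE - off;       /* bytes left in this GFN */

    *reps = min_t(uint32_t, *reps, space / bytes_per_rep);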

6 years ago  x86/efi: split compiler vs linker support
Roger Pau Monné [Mon, 8 Oct 2018 12:46:43 +0000 (14:46 +0200)]
x86/efi: split compiler vs linker support

So that an ELF binary with support for EFI services will be built when
the compiler supports the MS ABI, regardless of the linker support for
PE.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
master commit: 93249f7fc17c1f3a2aa8bf9ea055aa326e93a4ae
master date: 2018-07-31 10:25:06 +0200

6 years ago  x86/efi: move the logic to detect PE build support
Roger Pau Monné [Mon, 8 Oct 2018 12:46:08 +0000 (14:46 +0200)]
x86/efi: move the logic to detect PE build support

So that it can be used by other components apart from the efi specific
code. By moving the detection code creating a dummy efi/disabled file
can be avoided.

This is required so that the conditional used to define the efi symbol
in the linker script can be removed and instead the definition of the
efi symbol can be guarded using the preprocessor.

The motivation behind this change is to be able to build Xen using lld
(the LLVM linker), that at least on version 6.0.0 doesn't work
properly with a DEFINED being used in a conditional expression:

ld    -melf_x86_64_fbsd  -T xen.lds -N prelink.o --build-id=sha1 \
    /root/src/xen/xen/common/symbols-dummy.o -o /root/src/xen/xen/.xen-syms.0
ld: error: xen.lds:233: symbol not found: efi

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
master commit: 18cd4997d26b9df95dda87503e41c823279a07a0
master date: 2018-07-31 10:24:22 +0200

6 years ago  x86/shutdown: use ACPI reboot method for Dell PowerEdge R540
Ross Lagerwall [Mon, 8 Oct 2018 12:45:28 +0000 (14:45 +0200)]
x86/shutdown: use ACPI reboot method for Dell PowerEdge R540

When EFI booting the Dell PowerEdge R540 it consistently wanders into
the weeds and gets an invalid opcode in the EFI ResetSystem call. This
is the same bug which affects the PowerEdge R740 so fix it in the same
way: quirk this hardware to use the ACPI reboot method instead.

BIOS Information
    Vendor: Dell Inc.
    Version: 1.3.7
    Release Date: 02/09/2018
System Information
    Manufacturer: Dell Inc.
    Product Name: PowerEdge R540

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 328ca55b7bd47e1324b75cce2a6c461308ecf93d
master date: 2018-06-28 09:29:13 +0200
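
The quirk itself is a DMI match table entry of roughly this form (Xen's
shutdown code uses Linux-style DMI tables; the exact field and callback names
are assumed):

    {
        .callback = override_reboot,   /* sets reboot_type = BOOT_ACPI */
        .ident    = "Dell PowerEdge R540",
        .matches  = {
            DMI_MATCH(DMI_SYS_VENDOR,   "Dell Inc."),
            DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
        },
    },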

6 years ago  x86/shutdown: use ACPI reboot method for Dell PowerEdge R740
Ross Lagerwall [Mon, 8 Oct 2018 12:44:36 +0000 (14:44 +0200)]
x86/shutdown: use ACPI reboot method for Dell PowerEdge R740

When EFI booting the Dell PowerEdge R740, it consistently wanders into the
weeds and gets an invalid opcode in the EFI ResetSystem call.
Quirk this hardware to use the ACPI reboot method instead.

Example stack trace:

----[ Xen-4.11-unstable  x86_64  debug=n   Not tainted ]----
CPU:    0
RIP:    e008:[<0000000000000017>] 0000000000000017
RFLAGS: 0000000000010202   CONTEXT: hypervisor
rax: 0000000066eb2ff0   rbx: ffff83005f627c20   rcx: 000000006c54e100
rdx: 0000000000000000   rsi: 0000000000000065   rdi: 000000107355f000
rbp: ffff83005f627c70   rsp: ffff83005f627b48   r8:  ffff83005f627b90
r9:  0000000000000000   r10: ffff83005f627c88   r11: 0000000000000000
r12: 0000000000000000   r13: 0000000000000cf9   r14: 0000000000000065
r15: ffff830000000000   cr0: 0000000080050033   cr4: 00000000003526e0
cr3: 000000107355f000   cr2: ffffc90000cff000
fsb: 0000000000000000   gsb: ffff88019f600000   gss: 0000000000000000
ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
Xen code around <0000000000000017> (0000000000000017):
 f0 d8 dd 00 f0 54 ff 00 <f0> 50 dd 00 f0 d8 dd 00 f0 a5 fe 00 f0 87 e9 00
Xen stack trace from rsp=ffff83005f627b48:
   ffff83005f627b50 ffffffffffffffda 000000006c547aaa ffff82d000000001
   ffff83005f627bec 000000107355f000 000000006c546fb8 ffff83107ffe3240
   0000000000000000 0000000000000000 8000000000000002 0000000000000000
   000000006c546b95 000000006c54c700 ffff83005f627bdc ffff83005f627be8
   000000005f616000 ffff83005f627c20 0000000000000000 0000000000000cf9
   ffff820080350001 000000000000000b ffff82d080351eda 0000000000000000
   0000000000000000 0000000000000000 0000000000000000 000000005f616000
   0000000000000000 ffff82d08095ff60 ffff82d08095ff60 000000f100000000
   ffff82d080296097 000000000000e008 0000000000000000 ffff83005f627c88
   0000000000000000 00000000fffffffe ffff82d0802959d2 ffff82d0802959d2
   000000008095f300 000000005f627c9c 00000000000000f8 0000000000000000
   00000000000000f8 ffff82d080932c00 0000000000000000 ffff82d08095f7c8
   ffff82d080932c00 0000000000000000 0000000000000000 ffff82d080295a9b
   ffff83005f627d98 ffff82d0802361f3 ffff82d080932c00 0000000080000000
   ffff83005f627d98 ffff82d080279a19 ffff82d08095f02c ffff82d080000000
   0000000000000000 00000000000000fb 0000000000000000 00000071484e54f6
   ffff831073542098 ffff82d08093ac78 ffff831072befd30 0000000000000000
   0000000000000000 0000000000000000 0000000000000000 0000000000000000
   0000000000000000 ffff82d08034f185 ffff82d080949460 0000000000000000
   ffff82d08095f270 0000000000000008 ffff83107357ae20 0000007146ce4bd3
Xen call trace:
   [<0000000000000017>] 0000000000000017
   [<ffff82d080351eda>] efi_reset_system+0x5a/0x90
   [<ffff82d080296097>] smp_send_stop+0x97/0xa0
   [<ffff82d0802959d2>] machine_restart+0x212/0x2d0
   [<ffff82d0802959d2>] machine_restart+0x212/0x2d0
   [<ffff82d080295a9b>] shutdown.c#__machine_restart+0xb/0x10
   [<ffff82d0802361f3>] smp_call_function_interrupt+0x53/0x80
   [<ffff82d080279a19>] do_IRQ+0x259/0x660
   [<ffff82d08034f185>] common_interrupt+0x85/0x90
   [<ffff82d0802c6152>] mwait-idle.c#mwait_idle+0x242/0x390
   [<ffff82d08026b446>] domain.c#idle_loop+0x86/0xc0

****************************************
Panic on CPU 0:
FATAL TRAP: vector = 6 (invalid opcode)
****************************************

dmidecode info:

BIOS Information:
    Vendor: Dell Inc.
    Version: 1.2.11
    Release Date: 10/19/2017
    BIOS Revision: 1.2
System Information:
    Manufacturer: Dell Inc.
    Product Name: PowerEdge R740

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: f97f774b5aa6b471d1fed1c451c89ec7457dadf2
master date: 2018-01-24 18:01:00 +0100

6 years ago  update Xen version to 4.9.4-pre
Jan Beulich [Mon, 8 Oct 2018 12:44:06 +0000 (14:44 +0200)]
update Xen version to 4.9.4-pre

6 years ago  update Xen version to 4.9.3  (tag: RELEASE-4.9.3)
Jan Beulich [Tue, 25 Sep 2018 14:04:02 +0000 (16:04 +0200)]
update Xen version to 4.9.3

6 years ago  x86: assorted array_index_nospec() insertions
Jan Beulich [Fri, 14 Sep 2018 11:29:43 +0000 (13:29 +0200)]
x86: assorted array_index_nospec() insertions

Don't chance having Spectre v1 (including BCBS) gadgets. In some of the
cases the insertions are more of a precautionary nature rather than there
provably being a gadget, but I think we should err on the safe (secure)
side here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 3f2002614af51dfd507168a1696658bac91155ce
master date: 2018-09-03 17:50:10 +0200
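
The canonical insertion pattern, for reference:

    /* Clamp a guest-controlled index so it cannot be used out of bounds
     * under speculation (a no-op for architecturally in-range values). */
    idx = array_index_nospec(idx, ARRAY_SIZE(table));
    val = table[idx];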

6 years ago  VT-d/dmar: iommu mem leak fix
Zhenzhong Duan [Fri, 14 Sep 2018 11:29:12 +0000 (13:29 +0200)]
VT-d/dmar: iommu mem leak fix

Release memory allocated for drhd iommu in error path.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: fd07b6648c4c8891dca5bd0f7ef174b6831f80b2
master date: 2018-08-27 11:37:24 +0200

6 years ago  rangeset: make inquiry functions tolerate NULL inputs
Jan Beulich [Fri, 14 Sep 2018 11:28:40 +0000 (13:28 +0200)]
rangeset: make inquiry functions tolerate NULL inputs

Rather than special casing the ->iomem_caps check in x86's
get_page_from_l1e() for the dom_xen case, let's be more tolerant in
general, along the lines of rangeset_is_empty(): A never allocated
rangeset can't possibly contain or overlap any range.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: ad0a9f273d6d6f0545cd9b708b2d4be581a6cadd
master date: 2018-08-17 13:54:40 +0200
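
Sketch of the tolerated-NULL behaviour (mirroring rangeset_is_empty();
signature simplified and body abbreviated):

    bool rangeset_contains_range(struct rangeset *r,
                                 unsigned long s, unsigned long e)
    {
        if ( !r )
            return false;   /* never-allocated: contains no range at all */
        /* ... normal lookup ... */
    }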

6 years ago  x86/setup: Avoid OoB E820 lookup when calculating the L1TF safe address
Andrew Cooper [Fri, 14 Sep 2018 11:28:12 +0000 (13:28 +0200)]
x86/setup: Avoid OoB E820 lookup when calculating the L1TF safe address

A number of corner cases (most obviously, no-real-mode and no Multiboot memory
map) can end up with e820_raw.nr_map being 0, at which point the L1TF
calculation will underflow.

Spotted by Coverity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: 3e4ec07e14bce81f6ae22c31ff1302d1f297a226
master date: 2018-08-16 18:10:07 +0100

6 years ago  x86/hvm/ioreq: MMIO range checking completely ignores direction flag
Paul Durrant [Fri, 14 Sep 2018 11:27:45 +0000 (13:27 +0200)]
x86/hvm/ioreq: MMIO range checking completely ignores direction flag

hvm_select_ioreq_server() is used to route an ioreq to the appropriate
ioreq server. For MMIO this is done by comparing the range of the ioreq
to the ranges registered by the device models of each ioreq server.
Unfortunately the calculation of the range of the ioreq completely ignores
the direction flag and thus may calculate the wrong range for comparison.
Thus the ioreq may either be routed to the wrong server or erroneously
terminated by null_ops.

NOTE: The patch also fixes whitespace in the switch statement to make it
      style compliant.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 60a56dc0064a00830663ffe48215dcd080cb9504
master date: 2018-08-15 14:14:06 +0200
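
Sketch of the corrected range calculation (an ioreq really does carry
df/addr/count/size fields; the computation shown is illustrative):

    /* With the direction flag set, a rep operation walks downwards, so
     * the touched range starts below p->addr rather than at it. */
    uint64_t start = p->df ? p->addr - (p->count - 1) * p->size : p->addr;
    uint64_t end   = start + p->count * p->size - 1;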

6 years ago  x86/vlapic: Bugfixes and improvements to vlapic_{read,write}()
Andrew Cooper [Fri, 14 Sep 2018 11:27:17 +0000 (13:27 +0200)]
x86/vlapic: Bugfixes and improvements to vlapic_{read,write}()

Firstly, there is no 'offset' boundary check on the non-32-bit write path
before the call to vlapic_read_aligned(), which allows an attacker to read
beyond the end of vlapic->regs->data[], which is only 1024 bytes long.

However, as the backing memory is a domheap page, and misaligned accesses get
chunked down to single bytes across page boundaries, I can't spot any
XSA-worthy problems which occur from the overrun.

On real hardware, bad accesses don't instantly crash the machine.  Their
behaviour is undefined, but the domain_crash() prohibits sensible testing.
Behave more like other x86 MMIO and terminate bad accesses with appropriate
defaults.

While making these changes, clean up and simplify the smaller-access
handling.  In particular, avoid pointer-based mechanisms for 1/2-byte reads so
as to avoid forcing the value to be spilled to the stack.

  add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-175 (-175)
  function                                     old     new   delta
  vlapic_read                                  211     142     -69
  vlapic_write                                 304     198    -106

Finally, there are a plethora of read/write functions in the vlapic namespace,
so rename these to vlapic_mmio_{read,write}() to make their purpose more
clear.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: b6f43c14cef3af8477a9eca4efab87dd150a2885
master date: 2018-08-10 13:27:24 +0100

6 years ago  x86/vmx: Avoid hitting BUG_ON() after EPTP-related domain_crash()
Andrew Cooper [Fri, 14 Sep 2018 11:26:38 +0000 (13:26 +0200)]
x86/vmx: Avoid hitting BUG_ON() after EPTP-related domain_crash()

If the EPTP pointer can't be located in the altp2m list, the domain
is (legitimately) crashed.

Under those circumstances, execution will continue and is guaranteed to hit
the BUG_ON(idx >= MAX_ALTP2M) (unfortunately, just out of context).

Return from vmx_vmexit_handler() after the domain_crash(), which also has the
side effect of reentering the scheduler more promptly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 48dbb2dbe9d9f92a2890a15bb48a0598c065b9f8
master date: 2018-08-02 10:10:43 +0100
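
The shape of the fix (illustrative):

    if ( idx == INVALID_ALTP2M )     /* EPTP not in the altp2m list */
    {
        domain_crash(v->domain);
        return;                      /* don't run into BUG_ON(idx >= MAX_ALTP2M) */
    }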

6 years ago  libxl: start pvqemu when 9pfs is requested
Stefano Stabellini [Tue, 14 Aug 2018 22:13:09 +0000 (15:13 -0700)]
libxl: start pvqemu when 9pfs is requested

PV 9pfs requires the PV backend in QEMU. Make sure that libxl knows it.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit 47bc2c29b5a875e5f4abd36f2cb9faa594299f6c)

6 years ago  x86: write to correct variable in parse_pv_l1tf()
Jan Beulich [Wed, 15 Aug 2018 12:23:12 +0000 (14:23 +0200)]
x86: write to correct variable in parse_pv_l1tf()

Apparently a copy-and-paste mistake.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 57c554f8a6e06894f601d977d18b3017d2a60f40
master date: 2018-08-15 14:15:30 +0200

6 years ago  xl.conf: Add global affinity masks
Wei Liu [Tue, 7 Aug 2018 14:35:34 +0000 (15:35 +0100)]
xl.conf: Add global affinity masks

XSA-273 involves one hyperthread being able to use Spectre-like
techniques to "spy" on another thread.  The details are somewhat
complicated, but the upshot is that after all Xen-based mitigations
have been applied:

* PV guests cannot spy on sibling threads
* HVM guests can spy on sibling threads

(NB that for purposes of this vulnerability, PVH and HVM guests are
identical.  Whenever this comment refers to 'HVM', this includes PVH.)

There are many possible mitigations to this, including disabling
hyperthreading entirely.  But another solution would be:

* Specify some cores as PV-only, others as PV or HVM
* Allow HVM guests to only run on thread 0 of the "HVM-or-PV" cores
* Allow PV guests to run on the above cores, as well as any thread of the PV-only cores.

For example, suppose you had 16 threads across 8 cores (0-7).  You
could specify 0-3 as PV-only, and 4-7 as HVM-or-PV.  Then you'd set
the affinity of the HVM guests as follows (binary representation):

0000000010101010

And the affinity of the PV guests as follows:

1111111110101010

In order to make this easy, this patch introduces three "global affinity
masks", placed in xl.conf:

    vm.cpumask
    vm.hvm.cpumask
    vm.pv.cpumask

These are parsed just like the 'cpus' and 'cpus_soft' options in the
per-domain xl configuration files.  The resulting mask is AND-ed with
whatever mask results at the end of the xl configuration file.
`vm.cpumask` would be applied to all guest types, `vm.hvm.cpumask`
would be applied to HVM and PVH guest types, and `vm.pv.cpumask`
would be applied to PV guest types.

The idea would be that to implement the above mask across all your
VMs, you'd simply add the following two lines to the configuration
file:

    vm.hvm.cpumask=8,10,12,14
    vm.pv.cpumask=0-8,10,12,14

See xl.conf manpage for details.

This is part of XSA-273 / CVE-2018-3646.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit aa67b97ed34279c43a43d9ca46727b5746caa92e)

The PVH guest type is not available in the toolstack in this version of Xen.
Change the code and manpage to cope.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>

6 years ago  x86: Make "spec-ctrl=no" a global disable of all mitigations
Jan Beulich [Mon, 13 Aug 2018 11:07:23 +0000 (05:07 -0600)]
x86: Make "spec-ctrl=no" a global disable of all mitigations

In order to have a simple and easy to remember means to suppress all the
more or less recent workarounds for hardware vulnerabilities, force
settings not controlled by "spec-ctrl=" also to their original defaults,
unless they've been forced to specific values already by earlier command
line options.

This is part of XSA-273.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit d8800a82c3840b06b17672eddee4878bbfdacc6d)

6 years ago  x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests
Andrew Cooper [Tue, 29 May 2018 17:44:16 +0000 (18:44 +0100)]
x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests

This mitigation requires up-to-date microcode.  It is enabled by default on
affected hardware where available, and is used for HVM guests.

The default for SMT/Hyperthreading is far more complicated to reason about,
not least because we don't know if the user is going to want to run any HVM
guests to begin with.  If an explicit default isn't given, nag the user to
perform a risk assessment and choose an explicit default, and leave other
configuration to the toolstack.

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 3bd36952dab60290f33d6791070b57920e10754b)

6 years ago  x86/msr: Virtualise MSR_FLUSH_CMD for guests
Andrew Cooper [Fri, 13 Apr 2018 15:34:01 +0000 (15:34 +0000)]
x86/msr: Virtualise MSR_FLUSH_CMD for guests

Guests (outside of the nested virt case, which isn't supported yet) don't need
L1D_FLUSH for their L1TF mitigations, but offering/emulating MSR_FLUSH_CMD is
easy and doesn't pose an issue for Xen.

The MSR is offered to HVM guests only.  PV guests attempting to use it would
trap for emulation, and the L1D cache would fill long before the return to
guest context.  As such, PV guests can't make any use of the L1D_FLUSH
functionality.

This is part of XSA-273 / CVE-2018-3646.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit fd9823faf9df057a69a9a53c2e100691d3f4267c)

6 years ago  x86/spec-ctrl: CPUID/MSR definitions for L1D_FLUSH
Andrew Cooper [Wed, 28 Mar 2018 14:21:39 +0000 (15:21 +0100)]
x86/spec-ctrl: CPUID/MSR definitions for L1D_FLUSH

This is part of XSA-273 / CVE-2018-3646.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 3563fc2b2731a63fd7e8372ab0f5cef205bf8477)
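
The definitions are along these lines (values per Intel's L1TF documentation;
the exact Xen constant spellings are assumed):

    #define MSR_FLUSH_CMD    0x0000010b
    #define FLUSH_CMD_L1D    (_AC(1, ULL) << 0)   /* write 1 to flush the L1D */

    /* Availability is advertised via CPUID.(EAX=7,ECX=0):EDX bit 28. */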

6 years ago  x86/pv: Force a guest into shadow mode when it writes an L1TF-vulnerable PTE
Juergen Gross [Mon, 23 Jul 2018 06:11:40 +0000 (08:11 +0200)]
x86/pv: Force a guest into shadow mode when it writes an L1TF-vulnerable PTE

See the comment in shadow.h for an explanation of L1TF and the safety
consideration of the PTEs.

In the case that CONFIG_SHADOW_PAGING isn't compiled in, crash the domain
instead.  This allows well-behaved PV guests to function, while preventing
L1TF from being exploited.  (Note: PV guest kernels which haven't been updated
with L1TF mitigations will likely be crashed as soon as they try paging a
piece of userspace out to disk.)

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 06e8b622d3f3c0fa5075e91b041c6f45549ad70a)

6 years ago  x86/mm: Plumbing to allow any PTE update to fail with -ERESTART
Andrew Cooper [Mon, 23 Jul 2018 06:11:40 +0000 (08:11 +0200)]
x86/mm: Plumbing to allow any PTE update to fail with -ERESTART

Switching to shadow mode is performed in tasklet context.  To facilitate this,
we schedule the tasklet, then create a hypercall continuation to allow the
switch to take place.

As a consequence, the x86 mm code needs to cope with an L1e operation being
continuable.  do_mmu{,ext}_op() may no longer assert that a continuation
doesn't happen on the final iteration.

To handle the arguments correctly on continuation, compat_update_va_mapping*()
may no longer call into their non-compat counterparts.  Move the compat
functions into mm.c rather than exporting __do_update_va_mapping() and
{get,put}_pg_owner(), and fix an unsigned long/int inconsistency with
compat_update_va_mapping_otherdomain().

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit c612481d1c9232c6abf91b03ec655e92f808805f)

6 years ago  x86/shadow: Infrastructure to force a PV guest into shadow mode
Juergen Gross [Mon, 23 Jul 2018 06:11:40 +0000 (07:11 +0100)]
x86/shadow: Infrastructure to force a PV guest into shadow mode

To mitigate L1TF, we cannot alter an architecturally-legitimate PTE a PV guest
chooses to write, but we can force the PV domain into shadow mode so Xen
controls the PTEs which are reachable by the CPU pagewalk.

Introduce new shadow mode, PG_SH_forced, and a tasklet to perform the
transition.  Later patches will introduce the logic to enable this mode at the
appropriate time.

To simplify vcpu cleanup, make tasklet_kill() idempotent with respect to
tasklet_init(), which involves adding a helper to check for an uninitialised
list head.

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit b76ec3946bf6caca2c3950b857c008bc8db6723f)

6 years ago  x86/spec-ctrl: Introduce an option to control L1TF mitigation for PV guests
Andrew Cooper [Mon, 23 Jul 2018 13:46:10 +0000 (13:46 +0000)]
x86/spec-ctrl: Introduce an option to control L1TF mitigation for PV guests

Shadowing a PV guest is only available when shadow paging is compiled in.
When shadow paging isn't available, guests can be crashed instead as
mitigation from Xen's point of view.

Ideally, dom0 would also be potentially-shadowed-by-default, but dom0 has
never been shadowed before, and there are some stability issues under
investigation.

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 66a4e986819a86ba66ca2fe9d925e62a4fd30114)

6 years ago  x86/spec-ctrl: Calculate safe PTE addresses for L1TF mitigations
Andrew Cooper [Wed, 25 Jul 2018 12:10:19 +0000 (12:10 +0000)]
x86/spec-ctrl: Calculate safe PTE addresses for L1TF mitigations

Safe PTE addresses for L1TF mitigations are ones which are within the L1D
address width (may be wider than reported in CPUID), and above the highest
cacheable RAM/NVDIMM/BAR/etc.

All logic here is best-effort heuristics, which should in practice be fine for
most hardware.  Future work will see about disentangling the SRAT handling
further, as well as having L0 pass this information down to lower levels when
virtualised.

This is part of XSA-273 / CVE-2018-3620.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit b03a57c9383b32181e60add6b6de12b473652aa4)

6 years agotools/oxenstored: Make evaluation order explicit
Christian Lindig [Mon, 13 Aug 2018 16:26:56 +0000 (17:26 +0100)]
tools/oxenstored: Make evaluation order explicit

In Store.path_write(), Path.apply_modify() updates the node_created
reference, and both the return value of apply_modify() and node_created
are returned by path_write().

At least with OCaml 4.06.1 this leads to the value of node_created being
returned *before* it is updated by apply_modify().  This in turn leads
to the quota for a domain not being updated in Store.write().  Hence, a
guest can create an unlimited number of entries in xenstore.

The fix is to make evaluation order explicit.
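
The same hazard exists in C, where the evaluation order of function
arguments is likewise unspecified; a self-contained analogue of the bug
and the fix (illustrative only):

    #include <stdio.h>

    static int node_created;
    static int apply_modify(void) { node_created = 1; return 42; }

    int main(void)
    {
        /*
         * Risky: whether node_created is read before or after
         * apply_modify() runs is unspecified:
         *
         *     printf("%d %d\n", apply_modify(), node_created);
         *
         * Fix: force the order with an explicit sequence point.
         */
        int value = apply_modify();

        printf("%d %d\n", value, node_created);

        return 0;
    }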

This is XSA-272.

Signed-off-by: Christian Lindig <christian.lindig@citrix.com>
Reviewed-by: Rob Hoes <rob.hoes@citrix.com>
(cherry picked from commit 73392c7fd14c59f8c96e0b2eeeb329e4ae9086b6)

6 years agox86/vtx: Fix the checking for unknown/invalid MSR_DEBUGCTL bits
Andrew Cooper [Mon, 18 Jun 2018 08:12:39 +0000 (16:12 +0800)]
x86/vtx: Fix the checking for unknown/invalid MSR_DEBUGCTL bits

The VPMU_MODE_OFF early-exit in vpmu_do_wrmsr() introduced by c/s
11fe998e56 bypasses all reserved bit checking in the general case.  As a
result, a guest can enable BTS when it shouldn't be permitted to, and
lock up the entire host.

With vPMU active (not a security supported configuration, but useful for
debugging), the reserved bit checking is broken, caused by the original
BTS changeset 1a8aa75ed.

From a correctness standpoint, it is not possible to have two different
pieces of code responsible for different parts of value checking, if
there isn't an accumulation of bits which have been checked.  A
practical upshot of this is that a guest can set any value it
wishes (usually resulting in a vmentry failure for bad guest state).

Therefore, fix this by implementing all the reserved bit checking in the
main MSR_DEBUGCTL block, and removing all handling of DEBUGCTL from the
vPMU MSR logic.
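
The shape of the check being described (a sketch; the real set of
valid bits is model-dependent and wider than the two shown):

    #include <stdbool.h>
    #include <stdint.h>

    #define DEBUGCTL_LBR   (1ULL << 0)
    #define DEBUGCTL_BTF   (1ULL << 1)
    #define DEBUGCTL_KNOWN (DEBUGCTL_LBR | DEBUGCTL_BTF)

    /* Reject a WRMSR setting any bit outside the known set. */
    static bool debugctl_write_ok(uint64_t val)
    {
        return !(val & ~DEBUGCTL_KNOWN);
    }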

This is XSA-269.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 2a8a8e99feb950504559196521bc9fd63ed3a962)

6 years agoARM: disable grant table v2
Stefano Stabellini [Tue, 14 Aug 2018 10:20:53 +0000 (11:20 +0100)]
ARM: disable grant table v2

It was never expected to work, the implementation is incomplete.

As a side effect, it also prevents guests from triggering a
"BUG_ON(page_get_owner(pg) != d)" in gnttab_unpopulate_status_frames().

This is XSA-268.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 9a5c16a3e75778c8a094ca87784d93b74676f46c)

6 years agocommon/gnttab: Introduce command line feature controls
Andrew Cooper [Tue, 14 Aug 2018 10:20:53 +0000 (11:20 +0100)]
common/gnttab: Introduce command line feature controls

This patch was originally released as part of XSA-226.  It retains the same
command line syntax (as various downstreams are mitigating XSA-226 using this
mechanism) but the defaults have been updated due to the revised XSA-226
patched, after which transitive grants are believed to functioning
properly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit dc96c65ed6d7ffd4c95487373df708d97443cf77)

6 years agoVMX: fix vmx_{find,del}_msr() build
Jan Beulich [Thu, 19 Jul 2018 09:54:45 +0000 (11:54 +0200)]
VMX: fix vmx_{find,del}_msr() build

Older gcc at -O2 (and perhaps higher) does not recognize that apparently
uninitialized variables aren't really uninitialized. Pull out the
assignments used by two of the three case blocks and make them
initializers of the variables, as I think I had suggested during review.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit 97cb0516a322ecdf0032fa9d8aa1525c03d7772f)

6 years agox86/vmx: Support load-only guest MSR list entries
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Support load-only guest MSR list entries

Currently, the VMX_MSR_GUEST type maintains completely symmetric guest load
and save lists, by pointing VM_EXIT_MSR_STORE_ADDR and VM_ENTRY_MSR_LOAD_ADDR
at the same page, and setting VM_EXIT_MSR_STORE_COUNT and
VM_ENTRY_MSR_LOAD_COUNT to the same value.

However, for MSRs which we won't let the guest have direct access to, having
hardware save the current value on VMExit is unnecessary overhead.

To avoid this overhead, we must make the load and save lists asymmetric.  By
making the entry load count greater than the exit store count, we can maintain
two adjacent lists of MSRs, the first of which is saved and restored, and the
second of which is only restored on VMEntry.

For simplicity:
 * Both adjacent lists are still sorted by MSR index.
 * It is undefined behaviour to insert the same MSR into both lists.
 * The total size of both lists is still limited at 256 entries (one 4k page).

Split the current msr_count field into msr_{load,save}_count, and introduce a
new VMX_MSR_GUEST_LOADONLY type, and update vmx_{add,find}_msr() to calculate
which sublist to search, based on type.  VMX_MSR_HOST has no logical sublist,
whereas VMX_MSR_GUEST has a sublist between 0 and the save count, while
VMX_MSR_GUEST_LOADONLY has a sublist between the save count and the load
count.

One subtle point is that inserting an MSR into the load-save list involves
moving the entire load-only list, and updating both counts.
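
A sketch of the sublist selection (the type names come from the text
above; everything else is illustrative):

    #include <stdint.h>

    enum vmx_msr_list_type {
        VMX_MSR_HOST, VMX_MSR_GUEST, VMX_MSR_GUEST_LOADONLY,
    };

    /* [0, save_count) is saved and restored; [save_count, load_count)
     * is only restored on VMEntry. */
    static void msr_sublist(enum vmx_msr_list_type type,
                            uint32_t save_count, uint32_t load_count,
                            uint32_t *start, uint32_t *end)
    {
        switch ( type )
        {
        case VMX_MSR_GUEST:
            *start = 0;          *end = save_count; break;
        case VMX_MSR_GUEST_LOADONLY:
            *start = save_count; *end = load_count; break;
        default:
            *start = *end = 0;   break;
        }
    }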

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit 1ac46b55632626aeb935726e1b0a71605ef6763a)

6 years agox86/vmx: Pass an MSR value into vmx_msr_add()
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Pass an MSR value into vmx_msr_add()

The main purpose of this change is to allow us to set a specific MSR value,
without needing to know whether there is already a load/save list slot for it.

Previously, callers wanting this property needed to call both vmx_add_*_msr()
and vmx_write_*_msr() to cover both cases, and there are no callers which want
the old behaviour of being a no-op if an entry already existed for the MSR.

As a result of this API improvement, the default value for guest MSRs need not
be 0, and the default for host MSRs need not be passed via a hardware
register.
In practice, this cleans up the VPMU allocation logic, and avoids an MSR read
as part of vcpu construction.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit ee7689b94ac7094b975ab4a023cfeae209da0a36)

6 years agox86/vmx: Improvements to LBR MSR handling
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Improvements to LBR MSR handling

The main purpose of this patch is to only ever insert the LBR MSRs into the
guest load/save list once, as a future patch wants to change the behaviour of
vmx_add_guest_msr().

The repeated processing of lbr_info and the guest's MSR load/save list is
redundant, and a guest using LBR itself will have to re-enable
MSR_DEBUGCTL.LBR in its #DB handler, meaning that Xen will repeat this
redundant processing every time the guest gets a debug exception.

Rename lbr_fixup_enabled to lbr_flags to be a little more generic, and use one
bit to indicate that the MSRs have been inserted into the load/save list.
Shorten the existing FIXUP* identifiers to reduce code volume.
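
The insert-once guard might look roughly like this (the bit name is
invented for illustration):

    #include <stdint.h>

    #define LBR_MSRS_INSERTED (1u << 0)

    /* Add the LBR MSRs to the load/save list on first use only. */
    static void lbr_insert_once(uint32_t *lbr_flags)
    {
        if ( *lbr_flags & LBR_MSRS_INSERTED )
            return;

        /* ... insert the LBR MSRs into the guest load/save list ... */

        *lbr_flags |= LBR_MSRS_INSERTED;
    }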

Furthermore, handing the guest #MC on an error isn't a legitimate action.  Two
of the three failure cases are definitely hypervisor bugs, and the third is a
boundary case which shouldn't occur in practice.  The guest also won't execute
correctly, so handle errors by cleanly crashing the guest.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit be73a842e642772d7372004c9c105de35b771020)

6 years agox86/vmx: Support remote access to the MSR lists
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Support remote access to the MSR lists

At the moment, all modifications of the MSR lists are in current context.
However, future changes may need to put MSR_EFER into the lists from domctl
hypercall context.

Plumb a struct vcpu parameter down through the infrastructure, and use
vmx_vmcs_{enter,exit}() for safe access to the VMCS in vmx_add_msr().  Use
assertions to ensure that access is either in current context, or while the
vcpu is paused.

Note these expectations beside the fields in arch_vmx_struct, and reorder the
fields to avoid unnecessary padding.
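
The expectation reduces to something like this (a sketch with
simplified types; the real assertions are phrased in terms of
'current' and the vcpu pause machinery):

    #include <assert.h>
    #include <stdbool.h>

    struct vcpu { bool is_current, is_paused; };

    /* List access is only safe in current context, or while the vcpu
     * is guaranteed not to be running. */
    static void assert_msr_list_access_ok(const struct vcpu *v)
    {
        assert(v->is_current || v->is_paused);
    }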

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 80599f0b770199116aa753bfdfac9bfe2e8ea86a)

6 years agox86/vmx: Factor locate_msr_entry() out of vmx_find_msr() and vmx_add_msr()
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Factor locate_msr_entry() out of vmx_find_msr() and vmx_add_msr()

Instead of having multiple algorithms searching the MSR lists, implement a
single one.  It has the semantics required by vmx_add_msr(), to identify the
position in which an MSR should live, if it isn't already present.

There will be a marginal improvement for vmx_find_msr() by avoiding the
function pointer calls to vmx_msr_entry_key_cmp(), and a major improvement for
vmx_add_msr() by using a binary search instead of a linear search.
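
The lower-bound search being described looks roughly like this (a
sketch; the entry layout follows the architectural MSR-area format):

    #include <stdint.h>

    struct vmx_msr_entry { uint32_t index, mbz; uint64_t data; };

    /* Return the entry for 'msr', or the position where it should be
     * inserted to keep [start, end) sorted by MSR index. */
    static struct vmx_msr_entry *locate_msr_entry(
        struct vmx_msr_entry *start, struct vmx_msr_entry *end,
        uint32_t msr)
    {
        while ( start < end )
        {
            struct vmx_msr_entry *mid = start + (end - start) / 2;

            if ( msr <= mid->index )
                end = mid;
            else
                start = mid + 1;
        }

        return start;
    }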

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit 4d94828cf11104256dccea1fa7762f00575dfaa0)

6 years agox86/vmx: Internal cleanup for MSR load/save infrastructure
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: Internal cleanup for MSR load/save infrastructure

 * Use an arch_vmx_struct local variable to reduce later code volume.
 * Use start/total instead of msr_area/msr_count.  This is in preparation for
   more fine-grained handling with later changes.
 * Use ent/end pointers (again for preparation), and to make the vmx_add_msr()
   logic easier to follow.
 * Make the memory allocation block of vmx_add_msr() unlikely, and calculate
   virt_to_maddr() just once.

No practical change to functionality.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit 94fda356fcdcc847662a4c9f6cc63511f25c1247)

6 years agox86/vmx: API improvements for MSR load/save infrastructure
Andrew Cooper [Mon, 7 May 2018 10:57:00 +0000 (11:57 +0100)]
x86/vmx: API improvements for MSR load/save infrastructure

Collect together related infrastructure in vmcs.h, rather than having it
spread out.  Turn vmx_{read,write}_guest_msr() into static inlines, as they
are simple enough.

Replace 'int type' with 'enum vmx_msr_list_type', and use switch statements
internally.  Later changes are going to introduce a new type.

Rename the type identifiers for consistency with the other VMX_MSR_*
constants.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit f54b63e8617ada823be43d60467a43c8224b7909)

6 years agox86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()
Andrew Cooper [Mon, 28 May 2018 14:02:34 +0000 (15:02 +0100)]
x86/vmx: Defer vmx_vmcs_exit() as long as possible in construct_vmcs()

paging_update_paging_modes() and vmx_vlapic_msr_changed() both operate on the
VMCS being constructed.  Avoid dropping and re-acquiring the reference
multiple times.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit f30e3cf34042846e391e3f8361fc6a76d181a7ee)

6 years agox86/vmx: Fix handling of MSR_DEBUGCTL on VMExit
Andrew Cooper [Thu, 24 May 2018 17:20:09 +0000 (17:20 +0000)]
x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit

Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
updates a host MSR load list entry with the current hardware value of
MSR_DEBUGCTL.

On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when the
guest writes to MSR_DEBUGCTL, the current value in hardware (0) is fed back
into the guest load list.  As a practical result, `ler` debugging gets lost
on any PCPU which has ever scheduled an HVM vcpu, and in the common case,
when `ler` debugging isn't active, guest actions result in an unnecessary
load list entry repeating the MSR_DEBUGCTL reset.

Restoration of Xen's debugging setting needs to happen from the very first
vmexit.  Due to the automatic reset, Xen need take no action in the general
case, and only needs to load a value when debugging is active.

This could be fixed by using a host MSR load list entry set up during
construct_vmcs().  However, a more efficient option is to use an alternative
block in the VMExit path, keyed on whether hypervisor debugging has been
enabled.
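
In outline, the exit path then only needs something like the following
(a pseudo-fragment, not the actual alternative-patched code;
xen_debugctl is an invented name):

    /* Hardware has already reset MSR_DEBUGCTL to 0 on VMExit. */
    if ( unlikely(xen_debugctl) )            /* e.g. `ler` active */
        wrmsrl(MSR_IA32_DEBUGCTLMSR, xen_debugctl);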

In order to set this up, drop the per cpu ler_msr variable (as there is no
point having it per cpu when it will be the same everywhere), and use a single
read_mostly variable instead.  Split calc_ler_msr() out of percpu_traps_init()
for clarity.

Finally, clean up do_debug().  Reinstate LBR early to help catch cascade
errors, which allows for the removal of the out label.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
(cherry picked from commit 730dc8d2c9e1b6402e66973cf99a7c56bc78be4c)

6 years agox86/spec-ctrl: Yet more fixes for xpti= parsing
Andrew Cooper [Thu, 9 Aug 2018 16:22:17 +0000 (17:22 +0100)]
x86/spec-ctrl: Yet more fixes for xpti= parsing

As it currently stands, 'xpti=dom0' is indistinguishable from the default
value, which means it will be overridden by ARCH_CAPABILITIES_RDCL_NO on fixed
hardware.

Switch opt_xpti to use -1 as a default like all our other related options, and
clobber it as soon as we have a string to parse.

In addition, 'xpti' alone should be interpreted in its positive boolean form,
rather than resulting in a parse error.

  (XEN) parameter "xpti" has invalid value "", rc=-22!

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 2a3b34ec47817048ab59586855cf0709fc77487e)

6 years agox86/spec-ctrl: Fix the parsing of xpti= on fixed Intel hardware
Andrew Cooper [Mon, 30 Jul 2018 09:57:24 +0000 (11:57 +0200)]
x86/spec-ctrl: Fix the parsing of xpti= on fixed Intel hardware

The calls to xpti_init_default() in parse_xpti() are buggy.  The CPUID data
hasn't been fetched that early, and boot_cpu_has(X86_FEATURE_ARCH_CAPS) will
always evaluate to false.

As a result, the default case won't disable XPTI on Intel hardware which
advertises ARCH_CAPABILITIES_RDCL_NO.

Simplify parse_xpti() to solely the setting of opt_xpti according to the
passed string, and have init_speculation_mitigations() call
xpti_init_default() if appropriate.  Drop the force parameter, and pass caps
instead, to avoid redundant re-reading of MSR_ARCH_CAPS.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: be5e2ff6f54e0245331ed360b8786760f82fd673
master date: 2018-07-24 11:25:54 +0100

6 years agox86/hvm: Disallow unknown MSR_EFER bits
Andrew Cooper [Mon, 30 Jul 2018 09:56:57 +0000 (11:56 +0200)]
x86/hvm: Disallow unknown MSR_EFER bits

It turns out that nothing ever prevented HVM guests from trying to set unknown
EFER bits.  Generally, this results in a vmentry failure.

For Intel hardware, all implemented bits are covered by the checks.

For AMD hardware, the only EFER bit which isn't covered by the checks is TCE
(which AFAICT is specific to AMD Fam15/16 hardware).  We never advertise TCE
in CPUID, but it isn't a security problem to have TCE unexpectedly enabled in
guest context.

Disallow the setting of bits outside of the EFER_KNOWN_MASK, which prevents
any vmentry failures for guests, yielding #GP instead.
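
The check amounts to a simple mask test (a sketch; only a subset of
the architectural bits is spelled out here):

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_SCE (1ULL << 0)
    #define EFER_LME (1ULL << 8)
    #define EFER_LMA (1ULL << 10)
    #define EFER_NXE (1ULL << 11)
    #define EFER_KNOWN_MASK (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE)

    /* true => the guest's WRMSR should raise #GP. */
    static bool efer_unknown_bits(uint64_t val)
    {
        return val & ~EFER_KNOWN_MASK;
    }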

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: ef0269c6215d642a709866f04ba1a1f9f13f3614
master date: 2018-07-24 11:25:53 +0100

6 years agox86/xstate: Make errors in xstate calculations more obvious by crashing the domain
Andrew Cooper [Mon, 30 Jul 2018 09:56:22 +0000 (11:56 +0200)]
x86/xstate: Make errors in xstate calculations more obvious by crashing the domain

If xcr0_max exceeds xfeature_mask, then something is broken with the CPUID
policy derivation or auditing logic.  If hardware rejects new_bv, then
something is broken with Xen's xstate logic.

In both cases, crash the domain with an obvious error message, to help
highlight the issues.
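
The "fail loudly" pattern in question (a fragment; it uses Xen's
gprintk()/domain_crash() interfaces, with the details illustrative):

    if ( xcr0_max & ~xfeature_mask )
    {
        gprintk(XENLOG_ERR,
                "xcr0_max %016"PRIx64" exceeds hardware mask %016"PRIx64"\n",
                xcr0_max, xfeature_mask);
        domain_crash(d);
        return -EINVAL;
    }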

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d6371ccb93012db4ad6615fe666205b86308cb4e
master date: 2018-07-19 19:57:26 +0100

6 years agox86/xstate: Use a guest's CPUID policy, rather than allowing all features
Andrew Cooper [Mon, 30 Jul 2018 09:55:46 +0000 (11:55 +0200)]
x86/xstate: Use a guest's CPUID policy, rather than allowing all features

It turns out that Xen has never enforced that a domain remain within the
xstate features advertised in CPUID.

The check of new_bv against xfeature_mask ensures that a domain stays within
the set of features that Xen has enabled in hardware (and therefore isn't a
security problem), but this does mean that attempts to level a guest for
migration safety might not be effective if the guest ignores CPUID.

Check the CPUID policy in validate_xstate() (for incoming migration) and in
handle_xsetbv() (for guest XSETBV instructions).  This subsumes the PKRU check
for PV guests in handle_xsetbv() (and also demonstrates that I should have
spotted this problem while reviewing c/s fbf9971241f).

For migration, this is correct despite the current (mis)ordering of data
because d->arch.cpuid is the applicable max policy.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 361b835fa00d9f45167c50a60e054ccf22c065d7
master date: 2018-07-19 19:57:26 +0100

6 years agox86/vmx: Don't clobber %dr6 while debugging state is lazy
Andrew Cooper [Mon, 30 Jul 2018 09:55:08 +0000 (11:55 +0200)]
x86/vmx: Don't clobber %dr6 while debugging state is lazy

c/s 4f36452b63 introduced a write to %dr6 in the #DB intercept case, but the
guest's debug registers may be lazy at this point, in which case the guest's
later attempt to read %dr6 will discard this value and use the older, stale
value.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 3cdac2805692c7accde2f405d81cc0be799aee48
master date: 2018-07-19 14:06:48 +0100

6 years agox86: command line option to avoid use of secondary hyper-threads
Jan Beulich [Mon, 30 Jul 2018 09:54:37 +0000 (11:54 +0200)]
x86: command line option to avoid use of secondary hyper-threads

Shared resources (L1 cache and TLB in particular) present a risk of
information leak via side channels. Provide a means to avoid use of
hyperthreads in such cases.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d8f974f1a646c0200b97ebcabb808324b288fadb
master date: 2018-07-19 13:43:33 +0100

6 years agox86: possibly bring up all CPUs even if not all are supposed to be used
Jan Beulich [Mon, 30 Jul 2018 09:53:59 +0000 (11:53 +0200)]
x86: possibly bring up all CPUs even if not all are supposed to be used

Reportedly, Intel CPUs shut down when they can't broadcast #MC to all
targeted cores/threads because some have CR4.MCE clear. Therefore
we want to keep CR4.MCE enabled when offlining a CPU, and we need to
bring up all CPUs in order to be able to set CR4.MCE in the first place.

The use of clear_in_cr4() in cpu_mcheck_disable() was ill advised
anyway, and to avoid future similar mistakes I'm removing clear_in_cr4()
altogether right here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: 8797d20a6ec2dd75195585a107ce345c51c0a59a
master date: 2018-07-19 13:43:33 +0100

6 years agox86: distinguish CPU offlining from CPU removal
Jan Beulich [Mon, 30 Jul 2018 09:53:21 +0000 (11:53 +0200)]
x86: distinguish CPU offlining from CPU removal

In order to be able to service #MC on offlined CPUs, the GDT, IDT,
stack, and per-CPU data (which includes the TSS) need to be kept
allocated. They should only be freed upon CPU removal (which we
currently don't support, so some code is becoming effectively dead for
the moment).

Note that for now park_offline_cpus doesn't get set to true anywhere -
this is going to be the subject of a subsequent patch.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 2e6c8f182c9c50129b1c7a620242861e6ad6a9fb
master date: 2018-07-19 13:43:33 +0100

6 years agox86/AMD: distinguish compute units from hyper-threads
Jan Beulich [Mon, 30 Jul 2018 09:52:53 +0000 (11:52 +0200)]
x86/AMD: distinguish compute units from hyper-threads

Fam17 replaces CUs by HTs, which we should reflect accordingly, even if
the difference is not very big. The most relevant change (requiring some
code restructuring) is that the topoext feature no longer means there is
a valid CU ID.

Take the opportunity and convert wrongly plain int variables in
set_cpu_sibling_map() to unsigned int.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Brian Woods <brian.woods@amd.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 9429b07a0af7f92a5f25e4068e11db881e157495
master date: 2018-07-19 09:42:42 +0200

6 years agocpupools: fix state when downing a CPU failed
Jan Beulich [Mon, 30 Jul 2018 09:52:25 +0000 (11:52 +0200)]
cpupools: fix state when downing a CPU failed

While I've run into the issue with further patches in place which no
longer guarantee the per-CPU area to start out as all zeros, the
CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
of schedule_cpu_switch() will trigger the "c != old_pool" assertion
there.

Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
should not happen before cpu_disable_scheduler()). Clearing it in
CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
piece of code twice. Since the field's value shouldn't matter while the
CPU is offline, simply clear it (implicitly) for CPU_ONLINE and
CPU_DOWN_FAILED, but only for other than the suspend/resume case (which
gets specially handled in cpupool_cpu_remove()).

By adjusting the conditional in cpupool_cpu_add() CPU_DOWN_FAILED
handling in the suspend case should now also be handled better.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
master commit: cb1ae9a27819cea0c5008773c68a7be6f37eb0e5
master date: 2018-07-19 09:41:55 +0200

6 years agox86/svm: Fixes and cleanup to svm_inject_event()
Andrew Cooper [Mon, 30 Jul 2018 09:51:50 +0000 (11:51 +0200)]
x86/svm: Fixes and cleanup to svm_inject_event()

 * State adjustments (and debug tracing) for #DB/#BP/#PF should not be done
   for `int $n` instructions.  Updates to %cr2 occur even if the exception
   combines to #DF.
 * Don't opencode DR_STEP when updating %dr6.
 * Simplify the logic for calling svm_emul_swint_injection() as in the common
   case, every condition needs checking.
 * Fix comments which have become stale as code has moved between components.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
master commit: 8dab867c81ede455009028a9a88edc4ff3b9da88
master date: 2018-07-17 10:12:40 +0100

6 years agoallow cpu_down() to be called earlier
Jan Beulich [Mon, 30 Jul 2018 09:51:19 +0000 (11:51 +0200)]
allow cpu_down() to be called earlier

The function's use of the stop-machine logic has so far prevented its
use ahead of the processing of the "ordinary" initcalls. Since at this
early time we're in a controlled environment anyway, there's no need for
such a heavy tool. Additionally this ought to have less of a performance
impact especially on large systems, compared to the alternative of
making stop-machine functionality available earlier.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 5894c0a2da66243a89088d309c7e1ea212ab28d6
master date: 2018-07-16 15:15:12 +0200

6 years agoxen: oprofile/nmi_int.c: Drop unwanted sexual reference
Ian Jackson [Mon, 30 Jul 2018 09:50:50 +0000 (11:50 +0200)]
xen: oprofile/nmi_int.c: Drop unwanted sexual reference

This is not really very nice.

This line doesn't have much value in itself.  The rest of this comment
block is pretty clear what it wants to convey.  So delete it.

(While we are here, adopt the CODING_STYLE-mandated formatting.)

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Lars Kurth <lars.kurth.xen@gmail.com>
Acked-by: George Dunlap <dunlapg@umich.edu>
Acked-by: Jan Beulich <JBeulich@suse.com>
master commit: 41cb2db62627a7438d938aae487550c3f4acb1da
master date: 2018-07-12 16:38:30 +0100

6 years agox86/spec-ctrl: command line handling adjustments
Jan Beulich [Mon, 30 Jul 2018 09:50:21 +0000 (11:50 +0200)]
x86/spec-ctrl: command line handling adjustments

For one, "no-xen" should not imply "no-eager-fpu", as "eager FPU" mode
exists to guard guests, not Xen itself, which is also how print_details()
presents it.

And then opt_ssbd, despite being off by default, should also be cleared
by the "no" and "no-xen" sub-options.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: ac3f9a72141a48d40fabfff561d5a7dc0e1b810d
master date: 2018-07-10 12:22:31 +0200

6 years agox86: correctly set nonlazy_xstate_used when loading full state
Jan Beulich [Mon, 30 Jul 2018 09:49:41 +0000 (11:49 +0200)]
x86: correctly set nonlazy_xstate_used when loading full state

In this case, just like xcr0_accum, nonlazy_xstate_used should always be
set to the intended new value, rather than possibly leaving the flag set
from a prior state load.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: f46bf0e101ca63118b9db2616e8f51e972d7f563
master date: 2018-07-09 10:51:02 +0200

6 years agoxen: Port the array_index_nospec() infrastructure from Linux
Andrew Cooper [Mon, 30 Jul 2018 09:48:25 +0000 (11:48 +0200)]
xen: Port the array_index_nospec() infrastructure from Linux

This is as the infrastructure appeared in Linux 4.17, adapted slightly for
Xen.
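
Typical usage, mirroring the Linux original (an illustrative
fragment):

    /* Clamp an attacker-influenced index before the dependent load,
     * so the array access cannot be speculated out of bounds. */
    if ( idx >= ARRAY_SIZE(table) )
        return -EINVAL;

    idx = array_index_nospec(idx, ARRAY_SIZE(table));
    val = table[idx];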

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 2ddfae51d8b1d7b8cd33a4f6ad4d16d27cb869ae
master date: 2018-07-06 16:49:57 +0100

6 years agocmdline: fix parse_boolean() for NULL incoming end pointer
Jan Beulich [Mon, 30 Jul 2018 09:47:06 +0000 (11:47 +0200)]
cmdline: fix parse_boolean() for NULL incoming end pointer

Use the calculated lengths instead of pointers, as 'e' being NULL will
otherwise cause undue parsing failures.

Reported-by: Karl Johnson <karljohnson.it@gmail.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

6 years agotools: prepend to PKG_CONFIG_PATH when configuring qemu
Stewart Hildebrand [Thu, 26 Apr 2018 17:41:08 +0000 (17:41 +0000)]
tools: prepend to PKG_CONFIG_PATH when configuring qemu

A user may choose to set his/her own PKG_CONFIG_PATH, which is useful in the
case of cross-compiling.  We don't want to completely override the
PKG_CONFIG_PATH, just add to it.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Release-acked-by: Juergen Gross <jgross@suse.com>
(cherry picked from commit 08641a9e8870d3b174d95aaa55ecba43387563b5)
(cherry picked from commit 8dd2cb99f7bf3992282e0b1e4732eaa4639954c2)

6 years agox86/EFI: further correct FPU state handling around runtime calls
Jan Beulich [Wed, 4 Jul 2018 10:37:36 +0000 (12:37 +0200)]
x86/EFI: further correct FPU state handling around runtime calls

We must not leave a vCPU with CR0.TS clear when it is not in fully eager
mode and has not touched non-lazy state. Instead of adding a 3rd
invocation of stts() to vcpu_restore_fpu_eager(), consolidate all of
them into a single one done at the end of the function.

Rename the function at the same time to better reflect its purpose, as
the patch touches all of its occurrences anyway.

The new function parameter is not really well named, but
"need_stts_if_not_fully_eager" seemed excessive to me.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
master commit: 23839a0fa0bbe78c174cd2bb49083e153f0f99df
master date: 2018-06-26 15:23:08 +0200

6 years agox86/HVM: attempts to emulate FPU insns need to set fpu_initialised
Jan Beulich [Wed, 4 Jul 2018 10:36:54 +0000 (12:36 +0200)]
x86/HVM: attempts to emulate FPU insns need to set fpu_initialised

My original way of thinking here was that this would be set anyway at
the point state gets reloaded after the adjustments hvmemul_put_fpu()
does, but the flag should already be set before that - after all the
guest may never again touch the FPU before e.g. getting migrated/saved.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Paul Durrant <paul.durrant@citrix.com>
master commit: 3310e3cd648f3713c824790bd71d8ec405a09d05
master date: 2018-06-26 08:41:08 +0200

6 years agox86/EFI: fix FPU state handling around runtime calls
Jan Beulich [Wed, 4 Jul 2018 10:36:25 +0000 (12:36 +0200)]
x86/EFI: fix FPU state handling around runtime calls

There are two issues.  First, the nonlazy xstates were never restored
after returning from the runtime call.

Second, with the fully_eager_fpu mitigation for XSA-267 / LazyFPU, the
unilateral stts() is no longer correct, and hits an assertion later when
a lazy state restore tries to occur for a fully eager vcpu.

Fix both of these issues by calling vcpu_restore_fpu_eager().  As EFI
runtime services can be used in the idle context, the idle assertion
needs to move to after the fully_eager_fpu check.

Introduce a "curr" local variable and replace other uses of "current"
at the same time.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Juergen Gross <jgross@suse.com>
master commit: 437211cb696515ee5bd5dae0ab72866c9f382a33
master date: 2018-06-21 11:35:46 +0200

6 years agox86/VT-x: Fix printing of EFER in vmcs_dump_vcpu()
Andrew Cooper [Wed, 4 Jul 2018 10:35:54 +0000 (12:35 +0200)]
x86/VT-x: Fix printing of EFER in vmcs_dump_vcpu()

This is essentially a "take 2" of c/s 82540b66ce "x86/VT-x: Fix determination
of EFER.LMA in vmcs_dump_vcpu()" because in hindsight, that change was more
problematic than useful.

The original reason was to fix the logic for determining when not to print the
PDPTE pointers.  However, mutating the efer variable (particularly LME and
LMA) before printing it interferes with diagnosing vmentry failures.

Instead of modifying efer, change the PDPTE conditional to use
VM_ENTRY_IA32E_MODE.
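
The resulting shape of the conditional (a fragment, abbreviated for
illustration):

    /* PDPTEs are only relevant outside IA-32e mode. */
    if ( !(vmentry_ctl & VM_ENTRY_IA32E_MODE) )
    {
        /* ... dump the four PDPTE registers ... */
    }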

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 35fcb982ea16c40619fee8bba4789a94d824521e
master date: 2018-06-05 11:55:51 +0100

6 years agox86/traps: Fix error handling of the pv %dr7 shadow state
Andrew Cooper [Wed, 4 Jul 2018 10:35:16 +0000 (12:35 +0200)]
x86/traps: Fix error handling of the pv %dr7 shadow state

c/s "x86/pv: Introduce and use x86emul_write_dr()" fixed a bug with IO shadow
handling, in that it remained stale and visible until %dr7.L/G got set again.

However, it neglected the -EPERM return in between these two hunks,
introducing a different bug in which a write to %dr7 which tries to set IO
breakpoints without %cr4.DE being set clobbers the IO state, rather than
leaving it alone.

Instead, move the zeroing slightly later, which guarantees that the shadow
gets written exactly once, on a successful update to %dr7.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 237c31b5a1d5aa88cdb59b8c31b1b62eb13e82d1
master date: 2018-06-04 11:05:45 +0100

6 years agox86/CPUID: don't override tool stack decision to hide STIBP
Jan Beulich [Wed, 4 Jul 2018 10:34:36 +0000 (12:34 +0200)]
x86/CPUID: don't override tool stack decision to hide STIBP

Other than in the feature sets, where we indeed want to offer the
feature even if not enumerated on hardware, we shouldn't dictate the
feature being available if tool stack or host admin have decided to not
expose it (for whatever [questionable?] reason). That feature set side
override is sufficient to achieve the intended guest side safety
property (in offering - by default - STIBP independent of actual
availability in hardware).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 06f542f8f2e446c01bd0edab51e9450af7f6e05b
master date: 2018-05-29 12:39:24 +0200