Wei Liu [Wed, 28 Nov 2018 17:43:33 +0000 (17:43 +0000)]
tools/firmware: update OVMF Makefile, when necessary
[ This combines two commits from master aka staging-4.12: ]
OVMF has become dependent on OpenSSL, which is included as a
submodule. Initialise submodules before building.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
(cherry picked from commit b16281870e06f5f526029a4e69634a16dc38e8e4)
tools: only call git when necessary in OVMF Makefile
Users may choose to export a snapshot of OVMF and build it
with the xen.git-supplied ovmf-makefile. In that case we don't
need to call `git submodule`.
Andrew Cooper [Mon, 18 Mar 2019 16:09:08 +0000 (17:09 +0100)]
x86/pv: Fix construction of 32bit dom0's
dom0_construct_pv() has logic to transition dom0 into a compat domain when
booting an ELF32 image.
One missing aspect is the CPUID policy recalculation, meaning that a
32bit dom0 sees a 64bit policy, which differs in particular by the Long Mode
feature flag. Another missing item is the x87_fip_width initialisation.
Update dom0_construct_pv() to use switch_compat(), rather than retaining the
open-coding. Position the call to switch_compat() such that the compat32 local
variable can disappear entirely.
The 32bit monitor table is now created by setup_compat_l4(), avoiding the need
for manual creation later.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 356f437171c5bb90701ac9dd7ba4dbbd05988e38
master date: 2019-03-15 14:59:27 +0000
Andrew Cooper [Mon, 18 Mar 2019 16:08:25 +0000 (17:08 +0100)]
x86/tsx: Implement controls for RTM force-abort mode
The CPUID bit and MSR are deliberately not exposed to guests, because they
won't exist on newer processors. As vPMU isn't security supported, the
misbehaviour of PMC3 isn't expected to impact production deployments.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 6be613f29b4205349275d24367bd4c82fb2960dd
master date: 2019-03-12 17:05:21 +0000
Andrew Cooper [Mon, 18 Mar 2019 16:07:45 +0000 (17:07 +0100)]
x86/vtd: Don't include control register state in the table pointers
iremap_maddr and qinval_maddr point to the base of a block of contiguous RAM,
allocated by the driver, holding the Interrupt Remapping table, and the Queued
Invalidation ring.
Despite their names, they are actually the values of the hardware registers,
including control metadata in the lower 12 bits. While uses of these fields
do appear to correctly shift out the metadata, this is very subtle behaviour
and confusing to follow.
Nothing uses the metadata, so make the fields actually point at the base of
the relevant tables.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: a9a05aeee10a5a3763a41305a9f38112dd1fcc82
master date: 2019-03-12 13:57:13 +0000
Jan Beulich [Mon, 18 Mar 2019 16:07:11 +0000 (17:07 +0100)]
x86/HVM: don't crash guest in hvmemul_find_mmio_cache()
Commit 35a61c05ea ("x86emul: adjust handling of AVX2 gathers") builds
upon the fact that the domain will actually survive running out of MMIO
result buffer space. Drop the domain_crash() invocation. Also delay
incrementing of the usage counter, such that the function can't possibly
use/return an out-of-bounds slot/pointer in case execution subsequently
makes it into the function again without a prior reset of state.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
master commit: a43c1dec246bdee484e6a3de001cc6850a107abe
master date: 2019-03-12 14:39:46 +0100
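[ Editor's sketch of the two changes described above - no crash on an
  exhausted cache, and the counter bumped only after the bounds check.
  Names and layout are illustrative, not Xen's actual emulator internals: ]

    #define NR_MMIO_CACHE 4

    struct mmio_cache { unsigned long gla; /* cached result data */ };

    struct hvm_emul_state {
        unsigned int mmio_cache_count;
        struct mmio_cache mmio_cache[NR_MMIO_CACHE];
    };

    static struct mmio_cache *find_mmio_cache(struct hvm_emul_state *s,
                                              unsigned long gla)
    {
        unsigned int i;

        for ( i = 0; i < s->mmio_cache_count; i++ )
            if ( s->mmio_cache[i].gla == gla )
                return &s->mmio_cache[i];

        if ( i == NR_MMIO_CACHE )
            return NULL; /* out of buffer space: caller copes, no crash */

        /*
         * Bump the usage counter only after the bounds check, so a
         * re-entry without a prior state reset can't be handed an
         * out-of-bounds slot.
         */
        s->mmio_cache_count++;
        s->mmio_cache[i].gla = gla;
        return &s->mmio_cache[i];
    }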
Igor Druzhinin [Mon, 18 Mar 2019 16:06:37 +0000 (17:06 +0100)]
iommu: leave IOMMU enabled by default during kexec crash transition
It's unsafe to disable the IOMMU on a live system, which is the case
if we're crashing, since remapping hardware doesn't usually know what
to do with ongoing bus transactions and frequently raises NMI/MCE/SMI,
etc. (depending on the firmware configuration) to signal these abnormalities.
This, in turn, doesn't play well with the kexec transition process, as there is
no handling available at the moment for this kind of event, resulting
in failures to enter the kernel.
Modern Linux kernels are taught to copy all the necessary DMAR/IR tables
across kexec from the previous kernel (Xen in our case) - so it's
currently normal to keep the IOMMU enabled. It might require minor changes to
the kdump command line to enable the IOMMU drivers (e.g. intel_iommu=on /
intremap=on), but recent kernels don't require any additional changes for
the transition to be transparent.
A fallback option is still left for compatibility with ancient crash
kernels which didn't like to have IOMMU active under their feet on boot.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 12c36f577d454996c882ecdc5da8113ca2613646
master date: 2019-03-12 14:38:12 +0100
Jan Beulich [Mon, 18 Mar 2019 16:05:45 +0000 (17:05 +0100)]
x86/cpuid: add missing PCLMULQDQ dependency
Since we can't seem to be able to settle our discussion for the wider
adjustment previously posted, let's at least add the missing dependency
for 4.12. I'm not convinced though that attaching it to SSE is correct.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: eeb31ee522c7bb8541eb4c037be2c42bfcf0a3c3
master date: 2019-03-05 18:04:23 +0100
Jan Beulich [Mon, 18 Mar 2019 16:05:07 +0000 (17:05 +0100)]
x86/mm: fix #GP(0) in switch_cr3_cr4()
With "pcid=no-xpti" and opposite XPTI settings in two 64-bit PV domains
(achievable with one of "xpti=no-dom0" or "xpti=no-domu"), switching
from a PCID-disabled to a PCID-enabled 64-bit PV domain fails to set
CR4.PCIDE in time, as CR4.PGE would not be set in either (see
pv_fixup_guest_cr4(), in particular as used by write_ptbase()), and
hence the early CR4 write would be skipped.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: fdc2056767ba74346dfd8bbe868bb22521ba1418
master date: 2019-03-05 17:02:36 +0100
Igor Druzhinin [Mon, 18 Mar 2019 16:04:30 +0000 (17:04 +0100)]
x86/nmi: correctly check MSB of P6 performance counter MSR in watchdog
The logic currently tries to work out whether a recent overflow (which would
indicate that the NMI comes from the watchdog) happened, by checking the MSB
of the performance counter MSR, which is initially sign-extended from the
negative value that we program it to. A possibly incorrect assumption here is
that the MSB is always bit 32, while on modern hardware it's usually bit 47 and
the actual bit width is reported through CPUID. Checking bit 32 for
overflows is usually fine, since we never program the counter to anything
exceeding 32 bits and the NMI is handled shortly after the overflow occurs.
A problematic scenario that we saw occurs on systems where SMIs taking
significant time are possible. In that case, NMI handling is deferred to
the point firmware exits SMI which might take enough time for the counter
to go through bit 32 and set it to 1 again. So the logic described above
will misread it and report an unknown NMI erroneously.
Fortunately, we can use the actual MSB, which is usually higher than the
currently hardcoded 32, and treat this case correctly at least on modern
hardware.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 0452d02b6e7849537914dd30cbfc8eb27cdad2ce
master date: 2019-02-28 13:44:40 +0000
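[ Editor's sketch of a bit-width-aware overflow check; CPUID leaf 0xA
  really does report the general counter width in EAX bits 23:16, while
  the helper names below are illustrative: ]

    #include <stdint.h>
    #include <cpuid.h> /* GCC/clang __get_cpuid_count() */

    static unsigned int pmc_bit_width(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if ( !__get_cpuid_count(0xa, 0, &eax, &ebx, &ecx, &edx) )
            return 0;
        return (eax >> 16) & 0xff; /* EAX[23:16]: counter bit width */
    }

    /*
     * The counter is programmed to a sign-extended negative value and
     * counts up, so its true MSB reads 1 until an overflow wraps it to
     * 0. Checking the real MSB (often bit 47) instead of a hardcoded
     * bit 32 keeps a long SMI from letting the count march back up
     * through bit 32 and masquerade as "no overflow".
     */
    static int p6_counter_overflowed(uint64_t ctr)
    {
        unsigned int width = pmc_bit_width();
        unsigned int msb = width ? width - 1 : 32; /* legacy fallback */

        return !(ctr & (1ULL << msb));
    }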
A triple fault is fatal to the domain under all circumstances (so will print
at most once), and in practice is always an error condition rather than a
reboot fallback.
Reported-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 1c8ca185e3c6e003398471edd9dbac0cd118137c
master date: 2019-02-28 11:16:27 +0000
Andrew Cooper [Mon, 18 Mar 2019 16:02:49 +0000 (17:02 +0100)]
x86/vmx: Properly flush the TLB when an altp2m is modified
Modifications to an altp2m mark the p2m as needing flushing, but this was
never wired up in the return-to-guest path. As a result, stale TLB entries
can remain after resuming the guest.
In practice, this manifests as a missing EPT_VIOLATION or #VE exception when
the guest subsequently accesses a page which has had its permissions reduced.
vmx_vmenter_helper() now has 11 p2ms to potentially invalidate, but issuing 11
INVEPT instructions isn't clever. Instead, count how many contexts need
invalidating, and use INVEPT_ALL_CONTEXT if two or more are in need of
flushing.
This doesn't have an XSA because altp2m is not yet a security-supported
feature.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 69f7643df68ef8e994221a996e336a47cbb7bbc8
master date: 2019-02-28 11:16:27 +0000
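[ Editor's sketch of the counting strategy; the p2m layout and INVEPT
  wrappers are stand-ins for Xen's vmx plumbing: ]

    struct p2m { unsigned long eptp; int needs_flush; };

    static void invept_single_context(unsigned long eptp) { /* INVEPT type 1 */ }
    static void invept_all_contexts(void) { /* INVEPT type 2 */ }

    static void flush_pending_p2ms(struct p2m *p2m[], unsigned int nr)
    {
        unsigned int i, pending = 0, last = 0;

        for ( i = 0; i < nr; i++ )
            if ( p2m[i] && p2m[i]->needs_flush )
            {
                pending++;
                last = i;
            }

        if ( pending == 1 )
            invept_single_context(p2m[last]->eptp); /* targeted flush */
        else if ( pending >= 2 )
            invept_all_contexts(); /* one instruction beats up to 11 */

        for ( i = 0; i < nr; i++ )
            if ( p2m[i] )
                p2m[i]->needs_flush = 0;
    }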
Jan Beulich [Mon, 18 Mar 2019 16:02:02 +0000 (17:02 +0100)]
x86/shadow: don't use map_domain_page_global() on paths that may not fail
The assumption (according to one comment) and hope (according to
another) that map_domain_page_global() can't fail are both wrong on
large enough systems. Do away with the guest_vtable field altogether,
and establish / tear down the desired mapping as necessary.
The alternatives, discarded as being undesirable, would have been to
either crash the guest in sh_update_cr3() when the mapping fails, or to
bubble up an error indicator, which upper layers would have a hard time
dealing with (other than, again, by crashing the guest).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 43282a5e64da26fad544e0100abf35048cf65b46
master date: 2019-02-26 16:56:26 +0100
Paul Durrant [Mon, 18 Mar 2019 16:01:21 +0000 (17:01 +0100)]
viridian: fix the HvFlushVirtualAddress/List hypercall implementation
The current code uses hvm_asid_flush_vcpu(), but this is insufficient for
a guest running in shadow mode, which results in guest crashes early in
boot if the 'hcall_remote_tlb_flush' enlightenment is enabled.
This patch, instead of open coding a new flush algorithm, adapts the one
already used by the HVMOP_flush_tlbs Xen hypercall. The implementation is
modified to allow TLB flushing a subset of a domain's vCPUs. A callback
function determines whether or not a vCPU requires flushing. This mechanism
was chosen because, while it is the case that the currently implemented
viridian hypercalls specify a vCPU mask, there are newer variants which
specify a sparse HV_VP_SET and thus use of a callback will avoid needing to
expose details of this outside of the viridian subsystem if and when those
newer variants are implemented.
NOTE: Use of the common flush function requires that the hypercalls are
restartable and so, with this patch applied, viridian_hypercall()
can now return HVM_HCALL_preempted. This is safe as no modification
to struct cpu_user_regs is done before the return.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: ce98ee3050a824994ce4957faa8f53ecb8c7da9d
master date: 2019-02-26 16:55:06 +0100
Jan Beulich [Mon, 18 Mar 2019 16:00:28 +0000 (17:00 +0100)]
x86/shadow: don't pass wrong L4 MFN to guest_walk_tables()
64-bit PV guest user mode runs on a different L4 table. Make sure
- the accessed bit gets set in the correct table (and in log-dirty
mode the correct page gets marked dirty) during guest walks,
- the correct table gets audited by sh_audit_gw(),
- correct info gets logged by print_gw().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
master commit: db2af23d15077605f286d8ef86c8f5d9c1b8302a
master date: 2019-02-20 17:07:17 +0100
Varad Gautam [Mon, 18 Mar 2019 15:59:43 +0000 (16:59 +0100)]
x86/pmtimer: fix hvm_acpi_sleep_button behavior
Commit 19fb14622e941 "x86/pmtimer: move ACPI registers from PMTState to
hvm_domain" misconfigures pm1a_sts for hvm_acpi_sleep_button with
PWRBTN_STS instead of SLPBTN_STS, which leads to
XEN_DOMCTL_SENDTRIGGER_SLEEP causing guest powerdowns. Fix this.
Signed-off-by: Varad Gautam <vrd@amazon.de> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: b22c900c44a2db8db1c53e269e152206e55c273f
master date: 2019-02-20 17:06:25 +0100
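[ Editor's sketch: per the ACPI spec, PWRBTN_STS is PM1_STS bit 8 and
  SLPBTN_STS is bit 9; the surrounding Xen code is elided: ]

    #include <stdint.h>

    #define PWRBTN_STS (1U << 8) /* power button event */
    #define SLPBTN_STS (1U << 9) /* sleep button event */

    /*
     * Handler for XEN_DOMCTL_SENDTRIGGER_SLEEP: must latch SLPBTN_STS.
     * Latching PWRBTN_STS here is what caused guest powerdowns.
     */
    static void hvm_acpi_sleep_button(uint16_t *pm1a_sts)
    {
        *pm1a_sts |= SLPBTN_STS;
    }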
Jan Beulich [Tue, 5 Mar 2019 14:06:47 +0000 (15:06 +0100)]
x86/pv: _toggle_guest_pt() may not skip TLB flush for shadow mode guests
For shadow mode guests (e.g. PV ones forced into that mode as L1TF
mitigation, or during migration) update_cr3() -> sh_update_cr3() may
result in a change to the (shadow) root page table (compared to the
previous one when running the same vCPU with the same PCID). This can,
first and foremost, be a result of memory pressure on the shadow memory
pool of the domain. Shadow code legitimately relies on the original
(prior to commit 5c81d260c2 ["xen/x86: use PCID feature"]) behavior of
the subsequent CR3 write to flush the TLB of entries still left from
walks with an earlier, different (shadow) root page table.
Restore the flushing behavior, also for the second CR3 write on the exit
path to guest context when XPTI is active. For the moment accept that
this will introduce more flushes than are strictly necessary - no flush
would be needed when the (shadow) root page table doesn't actually
change, but this information isn't readily (i.e. without introducing a
layering violation) available here.
This is XSA-294.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 329b00e4d49f70185561d7cc4b076c77869888a0
master date: 2019-03-05 13:54:42 +0100
Andrew Cooper [Tue, 5 Mar 2019 14:05:50 +0000 (15:05 +0100)]
x86/pv: Don't have %cr4.fsgsbase active behind a guest kernel's back
Currently, a 64bit PV guest can appear to set and clear FSGSBASE in %cr4, but
the bit remains set in hardware. Therefore, the {RD,WR}{FS,GS}BASE are usable
even when the guest kernel believes that they are disabled.
The FSGSBASE feature isn't currently supported in Linux, and its context
switch path has some optimisations which rely on userspace being unable to use
the WR{FS,GS}BASE instructions. Xen's current behaviour undermines this
expectation.
In 64bit PV guest context, always load the guest kernel's setting of FSGSBASE
into %cr4. This requires adjusting how Xen uses the {RD,WR}{FS,GS}BASE
instructions.
* Delete the cpu_has_fsgsbase helper. It is no longer safe, as users need to
check %cr4 directly.
* The raw __rd{fs,gs}base() helpers are only safe to use when %cr4.fsgsbase
is set. Comment this property.
* The {rd,wr}{fs,gs}{base,shadow}() and read_msr() helpers are updated to use
the current %cr4 value to determine which mechanism to use.
* toggle_guest_mode() and save_segments() are updated to avoid reading
fs/gsbase if the values in hardware cannot be stale WRT struct vcpu. A
consequence of this is that the write_cr() path needs to cache the current
bases, as subsequent context switches will skip saving the values.
* write_cr4() is updated to ensure that the shadow %cr4.fsgsbase value is
observed in a safe way WRT the hardware setting, if an interrupt happens to
hit in the middle.
* load_segments() is updated to use the VMLOAD optimisation if FSGSBASE is
unavailable, even if only gs_shadow needs updating. As a minor perf
improvement, check cpu_has_svm first to short circuit a context-dependent
conditional on Intel hardware.
* pv_make_cr4() is updated for 64bit PV guests to use the guest kernel's
choice of FSGSBASE.
This is part of XSA-293.
Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: eccc170053e46b4ab1d9e7485c09e210be15bbd7
master date: 2019-03-05 13:54:05 +0100
Andrew Cooper [Tue, 5 Mar 2019 14:04:54 +0000 (15:04 +0100)]
x86/pv: Rewrite guest %cr4 handling from scratch
The PV cr4 logic is almost impossible to follow, and leaks bits into guest
context which definitely shouldn't be visible (in particular, VMXE).
The biggest problem however, and source of the complexity, is that it derives
new real and guest cr4 values from the current value in hardware - this is
context dependent and an inappropriate source of information.
Rewrite the cr4 logic to be invariant of the current value in hardware.
First of all, modify write_ptbase() to always use mmu_cr4_features for IDLE
and HVM contexts. mmu_cr4_features *is* the correct value to use, and makes
the ASSERT() obviously redundant.
For PV guests, curr->arch.pv.ctrlreg[4] remains the guest's view of cr4, but
all logic gets reworked in terms of this and mmu_cr4_features only.
Two masks are introduced; bits which the guest has control over, and bits
which are forwarded from Xen's settings. One guest-visible change here is
that Xen's VMXE setting is no longer visible at all.
pv_make_cr4() follows fairly closely from pv_guest_cr4_to_real_cr4(), but
deliberately starts with mmu_cr4_features, and only alters the minimal subset
of bits.
The boot-time {compat_,}pv_cr4_mask variables are removed, as they are a
remnant of the pre-CPUID policy days. pv_fixup_guest_cr4() gains a related
derivation from the policy.
Another guest visible change here is that a 32bit PV guest can now flip
FSGSBASE in its view of CR4. While the {RD,WR}{FS,GS}BASE instructions are
unusable outside of a 64bit code segment, the ability to modify FSGSBASE
matches real hardware behaviour, and avoids the need for any 32bit/64bit
differences in the logic.
Overall, this patch shouldn't have a practical change in guest behaviour.
VMXE will disappear from view, and an inquisitive 32bit kernel can now see
FSGSBASE changing, but this new logic is otherwise bug-compatible with before.
This is part of XSA-293.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: b2dd00574a4fc87ca964177f8e752a968c27efb2
master date: 2019-03-05 13:53:32 +0100
Jan Beulich [Tue, 5 Mar 2019 14:04:21 +0000 (15:04 +0100)]
x86/mm: properly flush TLB in switch_cr3_cr4()
The CR3 values used for contexts run with PCID enabled uniformly have
CR3.NOFLUSH set, resulting in the CR3 write itself not causing any
flushing at all. When the second CR4 write is skipped or doesn't do any
flushing, there's nothing so far which would purge TLB entries which may
have accumulated again if the PCID doesn't change; the "just in case"
flush only affects the case where the PCID actually changes. (There may
be particularly many TLB entries re-accumulated in case of a watchdog
NMI kicking in during the critical time window.)
Suppress the no-flush behavior of the CR3 write in this particular case.
Similarly the second CR4 write may not cause any flushing of TLB entries
established again while the original PCID was still in use - it may get
performed because of unrelated bits changing. The flush of the old PCID
needs to happen nevertheless.
At the same time also eliminate a possible race with lazy context
switch: Just like for CR4, CR3 may change at any time while interrupts
are enabled, due to the __sync_local_execstate() invocation from the
flush IPI handler. It is for that reason that the CR3 read, just like
the CR4 one, must happen only after interrupts have been turned off.
This is XSA-292.
Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
master commit: 6e5f22ba437d78c0a84b9673f7e2cfefdbc62f4b
master date: 2019-03-05 13:52:44 +0100
Jan Beulich [Tue, 5 Mar 2019 14:03:43 +0000 (15:03 +0100)]
x86/mm: don't retain page type reference when IOMMU operation fails
The IOMMU update in _get_page_type() happens between recording of the
new reference and validation of the page for its new type (if
necessary). If the IOMMU operation fails, there's no point in actually
carrying out validation. Furthermore, since this results in failure
being indicated to the caller, the recorded type reference also needs
to be dropped again.
Note that in case of failure of alloc_page_type() there's no need to
undo the IOMMU operation: Only special types get handed to the function.
The function, upon failure, clears ->u.inuse.type_info, effectively
converting the page to PGT_none. The IOMMU mapping, however, solely
depends on whether the type is PGT_writable_page.
This is XSA-291.
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: fad0de986220c46e70be2f83279961aad7394af0
master date: 2019-03-05 13:52:15 +0100
Jan Beulich [Tue, 5 Mar 2019 14:02:19 +0000 (15:02 +0100)]
x86/mm: add explicit preemption checks to L3 (un)validation
When recursive page tables are used at the L3 level, unvalidation of a
single L4 table may incur unvalidation of two levels of L3 tables, i.e.
a maximum iteration count of 512^3 for unvalidating an L4 table. The
preemption check in free_l2_table() as well as the one in
_put_page_type() may never be reached, so explicit checking is needed in
free_l3_table().
When recursive page tables are used at the L4 level, the iteration count
at L4 alone is capped at 512^2. As soon as a present L3 entry is hit
which itself needs unvalidation (and hence requiring another nested loop
with 512 iterations), the preemption checks added here kick in, so no
further preemption checking is needed at L4 (until we decide to permit
5-level paging for PV guests).
The validation side additions are done just for symmetry.
This is part of XSA-290.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: bac4567a67d5e8b916801ea5a04cf8b443dfb245
master date: 2019-03-05 13:51:46 +0100
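[ Editor's sketch of the explicit preemption pattern;
  hypercall_preempt_check() is Xen's real helper, everything else
  below is a simplified stand-in: ]

    #define L3_PAGETABLE_ENTRIES 512
    #define ERESTART 85 /* Xen-internal "restart the operation" errno */

    struct page_info { unsigned int partial_pte; /* resume point */ };

    static int hypercall_preempt_check(void) { return 0; } /* stub */
    static int put_page_from_l3e(struct page_info *pg, unsigned int i)
    { return 0; }                                           /* stub */

    static int free_l3_table(struct page_info *pg)
    {
        unsigned int i = pg->partial_pte;
        int rc = 0;

        for ( ; i < L3_PAGETABLE_ENTRIES; i++ )
        {
            /*
             * Check every 32 entries: recursive L3s can hide 512
             * iterations behind each entry, up to 512^3 per L4 table.
             */
            if ( i && !(i & 0x1f) && hypercall_preempt_check() )
            {
                pg->partial_pte = i; /* remember where to resume */
                return -ERESTART;    /* caller restarts later */
            }
            rc = put_page_from_l3e(pg, i);
            if ( rc )
                break;
        }
        return rc;
    }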
Jan Beulich [Tue, 5 Mar 2019 14:01:39 +0000 (15:01 +0100)]
x86/mm: also allow L2 (un)validation to be fully preemptible
Commit c612481d1c ("x86/mm: Plumbing to allow any PTE update to fail
with -ERESTART") added assertions next to the {alloc,free}_l2_table()
invocations to document (and validate in debug builds) that L2
(un)validations are always preemptible.
The assertion in free_page_type() has now been observed to trigger when
recursive L2 page tables get cleaned up.
In particular put_page_from_l2e()'s assumption that _put_page_type()
would always succeed is now wrong, resulting in a partially un-validated
page left in a domain, which has no other means of getting cleaned up
later on. If not causing any problems earlier, this would ultimately
trigger the check for ->u.inuse.type_info having a zero count when
freeing the page during cleanup after the domain has died.
As a result it should be considered a mistake to not have extended
preemption fully to L2 when it was added to L3/L4 table handling, which
this change aims to correct.
The validation side additions are done just for symmetry.
This is part of XSA-290.
Reported-by: Manuel Bouyer <bouyer@antioche.eu.org> Tested-by: Manuel Bouyer <bouyer@antioche.eu.org> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 176ebf9c8bc2828f6637eb61cc1cf166e302c699
master date: 2019-03-05 13:51:18 +0100
George Dunlap [Tue, 5 Mar 2019 14:01:09 +0000 (15:01 +0100)]
xen: Make coherent PV IOMMU discipline
In order for a PV domain to set up DMA from a passed-through device to
one of its pages, the page must be mapped in the IOMMU. On the other
hand, before a PV page may be used as a "special" page type (such as a
pagetable or descriptor table), it _must not_ be writable in the IOMMU
(otherwise a malicious guest could DMA arbitrary page tables into the
memory, bypassing Xen's safety checks); and Xen's current rule is to
have such pages not in the IOMMU at all.
At the moment, in order to accomplish this, the code borrows HVM
domain's "physmap" concept: When a page is assigned to a guest,
guess_physmap_add_entry() is called, which for PV guests, will create
a writable IOMMU mapping; and when a page is removed,
guest_physmap_remove_entry() is called, which will remove the mapping.
Additionally, when a page gains the PGT_writable page type, the page
will be added into the IOMMU; and when the page changes away from a
PGT_writable type, the page will be removed from the IOMMU.
Unfortunately, borrowing the "physmap" concept from HVM domains is
problematic. HVM domains have a lock on their p2m tables, ensuring
synchronization between modifications to the p2m; and all hypercall
parameters must first be translated through the p2m before being used.
Trying to mix this locked-and-gated approach with PV's lock-free
approach leads to several races and inconsistencies:
* A race between a page being assigned and it being put into the
physmap; for example:
- P1: call populate_physmap() { A = allocate_domheap_pages() }
- P2: Guess page A's mfn, and call decrease_reservation(A). A is owned by the domain,
and so Xen will clear the PGC_allocated bit and free the page
- P1: finishes populate_physmap() { guest_physmap_add_entry() }
Now the domain has a writable IOMMU mapping to a page it no longer owns.
* Pages start out as type PGT_none, but with a writable IOMMU mapping.
If a guest uses a page as a page table without ever having created a
writable mapping, the IOMMU mapping will not be removed; the guest
will have a writable IOMMU mapping to a page it is currently using
as a page table.
* A newly-allocated page can be DMA'd into with no special actions on
the part of the guest; however, if a page is promoted to a
non-writable type, the page must be mapped with a writable type before
DMA'ing to it again, or the transaction will fail.
To fix this, do away with the "PV physmap" concept entirely, and
replace it with the following IOMMU discipline for PV guests:
- (type == PGT_writable) <=> in iommu (even if type_count == 0)
- Upon a final put_page(), check to see if type is PGT_writable; if so,
iommu_unmap.
In order to achieve that:
- Remove PV IOMMU related code from guest_physmap_*
- Repurpose cleanup_page_cacheattr() into a general
cleanup_page_mappings() function, which will both fix up Xen
mappings for pages with special cache attributes, and also check for
a PGT_writable type and remove pages if appropriate.
- For compatibility with current guests, grab-and-release a
PGT_writable_page type for PV guests in guest_physmap_add_entry().
This will cause most "normal" guest pages to start out life with
PGT_writable_page type (and thus an IOMMU mapping), but no type
count (so that they can be used as special cases at will).
Also, note that there is one exception to the "PGT_writable => in
iommu" rule: xenheap pages shared with guests may be given a
PGT_writable type with one type reference. This reference prevents
the type from changing, which in turn prevents the page from gaining an
IOMMU mapping in get_page_type(). It's not clear whether this was
intentional or not, but it's not something to change in a security
update.
This is XSA-288.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: fe21b78ef99a1b505cfb6d3789ede9591609dd70
master date: 2019-03-05 13:48:32 +0100
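[ Editor's sketch of the put_page() side of the new discipline; all
  names are stand-ins for Xen's real page handling: ]

    struct page_info { unsigned long type; };

    #define PGT_writable_page 1UL

    static void iommu_unmap_page(struct page_info *pg) { /* stub */ }
    static void free_domheap_page(struct page_info *pg) { /* stub */ }

    /*
     * Final reference dropped: tear down the IOMMU mapping implied by
     * the PGT_writable type before the page is freed and reused.
     */
    static void put_page_final(struct page_info *pg)
    {
        if ( pg->type & PGT_writable_page )
            iommu_unmap_page(pg);
        free_domheap_page(pg);
    }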
steal_page, in a misguided attempt to protect against unknown races,
violates both of these rules, thus introducing other races:
- Temporarily, the count_info has the refcount go to 0 while
PGC_allocated is set
- It explicitly returns the page with PGC_allocated set, but owner == NULL
and page not on the page_list.
The second one meant that page_get_owner_and_reference() could return
NULL even after having successfully grabbed a reference on the page,
leading the caller to leak the reference (since "couldn't get ref" and
"got ref but no owner" look the same).
Furthermore, rather than grabbing a page reference to ensure that the
owner doesn't change under its feet, it appears to rely on holding the
d->page_alloc lock to prevent this.
Unfortunately, this is ineffective: page->owner remains non-NULL for
some time after the count has been set to 0; meaning that it would be
entirely possible for the page to be freed and re-allocated to a
different domain between the page_get_owner() check and the count_info
check.
Modify steal_page to instead follow the appropriate access discipline,
taking the page through series of states similar to being freed and
then re-allocated with MEMF_no_owner:
- Grab an extra reference to make sure we don't race with anyone else
freeing the page
- Drop both references and PGC_allocated atomically, so that (if
successful), anyone else trying to grab a reference will fail
- Attempt to reset Xen's mappings
- Reset the rest of the state.
Then, modify the two callers appropriately:
- Leave count_info alone (it's already been cleared)
- Call free_domheap_page() directly if appropriate
- Call assign_pages() rather than open-coding a partial assign
With all callers to assign_pages() now passing in pages with the
type_info field clear, tighten the respective assertion there.
This is XSA-287.
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 3d4868a481eebed232eeacba36ea28e5dee5e946
master date: 2019-03-05 13:48:08 +0100
Jan Beulich [Tue, 5 Mar 2019 13:59:48 +0000 (14:59 +0100)]
IOMMU/x86: fix type ref-counting race upon IOMMU page table construction
When arch_iommu_populate_page_table() gets invoked for an already
running guest, simply looking at page types once isn't enough, as they
may change at any time. Add logic to re-check the type after having
mapped the page, unmapping it again if needed.
This is XSA-285.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Tentatively-Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 1f0b0bb7773d537bcf169e021495d0986d9809fc
master date: 2019-03-05 13:47:36 +0100
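[ Editor's sketch of the map-then-recheck pattern; all names are
  stand-ins - Xen's real code operates on page_info type bits with
  suitable locking and reference counting: ]

    struct page_info { unsigned long type_info; };

    #define PGT_writable_page 1UL

    /* Racy snapshot: the guest may change the type at any time. */
    static unsigned long page_type(const struct page_info *pg)
    { return pg->type_info; }

    static int iommu_map_page(unsigned long dfn)   { return 0; } /* stub */
    static int iommu_unmap_page(unsigned long dfn) { return 0; } /* stub */

    static int populate_one(struct page_info *pg, unsigned long dfn)
    {
        int rc;

        if ( page_type(pg) != PGT_writable_page )
            return 0; /* not eligible for an IOMMU mapping */

        rc = iommu_map_page(dfn);
        if ( rc )
            return rc;

        /*
         * The guest runs concurrently, so the type may have changed
         * between the check above and the map: re-check, undo if so.
         */
        if ( page_type(pg) != PGT_writable_page )
            rc = iommu_unmap_page(dfn);

        return rc;
    }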
Jan Beulich [Tue, 5 Mar 2019 13:58:42 +0000 (14:58 +0100)]
gnttab: set page refcount for copy-on-grant-transfer
Commit 5cc77f9098 ("32-on-64: Fix domain address-size clamping,
implement"), which introduced this functionality, took care of clearing
the old page's PGC_allocated, but failed to set the bit (and install the
associated reference) on the newly allocated one. Furthermore the "mfn"
local variable was never updated, and hence the wrong MFN was passed to
guest_physmap_add_page() (and back to the destination domain) in this
case, leading to an IOMMU mapping into an unowned page.
Ideally the code would use assign_pages(), but the call to
gnttab_prepare_for_transfer() sits in the middle of the actions
mirroring that function.
This is XSA-284.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
master commit: 6d4f36c3fecc0a6a0991716199612c81d909316e
master date: 2019-03-05 13:45:58 +0100
Wei Liu [Tue, 29 Jan 2019 11:37:59 +0000 (11:37 +0000)]
libxl: correctly dispose of dominfo list in libxl_name_to_domid
Tamas reported that ssid_label was leaked. Use the designated function to
free the dominfo list to fix the leak.
Reported-by: Tamas K Lengyel <tamas@tklengyel.com> Signed-off-by: Wei Liu <wei.liu2@citrix.com> Tested-by: Tamas K Lengyel <tamas@tklengyel.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
(cherry picked from commit f50dd67950ca9d5a517501af10de7c8d88d1a188)
Juergen Gross [Thu, 17 Jan 2019 16:40:59 +0000 (16:40 +0000)]
libxl: don't set gnttab limits in soft reset case
In case of soft reset the gnttab limit setting will fail, so omit it.
Setting of max vcpu count is pointless in this case, too, so we can
drop that as well.
Without this patch soft reset will fail with:
libxl: error: libxl_dom.c:363:libxl__build_pre: Couldn't set grant table limits
Reported-by: Jim Fehlig <jfehlig@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Tested-by: Jim Fehlig <jfehlig@suse.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Andrew Cooper [Fri, 1 Feb 2019 10:36:17 +0000 (11:36 +0100)]
x86/hvm: Fix bit checking for CR4 and MSR_EFER
Before the cpuid_policy logic came along, %cr4/EFER auditing on migrate-in was
complicated, because at that point no CPUID information had been set for the
guest. Auditing against the host CPUID was better than nothing, but not
ideal.
Similarly at the time, PVHv1 lacked the "CPUID passed through from hardware"
behaviour which PV guests had, and PVH dom0 had to be special-cased to be able
to boot.
Order of information in the migration stream is still an issue (hence we still
need to keep the restore parameter to cope with a nested virt corner case for
%cr4), but since Xen 4.9, all domains start with a suitable CPUID policy,
which is a more appropriate upper bound than host_cpuid_policy.
Finally, reposition the UMIP logic as it is the only row out of order.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: 9d8c1d1814b744d0fb41085463db5d8ae025607e
master date: 2019-01-29 11:28:11 +0000
Jan Beulich [Fri, 1 Feb 2019 10:35:41 +0000 (11:35 +0100)]
x86/AMD: flush TLB after ucode update
The increased number of messages (spec_ctrl.c:print_details()) within a
certain time window made me notice some slowness of boot time screen
output. Experimentally I've narrowed the time window to be from
immediately after the early ucode update on the BSP to the PAT write in
cpu_init(), which upon further investigation has an effect because of
the full TLB flush that's implied by that write.
For that reason, as a workaround, flush the TLB of the mapping of the
page that holds the blob. Note that flushing just a single page is
sufficient: As per verify_patch_size() the patch size can't exceed 4k, and
the way xmalloc() works means the blob can't cross a page boundary.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Brian Woods <brian.woods@amd.com>
master commit: f19a199281a23725beb73bef61eb8964d8e225ce
master date: 2019-01-28 17:40:39 +0100
Andrew Cooper [Fri, 1 Feb 2019 10:34:35 +0000 (11:34 +0100)]
xen/cmdline: Fix buggy strncmp(s, LITERAL, ss - s) construct
When the command line parsing was updated to use const strings and no longer
tokenise with NUL characters, string matches could no longer be made with
strcmp().
Unfortunately, the replacement was buggy. strncmp(s, "opt", ss - s) matches
"o", "op" and "opt" on the command line, as ss - s may be shorter than the
passed literal. Furthermore, parse_bool() is affected by this, so substrings
such as "d", "e" and "o" are considered valid, with the latter being ambiguous
between "on" and "off".
Introduce a new strcmp-like function for the task, which looks for exact
string matches, but declares success when the NUL of the literal matches a
comma, colon or semicolon in the command line fragment.
No change to the intended parsing functionality, but fixes cases where a
partial string on the command line will inadvertently trigger options.
A few areas were more than just a trivial change:
* parse_irq_vector_map_param() gained some style corrections.
* parse_vpmu_params() was rewritten to use the normal list-of-options form,
rather than just fixing up parse_vpmu_param() and leaving the parsing being
hard to follow.
* Instead of making the trivial fix of adding an explicit length check in
parse_bool(), use the length to select which token we search for, which
is more efficient than the previous linear search over all possible tokens.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com>
master commit: 2ddf7e3e341df3ccf21613ff7ffd4b7693abe9e9
master date: 2019-01-15 12:58:34 +0000
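[ Editor's sketch of such an exact-match helper; Xen's actual function
  may differ in name and detail: ]

    /*
     * Match a command-line fragment (terminated by NUL or one of
     * ',', ':', ';') exactly against the literal lit. Unlike
     * strncmp(s, "opt", ss - s), this cannot match mere prefixes.
     */
    static int cmdline_strcmp(const char *frag, const char *lit)
    {
        for ( ; ; frag++, lit++ )
        {
            if ( *lit == '\0' &&
                 (*frag == '\0' || *frag == ',' ||
                  *frag == ':' || *frag == ';') )
                return 0; /* exact token match */
            if ( *frag != *lit )
                return (unsigned char)*frag - (unsigned char)*lit;
        }
    }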
Sergey Dyasli [Fri, 1 Feb 2019 10:33:44 +0000 (11:33 +0100)]
mm/page_alloc: fix MEMF_no_dma allocations for single NUMA
Currently dma_bitsize is zero by default on single NUMA node machines.
This makes all alloc_domheap_pages() calls with MEMF_no_dma return NULL.
There is only 1 user of MEMF_no_dma: dom0_memflags, which are used
during memory allocation for Dom0. Failing allocation with default
dom0_memflags is especially severe for the PV Dom0 case: it makes
alloc_chunk() use a suboptimal 2MB allocation algorithm with a search
for higher memory addresses.
This can lead to the NMI watchdog timeout during PV Dom0 construction
on some machines, which can be worked around by specifying "dma_bits"
in Xen's cmdline manually.
Fix the issue by ignoring MEMF_no_dma in cases when dma_bitsize is zero,
which means there is no DMA zone. This shouldn't cause any issues for
Dom0 because alloc_heap_pages() will first use higher memory addresses
for satisfying memory allocation requests.
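[ Editor's sketch of the guard; dma_bitsize is Xen's real knob, the
  flag value and helper shape are illustrative: ]

    #define MEMF_no_dma (1U << 0) /* illustrative flag value */

    static unsigned int dma_bitsize; /* 0 on single NUMA node machines */

    /*
     * Lowest zone an allocation may come from. Honouring MEMF_no_dma
     * while dma_bitsize == 0 would exclude all of memory, making
     * alloc_domheap_pages() return NULL.
     */
    static unsigned int zone_lo_from_memflags(unsigned int memflags)
    {
        if ( (memflags & MEMF_no_dma) && dma_bitsize )
            return dma_bitsize; /* skip the reserved DMA zone */
        return 0;               /* no DMA zone: allocate anywhere */
    }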
Jan Beulich [Fri, 1 Feb 2019 10:33:09 +0000 (11:33 +0100)]
x86emul: work around SandyBridge errata
There are a number of exception condition related errata on SandyBridge
CPUs, some of which are unexpected #UD (others, of no interest here, are
lack of mandated exceptions, or exceptions of unexpected type). Annotate
the one workaround we already have, and add two more.
Due to the exception recovery we have in place for stub invocations
these aren't security issues.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 0d4d9e8f55602415475e04a5dc8b4ad27845a7f9
master date: 2018-12-18 15:19:47 +0100
Jan Beulich [Fri, 1 Feb 2019 10:32:35 +0000 (11:32 +0100)]
x86emul: fix 3-operand IMUL
While commit 75066cd4ea ("x86emul: fix {,i}mul and {,i}div") indeed did
as its title says, it broke the 3-operand form by uniformly using AL/AX/
EAX/RAX as second source operand. Fix this and add tests covering both
cases.
Reported-by: Andrei Lutas <vlutas@bitdefender.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 19232b378fab04997c0612e5c19e82c29b59d99e
master date: 2018-12-18 14:27:09 +0100
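[ Editor's note on the two forms: the 2-operand-implicit encoding
  computes RDX:RAX = RAX * src, while the 3-operand form takes its
  source from r/m and an immediate, with rAX not involved at all -
  which is what the uniform use of rAX broke. A minimal illustration: ]

    #include <stdint.h>

    /* dst = src * 3: the 3-operand IMUL reads src, never rAX. */
    static int64_t imul3(int64_t src)
    {
        int64_t dst;

        asm ( "imul $3, %1, %0" : "=r" (dst) : "rm" (src) );
        return dst;
    }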
Andrew Cooper [Fri, 1 Feb 2019 10:32:03 +0000 (11:32 +0100)]
x86/hvm: Corrections to RDTSCP intercept handling
For both VT-x and SVM, the RDTSCP intercept will trigger if the pipeline
supports the instruction, but the guest may not have RDTSCP in its featureset.
Bring the vmexit handlers in line with the main emulator behaviour by
optionally handing back #UD.
Next on the AMD side, if RDTSCP actually ends up being intercepted on a debug
build or first-gen SVM hardware which lacks NRIP, we first update regs->rcx,
then call __get_instruction_length() asking for RDTSC. As the two
instructions are different (and indeed, different lengths!),
__get_instruction_length_from_list() fails and hands back a #GP fault.
This can be demonstrated by putting a guest into tsc_mode="always emulate" and
executing an RDTSCP instruction:
(d1) --- Xen Test Framework ---
(d1) Environment: HVM 64bit (Long mode 4 levels)
(d1) Test rdtscp
(d1) TSC mode 1
(XEN) emulate.c:147:d1v0 __get_instruction_length: Mismatch between expected and actual instruction:
(XEN) emulate.c:152:d1v0 insn_index 8, opcode 0xf0031 modrm 0
(XEN) emulate.c:154:d1v0 rip 0x10475f, nextrip 0x104762, len 3
(XEN) SVM insn len emulation failed (1): d1v0 64bit @ 0008:0010475f -> 0f 01 f9 0f 31 5b 31 ff 31 c0 e9 c2 db ff ff 00
(d1) ******************************
(d1) PANIC: Unhandled exception at 0008:000000000010475f
(d1) Vec 13 #GP[0000]
(d1) ******************************
First, teach __get_instruction_length() to cope with RDTSCP, and improve
svm_vmexit_do_rdtsc() to ask for the correct instruction. Move the regs->rcx
adjustment into this function to ensure it gets done after we are done
potentially raising faults.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Brian Woods <brian.woods@amd.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 3fd3fda9c26fc3c4f77250f795ed7ff9d38e2ec6
master date: 2018-12-17 16:28:03 +0000
Andrew Cooper [Fri, 1 Feb 2019 10:31:28 +0000 (11:31 +0100)]
x86/VT-x: Don't activate VMCS Shadowing outside of nested vmx mode
By default on capable hardware, SECONDARY_EXEC_ENABLE_VMCS_SHADOWING is
activated unilaterally. The VMCS Link pointer is initialised to ~0, but the
VMREAD/VMWRITE bitmap pointers are not.
This causes the 16bit IVT and BIOS Data Area to be interpreted as the read/write
permission bitmap for guests which blindly execute VMREAD/VMWRITE
instructions.
This is not a security issue because the VMCS Link pointer being ~0 causes
VMREAD/VMWRITE to complete with VMFailInvalid (rather than modifying a
potential shadow VMCS), and the contents of MFN 0 have already been determined
not to contain any interesting data because of L1TF's ability to read that 4k
frame.
Leave VMCS Shadowing disabled by default, and toggle it in
nvmx_{set,clear}_vmcs_pointer(). This isn't the most efficient course of
action, but it is the simplest way of leaving nested-virt working as it did
before.
While editing construct_vmcs(), collect all default secondary_exec_control
modifications together. The disabling of PML is latently buggy because it
happens after secondary_exec_control is written into the VMCS, although there
is an unconditional update later which writes the correct value into hardware.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 75ce36eb72cb93e8a3c9f60fd5e697067921d712
master date: 2018-12-10 16:24:08 +0000
Jan Beulich [Fri, 1 Feb 2019 10:30:55 +0000 (11:30 +0100)]
x86/shadow: don't enable shadow mode with too small a shadow allocation
We've had more than one report of host crashes after failed migration,
and in at least one case we've had a hint towards a too far shrunk
shadow allocation pool. Instead of just checking the pool for being
empty, check whether the pool is smaller than what
shadow_set_allocation() would minimally bump it to if it was invoked in
the first place.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 2634b997afabfdc5a972e07e536dfbc6febb4385
master date: 2018-11-30 12:10:39 +0100
Jan Beulich [Fri, 1 Feb 2019 10:29:53 +0000 (11:29 +0100)]
ns16550/PCI: fix skipping of devices
Selecting between single/multiple BAR mode should happen after checking
whether to skip the present device, or else multi-BAR devices won't be
skipped correctly, due to port_idx getting set to zero in that case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
master commit: c34fe0468acc61aca62422483c37a1309708f1cb
master date: 2018-11-30 12:07:33 +0100
Andrew Cooper [Fri, 1 Feb 2019 10:29:16 +0000 (11:29 +0100)]
x86/soft-reset: Drop gfn reference after calling get_gfn_query()
get_gfn_query() internally takes the p2m lock, and this error path leaves it
locked.
This wasn't included in XSA-277 because the error path can only be triggered
by a carefully timed physmap operation concurrent with the domain being paused
and the toolstack issuing DOMCTL_soft_reset.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: e7969e917cef276318f722a607985a2e896aeb94
master date: 2018-11-22 17:58:46 +0000
Andrew Cooper [Fri, 1 Feb 2019 10:28:45 +0000 (11:28 +0100)]
x86/mem-sharing: Don't leave the altp2m lock held when nominating a page
get_gfn_type_access() internally takes the p2m lock, and nothing ever unlocks
it. Switch to using the unlocked accessor instead.
This wasn't included in XSA-277 because neither mem-sharing nor altp2m are
supported.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Tamas K Lengyel <tamas@tklengyel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d6e02850d3b45c9658457214a749cc48097bdef4
master date: 2018-11-22 17:58:46 +0000
Jan Beulich [Fri, 1 Feb 2019 10:27:59 +0000 (11:27 +0100)]
x86/HVM: __hvm_copy() should not write to p2m_ioreq_server pages
Commit 3bdec530a5 ("x86/HVM: split page straddling emulated accesses in
more cases") introduced a hvm_copy_to_guest_linear() attempt before
falling back to hvmemul_linear_mmio_write(). This is wrong for the
p2m_ioreq_server special case. That change widened a pre-existing issue
though: Other writes to such pages also need to be failed (or forced
through emulation), in particular hypercall buffer writes.
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d7bff2bc003cd5fd8c618b70c62b8fcfd9cd187e
master date: 2018-11-15 16:42:25 +0100
Jan Beulich [Fri, 1 Feb 2019 10:25:52 +0000 (11:25 +0100)]
VMX: fix vmx_handle_eoi()
In commit 303066fdb1e ("VMX: fix interaction of APIC-V and Viridian
emulation") I screwed up: Instead of clearing SVI, other ISR bits
should be taken into account.
Introduce a new helper set_svi(), split out of vmx_process_isr(), and
use it also from vmx_handle_eoi().
Following the problems in vmx_intr_assist() (see the still present big
block of debugging code there) also warn (once) if EOI'd vector and
original SVI don't match.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 45cb9a4123b5550eb1f84846fe5482acae1c13a3
master date: 2018-11-02 12:15:33 +0100
Julien Grall [Mon, 1 Oct 2018 16:42:27 +0000 (17:42 +0100)]
xen/arm: vgic-v3: Don't create empty re-distributor regions
At the moment, Xen is assuming the hardware domain will have the same
number of re-distributor regions as the host. However, as the
number of CPUs or the stride (e.g. on GICv4) may be different, we end up
exposing regions which do not contain any re-distributors.
When booting, Linux will go through all the re-distributor regions to
check whether a property (e.g. vLPIs) is available across all the
re-distributors. This will result in a data abort on empty regions
because there are no underlying re-distributors.
So we need to limit the number of regions exposed to the hardware
domain. The code is reworked to only expose the minimum number of regions
required by the hardware domain. It is assumed the regions will be
populated starting from the first one.
Lastly, rename vgic_v3_rdist_count to reflect the value returned by the
helper.
Julien Grall [Mon, 1 Oct 2018 16:42:26 +0000 (17:42 +0100)]
xen/arm: vgic-v3: Delay the initialization of the domain information
A follow-up patch will require knowing the number of vCPUs when
initializing the vGICv3 domain structure. However, this information is
not available at domain creation. It is only known once
XEN_DOMCTL_max_vcpus is called for that domain.
In order to have the max vCPU count available, delay the domain part of
the vGIC v3 initialization until the first vCPU of the domain is initialized.
xen/arm: check for multiboot nodes only under /chosen
Make sure to look for multiboot compatible nodes only under
/chosen, not under any other paths (depth <= 3).
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
[julien: Use sizeof(path) instead of len ] Reviewed-by: Julien Grall <julien.grall@arm.com>
(cherry picked from commit c32e3689c546305d4eae53e6ccf9c8b4e048c7df)
Julien Grall [Tue, 23 Oct 2018 18:17:07 +0000 (19:17 +0100)]
xen/arm: gic: Ensure ordering between read of INTACK and shared data
When an IPI is generated by a CPU, the pattern looks roughly like:
<write shared data>
dsb(sy);
<write to GIC to signal SGI>
On the receiving CPU we rely on the fact that, once we've taken the
interrupt, then the freshly written shared data must be visible to us.
Put another way, the CPU isn't going to speculate taking an interrupt.
Unfortunately, this assumption turns out to be broken.
Consider that CPUx wants to send an IPI to CPUy, which will cause CPUy
to read some shared_data. Before CPUx has done anything, a random
peripheral raises an IRQ to the GIC and the IRQ line on CPUy is raised.
CPUy then takes the IRQ and starts executing the entry code, heading
towards gic_handle_irq. Furthermore, let's assume that a bunch of the
previous interrupts handled by CPUy were SGIs, so the branch predictor
kicks in and speculates that irqnr will be <16 and we're likely to
head into handle_IPI. The prefetcher then grabs a speculative copy of
shared_data which contains a stale value.
Meanwhile, CPUx gets round to updating shared_data and asking the GIC
to send an SGI to CPUy. Internally, the GIC decides that the SGI is
more important than the peripheral interrupt (which hasn't yet been
ACKed) but doesn't need to do anything to CPUy, because the IRQ line
is already raised.
CPUy then reads the ACK register on the GIC, sees the SGI value which
confirms the branch prediction and we end up with a stale shared_data
value.
This patch fixes the problem by adding an smp_rmb() to the IPI entry
code in do_SGI.
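[ Editor's sketch of the receiver side; the barrier placement is the
  point, the rest is schematic (arm64 barrier shown): ]

    #define smp_rmb() asm volatile ( "dmb ishld" ::: "memory" )

    extern unsigned long shared_data;

    static void handle_ipi(unsigned int sgi, unsigned long data) { }

    static void do_SGI(unsigned int sgi)
    {
        /*
         * Pairs with the sender's dsb(sy) before poking the GIC:
         * without it, a speculative read of shared_data issued before
         * INTACK was read may observe a stale value.
         */
        smp_rmb();
        handle_ipi(sgi, shared_data);
    }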
Julien Grall [Tue, 23 Oct 2018 18:17:06 +0000 (19:17 +0100)]
xen/arm: gic: Ensure we have an ISB between ack and do_IRQ()
Devices that expose their interrupt status registers via system
registers (e.g. Statistical profiling, CPU PMU, DynamIQ PMU, arch timer,
vgic (although unused by Linux), ...) rely on a context synchronising
operation on the CPU to ensure that the updated status register is
visible to the CPU when handling the interrupt. This usually happens as
a result of taking the IRQ exception in the first place, but there are
two race scenarios where this isn't the case.
For example, let's say we have two peripherals (X and Y), where Y uses a
system register for its interrupt status.
Case 1:
1. CPU takes an IRQ exception as a result of X raising an interrupt
2. Y then raises its interrupt line, but the update to its system
register is not yet visible to the CPU
3. The GIC decides to expose Y's interrupt number first in the Ack
register
4. The CPU runs the IRQ handler for Y, but the status register is stale
Case 2:
1. CPU takes an IRQ exception as a result of X raising an interrupt
2. CPU reads the interrupt number for X from the Ack register and runs
its IRQ handler
3. Y raises its interrupt line and the Ack register is updated, but
again, the update to its system register is not yet visible to the
CPU.
4. Since the GIC drivers poll the Ack register, we read Y's interrupt
number and run its handler without a context synchronisation
operation, therefore seeing the stale register value.
In either case, we run the risk of missing an IRQ. This patch solves the
problem by ensuring that we execute an ISB in the GIC drivers prior
to invoking the interrupt handler.
Marc Zyngier [Tue, 25 Sep 2018 17:20:38 +0000 (18:20 +0100)]
xen/arm: smccc-1.1: Make return values unsigned long
An unfortunate consequence of having a strong typing for the input
values to the SMC call is that it also affects the type of the
return values, limiting r0 to 32 bits and r{1,2,3} to whatever
was passed as an input.
Let's turn everything into "unsigned long", which satisfies the
requirements of both architectures, and allows for the full
range of return values.
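[ Editor's sketch: "unsigned long" is 32 bits on arm32 and 64 bits on
  arm64, so one result structure covers both architectures' full
  return-value range (field names follow the SMCCC register layout): ]

    struct arm_smccc_res {
        unsigned long a0; /* previously limited to 32 bits */
        unsigned long a1;
        unsigned long a2;
        unsigned long a3;
    };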
Andrew Cooper [Tue, 20 Nov 2018 14:35:48 +0000 (15:35 +0100)]
x86/dom0: Avoid using 1G superpages if shadowing may be necessary
The shadow code doesn't support 1G superpages, and will hand #PF[RSVD] back to
guests.
For dom0s with 512GB of RAM or more (and subject to the P2M alignment), Xen's
domain builder might use 1G superpages.
Avoid using 1G superpages (falling back to 2M superpages instead) if there is
a reasonable chance that we may have to shadow dom0. This assumes that there
are no circumstances where we will activate logdirty mode on dom0.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 96f6ee15ad7ca96472779fc5c083b4149495c584
master date: 2018-11-12 11:26:04 +0000
Jan Beulich [Tue, 20 Nov 2018 14:34:51 +0000 (15:34 +0100)]
x86/shadow: shrink struct page_info's shadow_flags to 16 bits
This is to avoid it overlapping the linear_pt_count field needed for PV
domains. Introduce a separate, HVM-only pagetable_dying field to replace
the sole one left in the upper 16 bits.
Note that the accesses to ->shadow_flags in shadow_{pro,de}mote() get
switched to non-atomic, non-bitops operations, as {test,set,clear}_bit()
are not allowed on uint16_t fields and hence their use would have
required ugly casts. This is fine because all updates of the field ought
to occur with the paging lock held, and other updates of it use |= and
&= as well (i.e. using atomic operations here didn't really guard
against potentially racing updates elsewhere).
This is part of XSA-280.
Reported-by: Prgmr.com Security <security@prgmr.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: 789589968ed90e82a832dbc60e958c76b787be7e
master date: 2018-11-20 14:59:54 +0100
Jan Beulich [Tue, 20 Nov 2018 14:34:13 +0000 (15:34 +0100)]
x86/shadow: move OOS flag bit positions
In preparation of reducing struct page_info's shadow_flags field to 16
bits, lower the bit positions used for SHF_out_of_sync and
SHF_oos_may_write.
Instead of also adjusting the open coded use in _get_page_type(),
introduce shadow_prepare_page_type_change() to contain knowledge of the
bit positions to shadow code.
This is part of XSA-280.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: d68e1070c3e8f4af7a31040f08bdd98e6d6eac1d
master date: 2018-11-20 14:59:13 +0100
Andrew Cooper [Tue, 20 Nov 2018 14:33:16 +0000 (15:33 +0100)]
x86/mm: Don't perform flush after failing to update a guests L1e
If the L1e update hasn't occurred, the flush cannot do anything useful. This
skips the potentially expensive vcpumask_to_pcpumask() conversion, and
broadcast TLB shootdown.
More importantly however, we might be in the error path due to a bad va
parameter from the guest, and this should not propagate into the TLB flushing
logic. The INVPCID instruction for example raises #GP for a non-canonical
address.
This is XSA-279.
Reported-by: Matthew Daley <mattd@bugfuzz.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 6c8d50288722672ecc8e19b0741a31b521d01706
master date: 2018-11-20 14:58:41 +0100
Andrew Cooper [Tue, 20 Nov 2018 14:32:34 +0000 (15:32 +0100)]
x86/mm: Put the gfn on all paths after get_gfn_query()
c/s 7867181b2 "x86/PoD: correctly handle non-order-0 decrease-reservation
requests" introduced an early exit in guest_remove_page() for unexpected p2m
types. However, get_gfn_query() internally takes the p2m lock, and must be
matched with a put_gfn() call later.
Fix the erroneous comment beside the declaration of get_gfn_query().
This is XSA-277.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d80988cfc04ee608bee722448e7c3bc8347ec04c
master date: 2018-11-20 14:58:10 +0100
Paul Durrant [Tue, 20 Nov 2018 14:31:48 +0000 (15:31 +0100)]
x86/hvm/ioreq: use ref-counted target-assigned shared pages
Passing MEMF_no_refcount to alloc_domheap_pages() will allocate, as
expected, a page that is assigned to the specified domain but is not
accounted for in tot_pages. Unfortunately there is no logic for tracking
such allocations and avoiding any adjustment to tot_pages when the page
is freed.
The only caller of alloc_domheap_pages() that passes MEMF_no_refcount is
hvm_alloc_ioreq_mfn() so this patch removes use of the flag from that
call-site to avoid the possibility of a domain using an ioreq server as
a means to adjust its tot_pages and hence allocate more memory than it
should be able to.
However, the reason for using the flag in the first place was to avoid
the allocation failing if the emulator domain is already at its maximum
memory limit. Hence this patch switches to allocating memory from the
target domain instead of the emulator domain. There is already an extra
memory allowance of 2MB (LIBXL_HVM_EXTRA_MEMORY) applied to HVM guests,
which is sufficient to cover the pages required by the supported
configuration of a single IOREQ server for QEMU. (Stub-domains do not,
so far, use resource mapping). It is also the case that QEMU will have
mapped the IOREQ server pages before the guest boots, hence it is not
possible for the guest to inflate its balloon to consume these pages.
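In outline, the allocation change amounts to the following (a sketch; the
target field name is an assumption, not quoted from the patch):
    /* Allocate from the target domain, with ordinary refcounting, rather
     * than from the emulator domain with MEMF_no_refcount. */
    struct page_info *page = alloc_domheap_page(s->target, 0);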
Paul Durrant [Tue, 20 Nov 2018 14:31:14 +0000 (15:31 +0100)]
x86/hvm/ioreq: fix page referencing
The code does not take a page reference in hvm_alloc_ioreq_mfn(), only a
type reference. This can lead to a situation where a malicious domain with
XSM_DM_PRIV can engineer a sequence as follows:
- create IOREQ server: no pages as yet.
- acquire resource: page allocated, total 0.
- decrease reservation: -1 ref, total -1.
This will cause Xen to hit a BUG_ON() in free_domheap_pages().
This patch fixes the issue by changing the call to get_page_type() in
hvm_alloc_ioreq_mfn() to a call to get_page_and_type(). This change
in turn requires an extra put_page() in hvm_free_ioreq_mfn() in the case
that _PGC_allocated is still set (i.e. a decrease reservation has not
occurred) to avoid the page being leaked.
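A sketch of the resulting referencing pattern, following common Xen idioms
rather than quoting the patch:
    /* Allocation side: take a page reference and a type reference. */
    if ( !get_page_and_type(page, d, PGT_writable_page) )
        goto fail;

    /* Free side: drop both, plus the allocation reference if still held. */
    put_page_and_type(page);
    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
        put_page(page);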
This is part of XSA-276.
Reported-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: f6b6ae78679b363ff670a9c125077c436dabd608
master date: 2018-11-20 14:57:05 +0100
Jan Beulich [Tue, 20 Nov 2018 14:30:25 +0000 (15:30 +0100)]
AMD/IOMMU: suppress PTE merging after initial table creation
The logic is not fit for this purpose, so simply disable its use until
it can be fixed / replaced. Note that this re-enables merging for the
table creation case, which was disabled as a (perhaps unintended) side
effect of the earlier "amd/iommu: fix flush checks". It relies on no
page getting mapped more than once (with different properties) in this
process, as that would still be beyond what the merging logic can cope
with. But arch_iommu_populate_page_table() guarantees this afaict.
Roger Pau Monné [Tue, 20 Nov 2018 14:29:40 +0000 (15:29 +0100)]
amd/iommu: fix flush checks
Flush checking for AMD IOMMU didn't take into account whether the previous
entry was present, or whether the (writable/readable) flags had changed,
when deciding whether a flush should be executed.
Fix this by taking the writable/readable/next-level fields into account,
together with the present bit.
Along these lines the flushing in amd_iommu_map_page() must not be
omitted for PV domains. The comment there was simply wrong: Mappings may
very well change, both their addresses and their permissions. Ultimately
this should honor iommu_dont_flush_iotlb, but to achieve this
amd_iommu_ops first needs to gain an .iotlb_flush hook.
Also make clear_iommu_pte_present() static, to demonstrate there's no
caller omitting the (subsequent) flush.
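Condensed, the flush criterion becomes something like the following (the
flag and mask names here are illustrative, not the driver's actual ones):
    static bool pte_update_needs_flush(uint64_t old_pte, uint64_t new_pte)
    {
        /* Nothing was mapped before, so there is nothing to flush. */
        if ( !(old_pte & IOMMU_PTE_PRESENT) )
            return false;
        /* Flush when address, permissions or next-level field change. */
        return (old_pte ^ new_pte) & (IOMMU_PTE_ADDR_MASK | IOMMU_PTE_IR |
                                      IOMMU_PTE_IW | IOMMU_PTE_NEXT_LEVEL);
    }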
This is part of XSA-275.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com>
master commit: 1a7ffe466cd057daaef245b0a1ab6b82588e4c01
master date: 2018-11-20 14:52:12 +0100
Olaf Hering [Mon, 18 Jun 2018 12:55:36 +0000 (14:55 +0200)]
stubdom/vtpm: fix memcmp in TPM_ChangeAuthAsymFinish
gcc8 spotted this error:
error: 'memcmp' reading 20 bytes from a region of size 8 [-Werror=stringop-overflow=]
Signed-off-by: Olaf Hering <olaf@aepfle.de> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
(cherry picked from commit 22bf5be3237cb482a2ffd772ffd20ce37285eebf)
Jan Beulich [Wed, 7 Nov 2018 08:42:35 +0000 (09:42 +0100)]
x86: work around HLE host lockup erratum
XACQUIRE prefixed accesses to the 4Mb range of memory starting at 1Gb
are liable to lock up the processor. Disallow use of this memory range.
Unfortunately the available Core Gen7 and Gen8 spec updates are pretty
old, so I can only guess that they're similarly affected when Core Gen6 is,
and that the Xeon counterparts are, too.
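One illustrative way to express such a workaround (the predicate name and
the use of reserve_e820_ram() are assumptions, not the patch itself):
    /* Withhold [1GiB, 1GiB + 4MiB) from use on affected CPUs. */
    if ( cpu_has_hle_erratum ) /* hypothetical predicate */
        reserve_e820_ram(&e820_raw, GB(1), GB(1) + MB(4));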
This is part of XSA-282.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: cc76410d20aff2cc07b268b0713dc1d2740c6e12
master date: 2018-11-07 09:33:24 +0100
Juergen Gross [Tue, 6 Nov 2018 10:54:38 +0000 (11:54 +0100)]
Release: add release note link to SUPPORT.md
In order to have a link to the release notes in the feature list
generated from SUPPORT.md, add that link in the "Release Support"
section of that file.
The real link needs to be adapted when the version is being released.
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Andrew Cooper [Mon, 5 Nov 2018 14:05:07 +0000 (15:05 +0100)]
x86/pv: Fix crash when using `xl set-parameter pcid=...`
"pcid=" is registered as a runtime parameter, which means that parse_pcid()
must not reside in .init, or the following happens when parse_params() tries
to call an unmapped function pointer.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
master commit: 46029da12e5efeca6d957e5793bd34f2965fa0a1
master date: 2018-10-24 14:43:05 +0100
In particular, initialising %dr6 with the value 0 is buggy, because on
hardware supporting Transactional Memory, it will cause the sticky RTM bit to
be asserted, even though a debug exception from a transaction hasn't actually
been observed.
Introduce arch_vcpu_regs_init() to set various architectural defaults, and
reuse this in the hvm_vcpu_reset_state() path.
Architecturally, %edx's init state contains the processor's model information,
and 0xf looks to be a remnant of old Intel processors. We clearly have no
software which cares, seeing as it is wrong for the last decade's worth of
Intel hardware and for all other vendors, so let's use the value 0 for
simplicity.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
x86/domain: Fix build with GCC 4.3.x
GCC 4.3.x can't initialise the user_regs structure like this.
Reported-by: Jan Beulich <JBeulich@suse.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: dfba4d2e91f63a8f40493c4fc2db03fd8287f6cb
master date: 2018-10-24 14:43:05 +0100
master commit: 0a1fa635029d100d4b6b7eddb31d49603217cab7
master date: 2018-10-30 13:26:21 +0000
Andrew Cooper [Mon, 5 Nov 2018 14:02:59 +0000 (15:02 +0100)]
x86/boot: Initialise the debug registers correctly
In particular, initialising %dr6 with the value 0 is buggy, because on
hardware supporting Transactional Memory, it will cause the sticky RTM bit to
be asserted, even though a debug exception from a transaction hasn't actually
been observed.
Move X86_DR6_DEFAULT into x86-defns.h along with the other architectural
register constants, and introduce a new X86_DR7_DEFAULT. Use the existing
write_debugreg() helper, rather than opencoded inline assembly.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: 721da6d41a70fe08b3fcd9c31a62f6709a54c6ba
master date: 2018-10-24 14:43:05 +0100
Jan Beulich [Mon, 5 Nov 2018 14:01:20 +0000 (15:01 +0100)]
x86: fix "xpti=" and "pv-l1tf=" yet again
While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") indeed fixed "xpti=dom0", it broke "xpti=no-dom0", in that
this then became equivalent to "xpti=no". In particular, the presence
of "xpti=" alone on the command line means nothing as to which default
is to be overridden; "xpti=no-dom0", for example, ought to have no
effect for DomU-s, as this is distinct from both "xpti=no-dom0,domu"
and "xpti=no-dom0,no-domu".
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 8743d2dea539617e237c77556a91dc357098a8af
master date: 2018-10-04 14:49:56 +0200
Jan Beulich [Mon, 5 Nov 2018 13:59:06 +0000 (14:59 +0100)]
x86: silence false log messages for plain "xpti" / "pv-l1tf"
While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") claimed to have got rid of the 'parameter "xpti" has invalid
value "", rc=-22!' log message for "xpti" alone on the command line,
this wasn't the case (the option took effect nevertheless).
Fix this there as well as for plain "pv-l1tf".
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 2fb57e4beefeda923446b73f88b392e59b07d847
master date: 2018-09-28 17:12:14 +0200
Andrew Cooper [Wed, 10 Oct 2018 09:17:15 +0000 (09:17 +0000)]
x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled
c/s ac6a4500b "vvmx: set vmxon_region_pa of vcpu out of VMX operation to an
invalid address" was a real bugfix as described, but has a very subtle bug
which results in all VT-x instructions being usable by a guest.
The toolstack constructs a guest by issuing:
XEN_DOMCTL_createdomain
XEN_DOMCTL_max_vcpus
and optionally later, HVMOP_set_param to enable nested virt.
As a result, the call to nvmx_vcpu_initialise() in hvm_vcpu_initialise()
(which is what makes the above patch look correct during review) is actually
dead code. In practice, nvmx_vcpu_initialise() first gets called when nested
virt is enabled, which is typically never.
As a result, the zeroed memory of struct vcpu causes nvmx_vcpu_in_vmx() to
return true before nested virt is enabled for the guest.
Fixing the order of initialisation is a work in progress for other reasons,
but not viable for security backports.
A compounding factor is that the vmexit handlers for all instructions, other
than VMXON, pass 0 into vmx_inst_check_privilege()'s vmxop_check parameter,
which skips the CR4.VMXE check. (This is one of many reasons why nested virt
isn't a supported feature yet.)
However, the overall result is that when nested virt is not enabled by the
toolstack (i.e. the default configuration for all production guests), the VT-x
instructions (other than VMXON) are actually usable, and Xen very quickly
falls over the fact that the nvmx structure is uninitialised.
In order to fail safe in the supported case, reimplement all the VT-x
instruction handling using a single function with a common prologue, covering
all the checks which should cause #UD or #GP faults. This deliberately
doesn't use any state from the nvmx structure, in case there are other lurking
issues.
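A condensed sketch of such a common prologue (close in spirit to, but not
quoting, the patch):
    static int nvmx_handle_vmx_insn(struct cpu_user_regs *regs)
    {
        struct vcpu *curr = current;

        /* #UD unless nested virt is configured and CR4.VMXE is set. */
        if ( !nestedhvm_enabled(curr->domain) ||
             !(curr->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) )
        {
            hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
            return X86EMUL_EXCEPTION;
        }
        /* Mode/CPL checks yielding #GP(0) would follow here. */
        return X86EMUL_OKAY;
    }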
This is XSA-278.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Ian Jackson [Tue, 18 Sep 2018 10:25:20 +0000 (11:25 +0100)]
stubdom/grub.patches: Drop docs changes, for licensing reasons
The patch file 00cvs is an import of a new upstream version of
grub1 from upstream CVS.
Unfortunately, in the period covered by the update, upstream changed
the documentation licence from a simple permissive licence, to the GNU
"Free Documentation Licence" with Front and Back Cover Texts.
The Debian Project is of the view that use of the Front and Back Cover
Texts feature of the GFDL makes the resulting document not Free
Software, because of the mandatory redistribution of these immutable
texts. (Personally, I agree.)
This is awkward because Debian do not want to ship non-free content.
So the Debian maintainers need to launder the upstream source code, to
remove the troublesome files. This is an extra step when
incorporating new upstream versions. It's particularly annoying for
security response, which often involves rebasing onto a new upstream
release.
grub1 is obsolete and the last change to Xen's PV grub1 stubdom code
was in 2016. Furthermore, the grub1 documentation is not built and
installed by the Xen pv-grub stubdom Makefiles.
Therefore, remove all docs changes from stubdom/grub.patches. This
means that there are now no longer any GFDL-licenced grub docs in
xen.git.
There is no user impact, and Debian is helped. This change would
complicate any attempts to update to a new version of upstream grub1,
but it seems unlikely that such a thing will ever happen.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> CC: Doug Goldstein <cardoe@cardoe.com> CC: Juergen Gross <jgross@suse.com> CC: pkg-xen-devel@lists.alioth.debian.org Acked-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
(cherry picked from commit c62c53d61477dfeb63a47b0673c389082112babc)
Wei Liu [Mon, 20 Aug 2018 08:38:18 +0000 (09:38 +0100)]
tools/tests: fix an xs-test.c issue
The ret variable can be used uninitialised when iters is 0. Initialise
ret at the beginning to fix this issue.
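The class of bug, in miniature (illustrative, not the test's actual code):
    int ret = 0; /* previously left uninitialised */

    for ( i = 0; i < iters; i++ )
        ret = do_test(i); /* never executed when iters == 0 */

    return ret;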
Reported-by: Steven Haigh <netwiz@crc.id.au> Signed-off-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(cherry picked from commit 3a2b8525b883baa87fe89b3da58f5c09fa599b99)
Daniel Kiper [Mon, 8 Oct 2018 12:28:55 +0000 (14:28 +0200)]
x86/boot: Allocate one extra module slot for Xen image placement
Commit 9589927 (x86/mb2: avoid Xen image when looking for
module/crashkernel position) fixed relocation issues for the
Multiboot2 protocol. Unfortunately it failed to allocate a
module slot for Xen image placement in the early boot path.
So, let's fix it right now.
Reported-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 4c5f9dbebc0bd2afee1ecd936c74ffe65756950f
master date: 2018-09-27 11:17:47 +0100
Dario Faggioli [Mon, 8 Oct 2018 12:28:25 +0000 (14:28 +0200)]
xen: sched/Credit2: fix bug when moving CPUs between two Credit2 cpupools
Whether or not a CPU is assigned to a runqueue (and, if so, to which
one) within a Credit2 scheduler instance must be tracked on both a
per-cpu and a per-scheduler-instance basis.
In fact, when we move a CPU between cpupools, we first set up its per-cpu
data in the new pool, and then clean up its per-cpu data from the old
pool. In Credit2, where there currently is no per-scheduler per-cpu
data (the cpu-to-runqueue map is stored on a per-cpu basis only), the
cleanup of the old per-cpu data can mess with the new per-cpu data,
leading to crashes like this:
Basically, when csched2_deinit_pdata() is called for CPU 13, for fully
removing the CPU from Pool-0, per_cpu(13,runq_map) already contains the
id of the runqueue to which the CPU has been assigned in the scheduler
of Pool-1, which means wrong runqueue manipulations happen in Pool-0's
scheduler. Furthermore, at the end of such call, that same runq_map is
updated with -1, which is what causes the BUG_ON in csched2_schedule(),
on CPU 13, to trigger.
So, instead of reverting a2c4e5ab59d "xen: credit2: make the cpu to
runqueue map per-cpu" (as we don't want to go back to having the huge
array in struct csched2_private) add a per-cpu scheduler specific data
structure, as, for instance, Credit1 already has. That (for now) only
contains one field: the id of the runqueue the CPU is assigned to.
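In outline, matching the description above, the new structure is simply:
    /* Credit2 per-CPU data, private to one scheduler instance. */
    struct csched2_pcpu {
        int runq_id; /* runqueue this CPU is assigned to in this instance */
    };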
Paul Durrant [Mon, 8 Oct 2018 12:27:48 +0000 (14:27 +0200)]
x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries
When emulating a rep I/O operation it is possible that the ioreq will
describe a single operation that spans multiple GFNs. This is fine as long
as all those GFNs fall within an MMIO region covered by a single device
model, but unfortunately the higher levels of the emulation code do not
guarantee that. This is something that should almost certainly be fixed,
but in the meantime this patch makes sure that MMIO is truncated at GFN
boundaries and hence the appropriate device model is re-evaluated for each
target GFN.
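The truncation amounts to clamping the rep count (a sketch; variable names
are illustrative):
    /* Keep the whole rep sequence within the current GFN. */
    uint32_t off   = gpa & ~PAGE_MASK;
    uint32_t space = PAGE_SIZE - off;

    *reps = min_t(uint32_t, *reps, space / bytes_per_rep);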
NOTE: This patch does not deal with the case of a single MMIO operation
spanning a GFN boundary. That is more complex to deal with and is
deferred to a subsequent patch.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Convert calculations to be 32-bit only.
Roger Pau Monné [Mon, 8 Oct 2018 12:26:28 +0000 (14:26 +0200)]
x86/efi: move the logic to detect PE build support
So that it can be used by other components apart from the efi specific
code. By moving the detection code, the creation of a dummy efi/disabled
file can be avoided.
This is required so that the conditional used to define the efi symbol
in the linker script can be removed and instead the definition of the
efi symbol can be guarded using the preprocessor.
The motivation behind this change is to be able to build Xen using lld
(the LLVM linker), which, at least in version 6.0.0, doesn't cope
properly with DEFINED() being used in a conditional expression:
ld -melf_x86_64_fbsd -T xen.lds -N prelink.o --build-id=sha1 \
/root/src/xen/xen/common/symbols-dummy.o -o /root/src/xen/xen/.xen-syms.0
ld: error: xen.lds:233: symbol not found: efi
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
master commit: 18cd4997d26b9df95dda87503e41c823279a07a0
master date: 2018-07-31 10:24:22 +0200
Ross Lagerwall [Mon, 8 Oct 2018 12:22:34 +0000 (14:22 +0200)]
x86/shutdown: use ACPI reboot method for Dell PowerEdge R540
When EFI booting the Dell PowerEdge R540 it consistently wanders into
the weeds and gets an invalid opcode in the EFI ResetSystem call. This
is the same bug which affects the PowerEdge R740 so fix it in the same
way: quirk this hardware to use the ACPI reboot method instead.
BIOS Information
Vendor: Dell Inc.
Version: 1.3.7
Release Date: 02/09/2018
System Information
Manufacturer: Dell Inc.
Product Name: PowerEdge R540
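Such quirks are typically keyed on DMI data along these lines (a sketch
modelled on the existing PowerEdge R740 quirk; the callback name is an
assumption, not quoted from the patch):
    {    /* Handle problems with rebooting on Dell PowerEdge R540. */
        .callback = override_reboot,
        .driver_data = (void *)(long)BOOT_ACPI,
        .ident = "Dell PowerEdge R540",
        .matches = {
            DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
            DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
        },
    },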
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
master commit: 328ca55b7bd47e1324b75cce2a6c461308ecf93d
master date: 2018-06-28 09:29:13 +0200
Jan Beulich [Fri, 14 Sep 2018 11:05:52 +0000 (13:05 +0200)]
x86: assorted array_index_nospec() insertions
Don't chance having Spectre v1 (including BCBS) gadgets. In some of the
cases the insertions are more of a precautionary nature than there
provably being a gadget, but I think we should err on the safe (secure)
side here.
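The typical insertion pattern (illustrative):
    /* Clamp idx so it cannot be speculated past the end of table[]. */
    idx = array_index_nospec(idx, ARRAY_SIZE(table));
    val = table[idx];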
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 3f2002614af51dfd507168a1696658bac91155ce
master date: 2018-09-03 17:50:10 +0200
Jan Beulich [Fri, 14 Sep 2018 11:04:44 +0000 (13:04 +0200)]
rangeset: make inquiry functions tolerate NULL inputs
Rather than special casing the ->iomem_caps check in x86's
get_page_from_l1e() for the dom_xen case, let's be more tolerant in
general, along the lines of rangeset_is_empty(): A never allocated
rangeset can't possibly contain or overlap any range.
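The added tolerance is essentially a guard at the top of each inquiry
function (sketch):
    /* A never-allocated rangeset cannot contain or overlap any range. */
    if ( !r )
        return false;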
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: ad0a9f273d6d6f0545cd9b708b2d4be581a6cadd
master date: 2018-08-17 13:54:40 +0200
Andrew Cooper [Fri, 14 Sep 2018 11:04:07 +0000 (13:04 +0200)]
x86/setup: Avoid OoB E820 lookup when calculating the L1TF safe address
A number of corner cases (most obviously, no-real-mode and no Multiboot memory
map) can end up with e820_raw.nr_map being 0, at which point the L1TF
calculation will underflow.
Spotted by Coverity.
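An illustrative guard (the actual fix may be structured differently):
    /* An empty E820 map would make (nr_map - 1) wrap around below. */
    if ( !e820_raw.nr_map )
        return 0;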
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: 3e4ec07e14bce81f6ae22c31ff1302d1f297a226
master date: 2018-08-16 18:10:07 +0100
Paul Durrant [Fri, 14 Sep 2018 11:03:38 +0000 (13:03 +0200)]
x86/hvm/ioreq: MMIO range checking completely ignores direction flag
hvm_select_ioreq_server() is used to route an ioreq to the appropriate
ioreq server. For MMIO this is done by comparing the range of the ioreq
to the ranges registered by the device models of each ioreq server.
Unfortunately the calculation of the range of the ioreq completely ignores
the direction flag and thus may calculate the wrong range for comparison.
Thus the ioreq may either be routed to the wrong server or erroneously
terminated by null_ops.
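Taking the flag into account means computing the range along these lines
(a sketch; the ioreq fields addr/count/size/df are as in the public ABI):
    /* With df set, a rep access moves downwards from addr. */
    uint64_t start = p->df ? p->addr - (p->count - 1) * p->size
                           : p->addr;
    uint64_t end   = start + p->count * p->size - 1;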
NOTE: The patch also fixes whitespace in the switch statement to make it
style compliant.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 60a56dc0064a00830663ffe48215dcd080cb9504
master date: 2018-08-15 14:14:06 +0200
Andrew Cooper [Fri, 14 Sep 2018 11:02:46 +0000 (13:02 +0200)]
x86/vlapic: Bugfixes and improvements to vlapic_{read,write}()
Firstly, there is no 'offset' boundary check on the non-32-bit write path
before the call to vlapic_read_aligned(), which allows an attacker to read
beyond the end of vlapic->regs->data[], which is only 1024 bytes long.
However, as the backing memory is a domheap page, and misaligned accesses get
chunked down to single bytes across page boundaries, I can't spot any
XSA-worthy problems which occur from the overrun.
On real hardware, bad accesses don't instantly crash the machine. Their
behaviour is undefined, but the domain_crash() prohibits sensible testing.
Behave more like other x86 MMIO and terminate bad accesses with appropriate
defaults.
While making these changes, clean up and simplify the smaller-access
handling. In particular, avoid pointer-based mechanisms for 1/2-byte reads so
as to avoid forcing the value to be spilled to the stack.
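One way to handle such a small read without the pointer round-trip (sketch,
for 1- and 2-byte accesses only):
    /* Read the aligned 32-bit register, then extract the wanted bytes. */
    uint32_t reg = vlapic_read_aligned(vlapic, offset & ~3);
    uint32_t val = (reg >> (8 * (offset & 3))) & ((1u << (8 * bytes)) - 1);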
add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-175 (-175)
function old new delta
vlapic_read 211 142 -69
vlapic_write 304 198 -106
Finally, there are a plethora of read/write functions in the vlapic namespace,
so rename these to vlapic_mmio_{read,write}() to make their purpose more
clear.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
master commit: b6f43c14cef3af8477a9eca4efab87dd150a2885
master date: 2018-08-10 13:27:24 +0100
Wei Liu [Tue, 7 Aug 2018 14:35:34 +0000 (15:35 +0100)]
xl.conf: Add global affinity masks
XSA-273 involves one hyperthread being able to use Spectre-like
techniques to "spy" on another thread. The details are somewhat
complicated, but the upshot is that after all Xen-based mitigations
have been applied:
* PV guests cannot spy on sibling threads
* HVM guests can spy on sibling threads
(NB that for purposes of this vulnerability, PVH and HVM guests are
identical. Whenever this comment refers to 'HVM', this includes PVH.)
There are many possible mitigations to this, including disabling
hyperthreading entirely. But another solution would be:
* Specify some cores as PV-only, others as PV or HVM
* Allow HVM guests to only run on thread 0 of the "HVM-or-PV" cores
* Allow PV guests to run on the above cores, as well as any thread of the PV-only cores.
For example, suppose you had 16 threads across 8 cores (0-7). You
could specify 0-3 as PV-only, and 4-7 as HVM-or-PV. Then you'd set
the affinity of the HVM guests as follows (binary representation):
In order to make this easy, this patch introduces three "global affinity
masks", placed in xl.conf:
vm.cpumask
vm.hvm.cpumask
vm.pv.cpumask
These are parsed just like the 'cpus' and 'cpus_soft' options in the
per-domain xl configuration files. The resulting mask is AND-ed with
whatever mask results at the end of the xl configuration file.
`vm.cpumask` would be applied to all guest types, `vm.hvm.cpumask`
would be applied to HVM and PVH guest types, and `vm.pv.cpumask`
would be applied to PV guest types.
The idea would be that to implement the above mask across all your
VMs, you'd simply add the following two lines to the configuration
file:
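An illustrative pair for the 16-thread scenario above (the concrete values
are an assumption, taking thread 2n to be thread 0 of core n):
    vm.pv.cpumask="0-15"
    vm.hvm.cpumask="8,10,12,14"
i.e. PV guests may run anywhere, while HVM (and PVH) guests are confined to
thread 0 of each HVM-or-PV core.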
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit aa67b97ed34279c43a43d9ca46727b5746caa92e)
Jan Beulich [Mon, 13 Aug 2018 11:07:23 +0000 (05:07 -0600)]
x86: Make "spec-ctrl=no" a global disable of all mitigations
In order to have a simple and easy to remember means to suppress all the
more or less recent workarounds for hardware vulnerabilities, force
settings not controlled by "spec-ctrl=" also to their original defaults,
unless they've been forced to specific values already by earlier command
line options.
This is part of XSA-273.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit d8800a82c3840b06b17672eddee4878bbfdacc6d)
Andrew Cooper [Tue, 29 May 2018 17:44:16 +0000 (18:44 +0100)]
x86/spec-ctrl: Introduce an option to control L1D_FLUSH for HVM HAP guests
This mitigation requires up-to-date microcode, is enabled by default on
affected hardware if available, and is used for HVM guests by default.
The default for SMT/Hyperthreading is far more complicated to reason about,
not least because we don't know if the user is going to want to run any HVM
guests to begin with. If an explicit default isn't given, nag the user to
perform a risk assessment and choose an explicit default, and leave other
configuration to the toolstack.
This is part of XSA-273 / CVE-2018-3620.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 3bd36952dab60290f33d6791070b57920e10754b)
Andrew Cooper [Fri, 13 Apr 2018 15:34:01 +0000 (15:34 +0000)]
x86/msr: Virtualise MSR_FLUSH_CMD for guests
Guests (outside of the nested virt case, which isn't supported yet) don't need
L1D_FLUSH for their L1TF mitigations, but offering/emulating MSR_FLUSH_CMD is
easy and doesn't pose an issue for Xen.
The MSR is offered to HVM guests only. PV guests attempting to use it would
trap for emulation, and the L1D cache would fill long before the return to
guest context. As such, PV guests can't make any use of the L1D_FLUSH
functionality.
This is part of XSA-273 / CVE-2018-3646.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit fd9823faf9df057a69a9a53c2e100691d3f4267c)
Andrew Cooper [Wed, 28 Mar 2018 14:21:39 +0000 (15:21 +0100)]
x86/spec-ctrl: CPUID/MSR definitions for L1D_FLUSH
This is part of XSA-273 / CVE-2018-3646.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit 3563fc2b2731a63fd7e8372ab0f5cef205bf8477)