Juergen Gross [Wed, 15 Feb 2017 11:11:12 +0000 (12:11 +0100)]
libxl: correct xenstore entry for empty cdrom
Specifying an empty cdrom device will result in a Xenstore entry
params = aio:(null)
as the physical device path doesn't exist. This causes a domain booted
via OVMF to hang, as OVMF checks for exactly "aio:" in order to detect
the empty cdrom case.
Use an empty string for the physical device path in this case. As a
cdrom device for HVM is always backed by qdisk, we only need to cover this
backend.
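As a standalone illustration of the idea (a toy, not the libxl code; glibc happens to render a NULL string argument as "(null)"):

    #include <stdio.h>

    /* Toy illustration, not the libxl code: format the backend "params"
     * value, falling back to an empty path for an empty cdrom. */
    static void write_params(const char *pdev_path)
    {
        char params[64];

        snprintf(params, sizeof(params), "aio:%s",
                 pdev_path ? pdev_path : "");
        printf("params = %s\n", params);
    }

    int main(void)
    {
        write_params(NULL);           /* prints "params = aio:" */
        write_params("/dev/loop0");   /* prints "params = aio:/dev/loop0" */
        return 0;
    }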
Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Jan Beulich [Tue, 4 Apr 2017 12:55:00 +0000 (14:55 +0200)]
memory: properly check guest memory ranges in XENMEM_exchange handling
The use of guest_handle_okay() here (as introduced by the XSA-29 fix)
is insufficient; guest_handle_subrange_okay() needs to be used
instead.
Note that the uses are okay in
- XENMEM_add_to_physmap_batch handling due to the size field being only
16 bits wide,
- livepatch_list() due to the limit of 1024 enforced on the
number-of-entries input (leaving aside the fact that this can be
called by a privileged domain only anyway),
- compat mode handling due to counts there being limited to 32 bits,
- everywhere else due to guest arrays being accessed sequentially from
index zero.
This is CVE-2017-7228 / XSA-212.
Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 938fd2586eb081bcbd694f4c1f09ae6a263b0d90
master date: 2017-04-04 14:47:46 +0200
Dario Faggioli [Fri, 31 Mar 2017 06:33:20 +0000 (08:33 +0200)]
xen: sched: don't call hooks of the wrong scheduler via VCPU2OP
Within context_saved(), we call the context_saved hook,
and we use VCPU2OP() to determine which scheduler it
belongs to. VCPU2OP uses DOM2OP, which uses d->cpupool,
which is NULL when d is the idle domain; in that case,
DOM2OP just returns ops, the scheduler of cpupool0.
Therefore, if:
- cpupool0's scheduler defines context_saved (like
Credit2 and RTDS do),
- we are not in cpupool0 (i.e., our scheduler is
not ops),
- we are context switching from idle,
we call VCPU2OP(idle_vcpu), which means
DOM2OP(idle->cpupool), which is ops.
Therefore, we both:
- check if context_saved is defined in the wrong
scheduler;
- if yes, call the wrong one.
When using Credit2 at boot, and also Credit2 in
the other cpupool, this is wrong but innocuous,
because it only involves the idle vcpus.
When using Credit2 at boot, and Credit1 in the
other cpupool, this is *totally* wrong, and
it's only by chance that it does not explode!
When using Credit2 and other schedulers I'm
developing, I hit the following assert (in
sched_credit2.c, on a CPU inside a cpupool that
does not use Credit2):
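A minimal sketch of the idea behind the fix (helper names and the per-CPU scheduler variable are assumptions, not quotes from the Xen tree): resolve idle vCPUs through the pCPU they are bound to, rather than through d->cpupool.

    /* Sketch only, not the actual Xen code. */
    static const struct scheduler *vcpu_scheduler(const struct vcpu *v)
    {
        if ( is_idle_vcpu(v) )
            /* Idle vCPUs have no cpupool: use the scheduler of their pCPU. */
            return per_cpu(scheduler, v->processor);

        return dom_scheduler(v->domain);   /* the usual DOM2OP path */
    }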
Jan Beulich [Fri, 31 Mar 2017 06:32:51 +0000 (08:32 +0200)]
x86/EFI: avoid Xen image when looking for module/kexec position
When booting straight from EFI, we don't further try to relocate Xen.
As a result, so far we also didn't avoid the area Xen uses when looking
for a location to put modules or the kexec area. Introduce a fake
module slot to deal with that without having to fiddle with a lot of
code.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: e22e1c47958a4778cd7baa3980f74e52f525ba28
master date: 2017-03-20 09:27:12 +0100
Jan Beulich [Fri, 31 Mar 2017 06:32:22 +0000 (08:32 +0200)]
x86/EFI: avoid IOMMU faults on [_end,__2M_rwdata_end)
Commit c9a4a1c419 ("x86/layout: Correct Xen's idea of its own memory
layout") didn't go far enough with the conversion, causing IOMMU faults
when memory from that range was handed to a domain. We must not make
this memory available for allocation (the change is benign to xen.gz at
this point in time).
Note that the change to tboot_shutdown() fixes another issue at the same
time: as it looks, the function so far skipped all memory below the Xen
image.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d522571a408a7dd21a06705f6dd51cdafd2db4fc
master date: 2017-03-20 09:25:36 +0100
Roger Pau Monné [Fri, 31 Mar 2017 06:31:14 +0000 (08:31 +0200)]
build/clang: fix XSM dummy policy when using clang 4.0
There seems to be some weird bug in clang 4.0 that prevents xsm_pmu_op from
working as expected, and vpmu.o ends up with a reference to
__xsm_action_mismatch_detected which makes the build fail:
[...]
ld -melf_x86_64_fbsd -T xen.lds -N prelink.o \
xen/common/symbols-dummy.o -o xen/.xen-syms.0
prelink.o: In function `xsm_default_action':
xen/include/xsm/dummy.h:80: undefined reference to `__xsm_action_mismatch_detected'
xen/xen/include/xsm/dummy.h:80: relocation truncated to fit: R_X86_64_PC32 against undefined symbol `__xsm_action_mismatch_detected'
ld: xen/xen/.xen-syms.0: hidden symbol `__xsm_action_mismatch_detected' isn't defined
The current patch is the only way I've found to fix this so far, by simply
moving the XSM_PRIV check into the default case in xsm_pmu_op. This also fixes
the behavior of do_xenpmu_op, which will now return -EINVAL for unknown
XENPMU_* operations, instead of -EPERM when called by a privileged domain.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: 9e4d116faff4545a7f21c2b01008e94d68e6db58
master date: 2017-03-14 18:19:29 +0100
Roger Pau Monné [Fri, 31 Mar 2017 06:28:49 +0000 (08:28 +0200)]
x86: drop unneeded __packed attributes
There were a couple of unneeded packed attributes in several x86-specific
structures, that are obviously aligned. The only non-trivial one is
vmcb_struct, which has been checked to have the same layout with and without
the packed attribute using pahole. In that case add a build-time size check to
be on the safe side.
No functional change is expected as a result of this commit.
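Such a build-time size check could look like the sketch below (generic structure and size used as placeholders; the real check would be against vmcb_struct and its architectural size):

    /* Sketch only: a compile-time assertion that a structure keeps the
     * expected size after dropping __packed. Structure and size below are
     * placeholders, not the real vmcb_struct. Size assumes an LP64 ABI. */
    #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

    struct example_layout {
        unsigned int  field_a;      /* 4 bytes, 4 bytes of padding follow */
        unsigned long field_b;      /* 8 bytes, naturally aligned */
    };

    static inline void example_layout_check(void)
    {
        BUILD_BUG_ON(sizeof(struct example_layout) != 16);
    }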
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
master commit: 4036e7c592905c2292cdeba8269e969959427237
master date: 2017-03-07 17:11:06 +0100
This panic was triggered by the BUG() in branch_insn_requires_update().
That's because in this case the alternative patching needs to update the
offset of the branch instruction, but the new target address of the branch
instruction could not pass the check of is_active_kernel_text().
The reason is as follows: when Xen is booting, it calls apply_alternatives_all
to do patching with the alternative tables. In this process, we should update
the offset of branch instructions if required. This means we should modify
the Xen text section. But the Xen text section is marked as read-only and we
configure the hardware to not allow a region to be writable and executable at
the same time. So we re-map Xen in a temporary area for writing. In this case,
the calculation of the new target address of the branch instruction is based
on this re-mapped area, and the new target address will point to a value in the
re-mapped area. But we haven't registered this area as active kernel text,
so the check of is_active_kernel_text() will always return false.
We have to register the re-mapped Xen area as a virtual region temporarily to
solve this problem.
We don't need a lock in vgic_get_target_vcpu anymore, solving the
following lock inversion bug: the rank lock should be taken first, then
the vgic lock. However, gic_update_one_lr is called with the vgic lock
held, and it calls vgic_get_target_vcpu, which tries to obtain the rank
lock.
Julien Grall [Wed, 8 Mar 2017 18:06:02 +0000 (18:06 +0000)]
xen/arm: p2m: Perform local TLB invalidation on vCPU migration
The ARM architecture allows an OS to have per-CPU page tables, as it
guarantees that TLBs never migrate from one CPU to another.
This works fine until this is done in a guest. Consider the following
scenario:
- vcpu-0 maps P to V
- vcpu-1 maps P' to V
If run on the same physical CPU, vcpu-1 can hit in TLBs generated by
vcpu-0 accesses, and access the wrong physical page.
The solution to this is to keep a per-p2m map of which vCPU ran last
on each pCPU, and invalidate local TLBs when two vCPUs from the same
VM run on the same pCPU.
Unfortunately it is not possible to allocate per-cpu variables on the
fly. So for now the size of the array is NR_CPUS; this is fine because
we still have space in the domain structure. We may want to add a
helper to allocate per-cpu variables in the future.
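A rough sketch of the scheme (field and function names are assumptions, not the actual Xen code):

    /* Sketch only: per-p2m bookkeeping of which vCPU of this VM last ran
     * on each pCPU; flush the local TLB when a different vCPU of the same
     * VM is scheduled onto that pCPU. */
    #define NR_CPUS 128

    struct p2m_sketch {
        int last_vcpu_ran[NR_CPUS];          /* initialise entries to -1 */
    };

    static void flush_guest_tlb_local(void); /* stand-in for the real flush */

    static void vcpu_switched_in(struct p2m_sketch *p2m,
                                 unsigned int cpu, int vcpu_id)
    {
        if ( p2m->last_vcpu_ran[cpu] != -1 &&
             p2m->last_vcpu_ran[cpu] != vcpu_id )
            flush_guest_tlb_local();         /* stale entries of another vCPU */

        p2m->last_vcpu_ran[cpu] = vcpu_id;
    }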
xen/arm: acpi: Relax hw domain mapping attributes to p2m_mmio_direct_c
Since the hardware domain is a trusted domain, we extend the
trust to include making final decisions on what attributes to
use when mapping memory regions.
For ACPI configured hardware domains, this patch relaxes the hardware
domain's mapping attributes to p2m_mmio_direct_c. This will allow the
hardware domain to control the attributes via its S1 mappings.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Acked-by: Julien Grall <julien.grall@arm.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
xen/arm: dt: Relax hw domain mapping attributes to p2m_mmio_direct_c
Since the hardware domain is a trusted domain, we extend the
trust to include making final decisions on what attributes to
use when mapping memory regions.
For device-tree configured hardware domains, this patch relaxes
the hardware domain's mapping attributes to p2m_mmio_direct_c.
This will allow the hardware domain to control the attributes
via its S1 mappings.
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tamas K Lengyel [Fri, 27 Jan 2017 18:25:23 +0000 (11:25 -0700)]
xen/arm: flush icache as well when XEN_DOMCTL_cacheflush is issued
When the toolstack modifies memory of a running ARM VM it may happen
that the underlying memory of a current vCPU PC is changed. Without
flushing the icache the vCPU may continue executing stale instructions.
Also expose xc_domain_cacheflush through xenctrl.h.
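A hedged usage sketch of the newly exposed call (the exact prototype is assumed from libxc, not quoted):

    /* Sketch only: after the toolstack writes into a running ARM guest's
     * memory, ask Xen to flush caches (now including the icache) for the
     * touched pfn range. The prototype below is an assumption. */
    #include <xenctrl.h>

    static int flush_modified_range(xc_interface *xch, uint32_t domid,
                                    xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
    {
        return xc_domain_cacheflush(xch, domid, start_pfn, nr_pfns);
    }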
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Julien Grall [Mon, 5 Dec 2016 17:43:23 +0000 (17:43 +0000)]
xen/arm: traps: Emulate ICC_SRE_EL1 as RAZ/WI
Recent Linux kernels (4.4 and onwards [1]) check whether it is possible
to enable sysreg access (ICC_SRE_EL1.SRE) when the ID register
(ID_AA64PFR0_EL1.GIC) reports the presence of the sysreg interface.
When the guest has been configured to use GICv2, the hypervisor will
disable sysreg access for this VM (via ICC_SRE_EL2.Enable) and therefore
accesses to system registers such as ICC_SRE_EL1 are trapped to EL2.
However, ICC_SRE_EL1 is not emulated by the hypervisor, which means that
Linux will crash as soon as it tries to access ICC_SRE_EL1.
To solve this problem, Xen can implement ICC_SRE_EL1 as read-as-zero/
write-ignore. The emulation is only used when sysreg access is disabled
for EL1.
[1] 963fcd409 "arm64: cpufeatures: Check ICC_EL1_SRE.SRE before
enabling ARM64_HAS_SYSREG_GIC_CPUIF"
arm/irq: Reorder check when the IRQ is already used by someone
Call irq_get_domain for the IRQ we are interested in
only after making sure that it is a guest IRQ, to avoid
triggering ASSERT(test_bit(_IRQ_GUEST, &desc->status)).
Jun Sun [Mon, 10 Oct 2016 19:27:56 +0000 (12:27 -0700)]
Don't clear HCR_VM bit when updating VTTBR.
Currently p2m_restore_state() clears the HCR_VM bit, i.e. disables
stage-2 translation, before updating the VTTBR register. After
some research and talking to ARM support, it was confirmed that this is not
necessary. We are currently working on a new platform that needs this
to be removed.
The patch is tested on FVP foundation model.
Signed-off-by: Jun Sun <jsun@junsun.net> Acked-by: Steve Capper <steve.capper@linaro.org> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Dario Faggioli [Tue, 14 Mar 2017 11:42:19 +0000 (12:42 +0100)]
xen: credit2: don't miss accounting while doing a credit reset.
A credit reset basically means going through all the
vCPUs of a runqueue and altering their credits, as a
consequence of a 'scheduling epoch' having come to an
end.
Blocked or runnable vCPUs are fine: all the credits
they've spent running so far have been accounted to
them when they were scheduled out.
But if a vCPU is running on a pCPU when a reset event
occurs (on another pCPU), its run time does not get properly
accounted. Let's therefore begin to do so, for better
accuracy and fairness.
In fact, after this patch, we see this in a trace:
Which shows how d1v5 actually executed for ~9.796 ms,
on pCPU 10, when reset_credit() is executed, on pCPU
12, because of d1v6's credits going below 0.
Without this patch, these 9.796 ms are not accounted
to anyone. With this patch, d1v5 is charged for that,
and its credits drop down from 9796548 to 201805.
And this is important, as it means that it will
begin the new epoch with 10201805 credits, instead
of 10500000 (which it would have had before this patch).
Basically, we were forgetting one round of accounting
in epoch x, for the vCPUs that are running at the time
the epoch ends. And this meant favouring these same
vCPUs a little bit in epoch x+1, giving them the chance
to execute longer than their fair share.
Dario Faggioli [Tue, 14 Mar 2017 11:41:54 +0000 (12:41 +0100)]
xen: credit2: always mark a tickled pCPU as... tickled!
In fact, whether or not a pCPU has been tickled, and is
therefore about to re-schedule, is something we look at
and base decisions on in various places.
So, let's make sure that we do that based on accurate
information.
While there, also tweak smt_idle_mask_clear() a little bit
(used for implementing SMT support), so that it only
alters the relevant cpumask when there is an actual need
for it. (This is only to reduce overhead; behavior remains
the same.)
Andrew Cooper [Tue, 14 Mar 2017 11:41:21 +0000 (12:41 +0100)]
x86/layout: Correct Xen's idea of its own memory layout
c/s b4cd59fe "x86: reorder .data and .init when linking" had an unintended
side effect, where xen_in_range() and the tboot S3 MAC were no longer correct.
In practice, it means that Xen's .data section is excluded from consideration,
which means:
1) Default IOMMU construction for the hardware domain could create mappings covering Xen's .data section.
2) .data isn't included in the tboot MAC checked on resume from S3.
Adjust the comments and virtual address anchors used to define the regions.
Reported-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: c9a4a1c419cebac83a8fb60c4532ad8ccc973dc4
master date: 2017-02-28 16:18:38 +0000
Andrew Cooper [Tue, 14 Mar 2017 11:40:36 +0000 (12:40 +0100)]
x86/vmx: Don't leak host syscall MSR state into HVM guests
hvm_hw_cpu->msr_flags is in fact the VMX dirty bitmap of MSRs needing to be
restored when switching into guest context. It should never have been part of
the migration state to start with, and Xen must not make any decisions based
on the value seen during restore.
Identify it as obsolete in the header files, consistently save it as zero and
ignore it on restore.
The MSRs must be considered dirty during VMCS creation to cause the proper
defaults of 0 to be visible to the guest.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: 2f1add6e1c8789d979daaafa3d80ddc1bc375783
master date: 2017-02-21 11:06:39 +0000
xen/arm: fix affected memory range by dcache clean functions
clean_dcache_va_range and clean_and_invalidate_dcache_va_range don't
calculate the range correctly when "end" is not cacheline aligned. As a
result, the last cacheline can be skipped. Fix the issue by aligning the
start address to the cacheline size.
In addition, make the code simpler and faster in
invalidate_dcache_va_range, by removing the modulo operation and using
bitmasks instead. Also remove the size adjustments in
invalidate_dcache_va_range, because the size variable is not used later
on.
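The effect of the alignment can be illustrated with a small standalone program (a toy model, not the Xen implementation):

    #include <stdio.h>
    #include <stdint.h>

    #define CACHELINE_BYTES 64UL

    /* Toy model: count how many cachelines a loop stepping in cacheline
     * increments visits. Aligning the start down (as the fix does) covers
     * every line the range touches. */
    static unsigned long lines_visited(uintptr_t start, unsigned long size,
                                       int align_start)
    {
        uintptr_t p = align_start ? (start & ~(CACHELINE_BYTES - 1)) : start;
        uintptr_t end = start + size;
        unsigned long n = 0;

        for ( ; p < end; p += CACHELINE_BYTES )
            n++;                              /* would clean the line at p */

        return n;
    }

    int main(void)
    {
        /* 0x10..0x4f spans two cachelines (0x00 and 0x40). */
        printf("unaligned start: %lu line(s)\n", lines_visited(0x10, 0x40, 0));
        printf("aligned start:   %lu line(s)\n", lines_visited(0x10, 0x40, 1));
        return 0;
    }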
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a new Xen command line parameter called "vwfi", which stands for
virtual wfi. The default is "trap": Xen traps guest wfi and wfe
instructions. In the case of wfi, Xen calls vcpu_block on the guest
vcpu; in the case of guest wfe, Xen calls vcpu_yield on the guest vcpu.
The behavior can be changed by setting vwfi to "native"; in that case
Xen traps neither wfi nor wfe, running them in guest context.
The result is a strong reduction in IRQ latency (from 5000ns to 2000ns,
measured using https://github.com/edgarigl/tbm, the physical timer, and
1 pcpu dedicated to 1 vcpu). The downside is that the scheduler thinks
that the guest is busy when it is actually sleeping, leading to suboptimal
scheduling decisions.
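For example, per the description above, the trapping can be disabled by booting Xen with:

    vwfi=native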
Julien Grall [Fri, 24 Feb 2017 09:01:59 +0000 (10:01 +0100)]
arm/p2m: remove the page from p2m->pages list before freeing it
The p2m code is using the page list field to link all the pages used
for the stage-2 page tables. The page is added into the p2m->pages
list just after the allocation but never removed from the list.
The page list field is also used by the allocator, so not removing the
page may result in a later Xen crash due to inconsistency (see [1]).
This bug was introduced by the reworking of p2m code in commit 2ef3e36ec7
"xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry".
Jan Beulich [Mon, 20 Feb 2017 14:58:02 +0000 (15:58 +0100)]
VMX: fix VMCS race on context-switch paths
When __context_switch() is being bypassed during original context
switch handling, the vCPU "owning" the VMCS partially loses control of
it: It will appear non-running to remote CPUs, and hence their attempt
to pause the owning vCPU will have no effect on it (as it already
looks to be paused). At the same time the "owning" CPU will re-enable
interrupts eventually (at the latest when entering the idle loop) and
hence becomes subject to IPIs from other CPUs requesting access to the
VMCS. As a result, when __context_switch() finally gets run, the CPU
may no longer have the VMCS loaded, and hence any accesses to it would
fail. Hence we may need to re-load the VMCS in vmx_ctxt_switch_from().
For consistency use the new function also in vmx_do_resume(), to
avoid leaving an open-coded incarnation of it around.
Reported-by: Kevin Mayer <Kevin.Mayer@gdata.de> Reported-by: Anshul Makkar <anshul.makkar@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
master commit: 2f4d2198a9b3ba94c959330b5c94fe95917c364c
master date: 2017-02-17 15:49:56 +0100
George Dunlap [Mon, 20 Feb 2017 14:57:37 +0000 (15:57 +0100)]
xen/p2m: Fix p2m_flush_table for non-nested cases
Commit 71bb7304e7a7a35ea6df4b0cedebc35028e4c159 added flushing of
nested p2m tables whenever the host p2m table changed. Unfortunately
in the process, it added a filter to the p2m_flush_table() function so
that the p2m would only be flushed if it was being used as a nested
p2m. This meant that the p2m was not being flushed at all for altp2m
callers.
Only check np2m_base if the p2m's class is that of a nested p2m.
NB that this is not a security issue: The only time this codepath is
called is in cases where either nestedp2m or altp2m is enabled, and
neither of them are in security support.
Reported-by: Matt Leinhos <matt@starlab.io> Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org> Tested-by: Tamas K Lengyel <tamas@tklengyel.com>
master commit: 6192e6378e094094906950120470a621d5b2977c
master date: 2017-02-15 17:15:56 +0000
David Woodhouse [Mon, 20 Feb 2017 14:56:48 +0000 (15:56 +0100)]
x86/ept: allow write-combining on !mfn_valid() MMIO mappings again
For some MMIO regions, such as those high above RAM, mfn_valid() will
return false.
Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
consistent cachability of MMIO mappings"), guests have no longer been
able to use PAT to obtain write-combining on such regions because the
'ignore PAT' bit is set in EPT.
We probably want to err on the side of caution and preserve that
behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
mappings. That necessitates a slight refactoring to check mfn_valid()
later, and let the MMIO case get through to the right code path.
Since we're not bailing out for !mfn_valid() immediately, the range
checks need to be adjusted to cope: simply by masking in the low bits
to account for 'order' instead of adding, to avoid overflow when the mfn
is INVALID_MFN (which happens on unmap, since we carefully call this
function to fill in the EMT even though the PTE won't be valid).
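The overflow concern can be seen in isolation with a small standalone program (INVALID_MFN is modelled here as an all-ones value; names are illustrative, not Xen's):

    #include <stdio.h>

    #define INVALID_MFN_MODEL (~0UL)

    /* Toy illustration: with an all-ones mfn, adding (1UL << order) - 1
     * wraps around to a small value, while OR-ing in the low bits keeps
     * the value at the top of the range. */
    int main(void)
    {
        unsigned long mfn = INVALID_MFN_MODEL;
        unsigned int order = 9;                 /* a 2MB superpage */

        printf("add: %#lx\n", mfn + (1UL << order) - 1);   /* wraps */
        printf("or:  %#lx\n", mfn | ((1UL << order) - 1)); /* stays ~0UL */
        return 0;
    }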
The range checks are also slightly refactored to put only one of them in
the fast path in the common case. If it doesn't overlap, then it
*definitely* isn't contained, so we don't need both checks. And if it
overlaps and is only one page, then it definitely *is* contained.
Finally, add a comment clarifying how that 'return -1' works: it isn't
returning an error and causing the mapping to fail; it relies on
resolve_misconfig() being able to split the mapping later. So it's
*only* sane to do it where order>0 and the 'problem' will be solved by
splitting the large page. Not for blindly returning 'error', which I was
tempted to do in my first attempt.
Signed-off-by: David Woodhouse <dwmw@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: 30921dc2df3665ca1b2593595aa6725ff013d386
master date: 2017-02-07 14:30:01 +0100
There is a possible scenario in which (d)->need_iommu remains unset
during guest domain execution, for example when no devices
were assigned to it. Taking into account that the teardown callback
is not called when (d)->need_iommu is unset, we might have unreleased
resources after destroying the domain.
So, always call the teardown callback to roll back actions
that were performed in the init callback.
This is XSA-207.
Signed-off-by: Oleksandr Tyshchenko <olekstysh@gmail.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Jan Beulich <jbeulich@suse.com> Tested-by: Julien Grall <julien.grall@arm.com>
George Dunlap [Thu, 9 Feb 2017 09:25:58 +0000 (10:25 +0100)]
x86/emulate: don't assume that addr_size == 32 implies protected mode
Callers of x86_emulate() generally define addr_size based on the code
segment. In vm86 mode, the code segment is set by the hardware to be
16-bits; but it is entirely possible to enable protected mode, set the
CS to 32-bits, and then disable protected mode. (This is commonly
called "unreal mode".)
But the instruction decoder only checks for protected mode when
addr_size == 16. So in unreal mode, hardware will throw a #UD for VEX
prefixes, but our instruction decoder will decode them, triggering an
ASSERT() further on in _get_fpu(). (With debug=n the emulator will
incorrectly emulate the instruction rather than throwing a #UD, but
this is only a bug, not a crash, so it's not a security issue.)
Teach the instruction decoder to check that we're in protected mode,
even if addr_size is 32.
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Split real mode and VM86 mode handling, as VM86 mode is strictly 16-bit
at all times. Re-base.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 05118b1596ffe4559549edbb28bd0124a7316123
master date: 2017-01-25 15:09:55 +0100
Dario Faggioli [Thu, 9 Feb 2017 09:25:33 +0000 (10:25 +0100)]
xen: credit2: fix shutdown/suspend when playing with cpupools.
In fact, during shutdown/suspend, we temporarily move all
the vCPUs to the BSP (i.e., pCPU 0, as of now). For Credit2
domains, we call csched2_vcpu_migrate(), which expects to find the
target pCPU in the domain's pool.
Therefore, if Credit2 is the default scheduler and we have
removed pCPU 0 from cpupool0, shutdown/suspend fails like
this:
****************************************
Panic on CPU 8:
Assertion 'svc->vcpu->processor < nr_cpu_ids' failed at sched_credit2.c:1729
****************************************
On the other hand, if Credit2 is the scheduler of another
pool, when trying (still during shutdown/suspend) to move
the vCPUs of the Credit2 domains to pCPU 0, it figures
out that pCPU 0 is not a Credit2 pCPU, and fails like this:
The solution is to recognise this specific situation inside
csched2_vcpu_migrate() and, considering it is something temporary
that only happens during shutdown/suspend, quickly deal with it.
Then, in the resume path, in restore_vcpu_affinity(), things
are set back to normal, and a new v->processor is chosen, for
each vCPU, from the proper set of pCPUs (i.e., the ones of
the proper cpupool).
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
xen: credit2: non Credit2 pCPUs are ok during shutdown/suspend.
Commit 7478ebe1602e6 ("xen: credit2: fix shutdown/suspend
when playing with cpupools"), while doing the right thing
for actual code, forgot to update the ASSERT()s accordingly,
in csched2_vcpu_migrate().
In fact, as stated there already, during shutdown/suspend,
we must allow a Credit2 vCPU to temporarily migrate to a
non Credit2 BSP, without any ASSERT() triggering.
Move them down, after the check for whether or not we are
shutting down, where the assumption that the pCPUs are
valid Credit2 ones does hold.
Dario Faggioli [Thu, 9 Feb 2017 09:24:56 +0000 (10:24 +0100)]
xen: credit2: never consider CPUs outside of our cpupool.
In fact, relying on the mask of what pCPUs belong to
which Credit2 runqueue is not enough. If we only do that,
when Credit2 is the boot scheduler, we may ASSERT() or
panic when moving a pCPU from Pool-0 to another cpupool.
This is because pCPUs outside of any pool are considered
part of cpupool0. This puts us at risk of crash when those
same pCPUs are added to another pool and something
different than the idle domain is found to be running
on them.
Note that, even if we prevent the above to happen (which
is the purpose of this patch), this is still pretty bad,
in fact, when we remove a pCPU from Pool-0:
- in Credit1, as we do *not* update prv->ncpus and
prv->credit, which means we're considering the wrong
total credits when doing accounting;
- in Credit2, the pCPU remains part of one runqueue,
and is hence at least considered during load balancing,
even if no vCPU should really run there.
In Credit1, this "only" causes skewed accounting and
no crashes because there is a lot of `cpumask_and`ing
going on with the cpumask of the domains' cpupool
(which, BTW, comes at a price).
A quick and not too involved (and easily backportable)
solution for Credit2 is to do exactly the same.
Dario Faggioli [Thu, 9 Feb 2017 09:24:32 +0000 (10:24 +0100)]
xen: credit2: use the correct scratch cpumask.
In fact, there is one scratch mask per CPU. When
you use the one of a given CPU, it must be true that:
- the CPU belongs to your cpupool and scheduler,
- you own the runqueue lock (the one you take via
{v,p}cpu_schedule_lock()) for that CPU.
This was not the case within the following functions:
get_fallback_cpu(), csched2_cpu_pick(): as we can't be
sure we either are on, or hold the lock for, the CPU
that is in the vCPU's 'v->processor'.
migrate(): it's ok, when called from balance_load(),
because that comes from csched2_schedule(), which takes
the runqueue lock of the CPU where it executes. But it is
not ok when we come from csched2_vcpu_migrate(), which
can be called from other places.
The fix is to explicitly use the scratch space of the
CPUs for which we know we hold the runqueue lock.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Reported-by: Jan Beulich <JBeulich@suse.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: 548db8742872399936a2090cbcdfd5e1b34fcbcc
master date: 2017-01-24 17:02:07 +0000
Joao Martins [Thu, 9 Feb 2017 09:23:52 +0000 (10:23 +0100)]
x86/hvm: do not set msr_tsc_adjust on hvm_set_guest_tsc_fixed
Commit 6e03363 ("x86: Implement TSC adjust feature for HVM guest")
implemented the TSC_ADJUST MSR for HVM guests. However, while booting
an HVM guest the boot CPU would have a value set to delta_tsc -
guest tsc while secondary CPUs would have 0. For example one can
observe:
$ xen-hvmctx 17 | grep tsc_adjust
TSC_ADJUST: tsc_adjust ff9377dfef47fe66
TSC_ADJUST: tsc_adjust 0
TSC_ADJUST: tsc_adjust 0
TSC_ADJUST: tsc_adjust 0
Upcoming Linux 4.10 now validates whether this MSR is correct and
adjusts it accordingly under the following conditions: values < 0
(our case for CPU 0), != 0, or > 7FFFFFFF. Under these conditions it
will force the value to 0, and do the same for CPUs whose values don't
all match. If this MSR is not correct we would see messages such as:
And on HVM guests supporting TSC_ADJUST (requiring at least Intel
Haswell) it won't boot.
Our current vCPU 0 values are incorrect, and the Intel SDM, in the
section "Time-Stamp Counter Adjustment", states that "On RESET, the value
of the IA32_TSC_ADJUST MSR is 0." Hence we should set it to 0 and be
consistent across multiple vCPUs.
changed by the guest which already happens through
hvm_set_guest_tsc_adjust(..) routines (see below). After this patch
guests running Linux 4.10 will see a valid IA32_TSC_ADJUST msr of value
0 for all CPUs and are able to boot.
On the same section of the spec ("Time-Stamp Counter Adjustment") it is
also stated:
"If an execution of WRMSR to the IA32_TIME_STAMP_COUNTER MSR
adds (or subtracts) value X from the TSC, the logical processor also
adds (or subtracts) value X from the IA32_TSC_ADJUST MSR.
Unlike the TSC, the value of the IA32_TSC_ADJUST MSR changes only in
response to WRMSR (either to the MSR itself, or to the
IA32_TIME_STAMP_COUNTER MSR). Its value does not otherwise change as
time elapses. Software seeking to adjust the TSC can do so by using
WRMSR to write the same value to the IA32_TSC_ADJUST MSR on each logical
processor."
This suggests these MSR values should only be changed by the guest, i.e.
through write-intercepted MSRs. We keep the IA32_TSC MSR logic such that
writes accommodate adjustments to TSC_ADJUST, hence no functional change
in msr_tsc_adjust for the IA32_TSC MSR. Though, we do that in a separate
routine, namely hvm_set_guest_tsc_msr, instead of through
hvm_set_guest_tsc(...).
Jan Beulich [Thu, 9 Feb 2017 09:22:55 +0000 (10:22 +0100)]
x86: segment attribute handling adjustments
Null selector loads into SS (possible in 64-bit mode only, and only in
rings other than ring 3) must not alter SS.DPL. (This was found to be
an issue on KVM, and fixed in Linux commit 33ab91103b.)
Further arch_set_info_hvm_guest() didn't make sure that the ASSERT()s
in hvm_set_segment_register() wouldn't trigger: Add further checks, but
tolerate (adjust) clear accessed (CS, SS, DS, ES) and busy (TR) bits.
Finally the setting of the accessed bits for user segments was lost by
commit dd5c85e312 ("x86/hvm: Reposition the modification of raw segment
data from the VMCB/VMCS"), yet VMX requires them to be set for usable
segments. Add respective ASSERT()s (the only path not properly setting
them was arch_set_info_hvm_guest()).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 366ff5f1b3252f9069d5aedb2ffc2567bb0a37c9
master date: 2017-01-20 14:39:12 +0100
Jan Beulich [Thu, 9 Feb 2017 09:22:28 +0000 (10:22 +0100)]
x86emul: LOCK check adjustments
BT, being encoded as DstBitBase just like BT{C,R,S}, nevertheless does
not write its (register or memory) operand and hence also doesn't allow
a LOCK prefix to be used.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: f2d4f4ba80de8a03a1b0f300d271715a88a8433d
master date: 2017-01-20 14:37:33 +0100
Jan Beulich [Thu, 9 Feb 2017 09:21:50 +0000 (10:21 +0100)]
x86emul: VEX.B is ignored in compatibility mode
While VEX.R and VEX.X are guaranteed to be 1 in compatibility mode
(and hence a respective mode_64bit() check can be dropped), VEX.B can
be encoded as zero, but would be ignored by the processor. Since we
emulate instructions in 64-bit mode (except possibly in the test
harness), we need to force the bit to 1 in order to not act on the
wrong {X,Y,Z}MM register (which has no bad effect on 32-bit test
harness builds, as there the bit would again be ignored by the
hardware, and would by default be expected to be 1 anyway).
We must not, however, fiddle with the high bit of VEX.VVVV in the
decode phase, as that would undermine the checking of instructions
requiring the field to be all ones independent of mode. This is
being enforced in copy_REX_VEX() instead.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86emul: correct VEX/XOP/EVEX operand size handling for 16-bit code
Operand size defaults to 32 bits in that case, but would not have been
set that way in the absence of an operand size override.
Reported-by: Wei Liu <wei.liu2@citrix.com> (by AFL fuzzing) Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 89c76ee7f60777b81c8fd0475a6af7c84e72a791
master date: 2017-01-17 10:32:25 +0100
master commit: beb82042447c5d6e7073d816d6afc25c5a423cde
master date: 2017-01-25 15:08:59 +0100
Andrew Cooper [Thu, 9 Feb 2017 09:20:45 +0000 (10:20 +0100)]
x86/xstate: Fix array overrun on hardware with LWP
c/s da62246e4c "x86/xsaves: enable xsaves/xrstors/xsavec in xen" introduced
setup_xstate_features() to allocate and fill xstate_offsets[] and
xstate_sizes[].
However, fls() casts xfeature_mask to 32bits which truncates LWP out of the
calculation. As a result, the arrays are allocated too short, and the cpuid
infrastructure reads off the end of them when calculating xstate_size for the
guest.
On one test system, this results in 0x3fec83c0 being returned as the maximum
size of an xsave area, which surprisingly appears not to bother Windows or
Linux too much. I suspect they both use the current size based on xcr0, which Xen
forwards from real hardware.
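The truncation can be reproduced with a small standalone program (fls64() below is a local helper, not the Xen one; LWP is xstate feature bit 62):

    #include <stdio.h>
    #include <stdint.h>

    /* Local helper: index (1-based) of the highest set bit. */
    static unsigned int fls64(uint64_t x)
    {
        unsigned int r = 0;

        while ( x )
        {
            r++;
            x >>= 1;
        }
        return r;
    }

    int main(void)
    {
        /* x87 + SSE + AVX plus LWP (bit 62). */
        uint64_t xfeature_mask = (1ULL << 62) | 0x7;

        printf("entries needed:   %u\n", fls64(xfeature_mask));           /* 63 */
        printf("after truncation: %u\n", fls64((uint32_t)xfeature_mask)); /* 3 */
        return 0;
    }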
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: fe0d67576e335c02becf1cea8e67005509fa90b6
master date: 2017-01-16 17:37:26 +0000
Tamas K Lengyel [Wed, 25 Jan 2017 16:12:01 +0000 (09:12 -0700)]
arm/p2m: Fix regression during domain shutdown with active mem_access
The change in commit 438c5fe4f0c introduced a regression for domains where
mem_access is or was active. When relinquish_p2m_mapping attempts to clear
a page where the order is not 0, the following ASSERT is triggered:
Wei Liu [Thu, 29 Dec 2016 16:36:31 +0000 (16:36 +0000)]
libxl: fix libxl_set_memory_target
Commit 26dbc93a ("libxl: Remove pointless hypercall from
libxl_set_memory_target") removed the call to xc_domain_getinfolist, but
it failed to notice that "info" was actually needed later.
Put that back. While at it, make the code conform to the coding style
requirements.
Julien Grall [Wed, 18 Jan 2017 18:54:08 +0000 (18:54 +0000)]
xen/arm: gic-v3: Make sure read from ICC_IAR1_EL1 is visible on the redistributor
"The effects of reading ICC_IAR0_EL1 and ICC_IAR1_EL1 on the state of a
returned INTID are not guaranteed to be visible until after the execution
of a DSB".
Because the GIC is an external component, a dsb sy is required.
Without it the sysreg read may not have been made visible on the
redistributor.
Andrew Cooper [Wed, 18 Jan 2017 08:51:53 +0000 (09:51 +0100)]
x86/emul: Correct the return value handling of VMFUNC
The bracketing of x86_emulate() calling the ops->vmfunc() hook is wrong with
respect to the assignment to rc, which can trip the new assertions in
x86_emulate_wrapper().
The hvmemul_vmfunc() hook should only raise #UD if X86EMUL_EXCEPTION is
returned. This is only a latent bug at the moment.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 3ab1876504d409689824e161a8b04e57e1e5dd46
master date: 2016-12-22 13:32:46 +0000
Jan Beulich [Wed, 18 Jan 2017 08:49:55 +0000 (09:49 +0100)]
x86/boot: fix build with certain older gcc versions
Despite all attempts so far (ending in commit fecf584294 ["Config.mk:
fix comment for debug option"] adjusting the respective comment),
Config.mk's debug= setting still affects the hypervisor build: CFLAGS
gets -g added there.
xen/arch/x86/boot/build32.mk includes that file, and hence inherits the
setting too. Some gcc versions take -g to create an .eh_frame section
despite -fno-asynchronous-unwind-tables (which instead one would expect
to produce .debug_frame).
In turn, commit 93c0c0287a ("x86/boot: create *.lnk files with linker
script") was - in my understanding - supposed to make sure .text is
first, but apparently it also did not really achieve that effect: Both
reloc.lnk and reloc.bin in the case here ended up with .eh_frame first,
which obviously rendered the whole final binary unusable.
Explicitly suppress generation of any kind of debug info when building
reloc.o.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 620b3c7eee78e90167f591877177c922ae619b92
master date: 2016-12-16 14:37:35 +0100
Jan Beulich [Wed, 18 Jan 2017 08:48:57 +0000 (09:48 +0100)]
VT-d: correct dma_msi_set_affinity()
Commit 83cd2038fe ("VT-d: use msi_compose_msg()") together with 15aa6c6748 ("amd iommu: use base platform MSI implementation"),
introducing the use of a per-CPU scratch CPU mask, went too far:
dma_msi_set_affinity() may, at least in theory, be called in
interrupt context, and hence the use of that scratch variable is not
correct.
Since the function overwrites the destination information anyway,
allow msi_compose_msg() to be called with a NULL CPU mask, avoiding
the use of that scratch variable.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 7f885a1f49a75c770360b030666a5c1545156e5c
master date: 2016-12-16 14:33:43 +0100
Jan Beulich [Wed, 18 Jan 2017 08:47:31 +0000 (09:47 +0100)]
x86emul: MOVNTI does not allow REP prefixes
Just like 66, prefixes F3 and F2 cause #UD.
Also adjust a related comment, which in its previous wording was
misleading (as in 16-bit mode there would be nothing to undo when
adjusting operand size from 2 to 4).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 96a7cb37b921d2b320183d194d143262e1dd5b53
master date: 2016-12-14 10:11:08 +0100
Luwei Kang [Wed, 18 Jan 2017 08:46:54 +0000 (09:46 +0100)]
x86/VPMU: clear the overflow status of which counter happened to overflow
Just set the corresponding bits of counters which happened to overflow,
rather than setting all the available bits of IA32_PERF_GLOBAL_OVF_CTRL
when a PMU interrupt happens.
Signed-off-by: Luwei Kang <luwei.kang@intel.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 7a0c70482580234868fcc53b8d72e31966dc7c52
master date: 2016-12-13 14:21:26 +0100
Paul Durrant [Wed, 18 Jan 2017 08:46:26 +0000 (09:46 +0100)]
x86/hvm: don't unconditionally create a default ioreq server
Avoid doing so if the domain is not under construction.
If upstream QEMU is in use then it will explicitly create an ioreq server
rather than implicitly creating the default ioreq server, which is a
side-effect of reading HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN,
or HVM_PARAM_BUFIOREQ_EVTCHN (as is done by legacy QEMUs).
However, if the domain is subsequently saved/migrated then those parameters
are read and hence the default server will be unnecessarily instantiated.
This patch adds an extra check of the 'creation_finished' flag when those
HVM params are read and will only instantiate the server if the domain is
under construction, which will always be the case when QEMU is invoked.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/hvm: Fix HVMOP_get_param when skipping creating the default ioreq server
c/s e7dabe5 "x86/hvm: don't unconditionally create a default ioreq server"
added a break statement, but the logic previously depended on falling through
into the default case to fill in the value the caller asked for.
This causes the sending migration code to put a junk PARAM into the stream,
and the receiving side to fail to zero the IOREQ pages, causing QEMU to object
when it finds stale requests while starting up.
Reorder the code so it more clearly falls through into the default case.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
master commit: e7dabe59c3239dc9ef9edbc49ed54f754616ebf7
master date: 2016-12-12 09:49:10 +0100
master commit: 451c9938c68ccb77ff94765f7ac47e8de51d3f43
master date: 2016-12-13 09:58:33 +0000
Jan Beulich [Wed, 18 Jan 2017 08:44:49 +0000 (09:44 +0100)]
x86emul: CMPXCHG{8,16}B ignore prefixes
This removes 0F C7 from the list of two-byte opcodes treating prefixes
66, F3, and F2 as opcode extensions. We'd better handle this manually in
the opcode-specific code:
- CMPXCHG8B ignores all these prefixes (its handling is being adjusted
accordingly, with a respective test case added as well, to avoid
re-introducing the subject of XSA-200),
- RDRAND/RDSEED (support to be added subsequently) honor 66, but treat
F3 and F2 as opcode extensions (resolving to RDPID in the RDSEED
case, which in turn ignores 66).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 54abe826c8297e12f805be2bcf318ef75cc7f58d
master date: 2016-12-14 10:08:22 +0100
Andrew Cooper [Wed, 18 Jan 2017 08:43:47 +0000 (09:43 +0100)]
xen: Fix determining when domain creation is complete
d->creation_finished is used in several places to alter behaviour depending on
whether the domain is being created, or is already running.
However, there is a latent bug if a toolstack component makes a pair of
pause/unpause calls, where creation will be considered finished prematurely.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Paul Durrant <paul.durrant@citrix.com>
master commit: 9d71e02e8420b5d4a48d92446a1edbff498ee1c6
master date: 2016-12-13 09:58:33 +0000
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
[ wei: fix up conflict ] Signed-off-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit 4d362ce02aaf1699957fb7c0edc6ae5839ccb30e)
Roger Pau Monne [Mon, 19 Dec 2016 15:02:03 +0000 (15:02 +0000)]
init/FreeBSD: fix xencommons so it can only be launched by Dom0
At the moment the execution of xencommons is gated on the presence of the
privcmd device, but that's not correct, since privcmd is available to all Xen
domains (privileged or unprivileged). Instead of using privcmd use the
xenstored device, which will only be available to the domain that's in charge
of running xenstored, and thus xencommons.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit c875b9778da0c56a0c118626771465b87df31fe8)
Roger Pau Monne [Mon, 19 Dec 2016 15:02:02 +0000 (15:02 +0000)]
init/FreeBSD: remove xendriverdomain_precmd
...because it's empty. While there also rename xendriverdomain_startcmd to
xendriverdomain_start in order to match the nomenclature of the file.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
[ wei: fix up minor error ] Signed-off-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit 29b968e46b215bea8881abdfd06a046417b83006)
Roger Pau Monne [Mon, 19 Dec 2016 15:02:01 +0000 (15:02 +0000)]
init/FreeBSD: set correct PATH for xl devd
FreeBSD init scripts don't have /usr/local/{bin,sbin} in their PATH, which
prevents `xl devd` from working properly since hotplug scripts require the set
of xenstore CLI tools to be in PATH.
While there, also fix the usage of --pidfile, which according to the xl help
doesn't use "=", and add braces around XLDEVD_PIDFILE.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
(cherry picked from commit 1d52073334d7615934fe804bc656b7aab0e92ebd)
Anshul Makkar [Mon, 12 Dec 2016 14:00:05 +0000 (14:00 +0000)]
xsm: allow relevant permission during migrate and gpu-passthrough.
During guest migration, allow the relevant permission to prevent
spurious page faults.
Prevents these errors:
d73: Non-privileged (73) attempt to map I/O space 00000000
GPU passthrough for hvm guest:
avc: denied { send_irq } for domid=0 target=10
scontext=system_u:system_r:dom0_t
tcontext=system_u:system_r:domU_t tclass=hvm
Signed-off-by: Anshul Makkar <anshul.makkar@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
(cherry picked from commit f04722f78b0f64e1f147389962d8f393a2fa8a7a)
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(cherry picked from commit 1361db0ed3ad1217bd039a3cac5df49a622e12a9)
AND:
set rc to 0 in init_acpi_config in success path
xc_domain_getinfo returns >= 0 on the success path, and if there is no vnode
configured, that rc will be returned to the caller, which indicates an error.
Fix that by setting rc to 0 on the success path.
Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Wei Liu <wei.liu2@citrix.com> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
(cherry picked from commit 08ccb46924385c833bd0da9e087fb6b96fa76849)
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Andrew Cooper [Thu, 22 Dec 2016 15:23:37 +0000 (16:23 +0100)]
x86/emul: add likely()/unlikely() to test harness
Fix a build problem introduced in c/s 122dd9575c7 "x86emul:
in_longmode() should not ignore ->read_msr() errors" by providing an
implementation of likely()/unlikely().
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Cherrypicked out of:
Jan Beulich [Wed, 21 Dec 2016 16:40:37 +0000 (17:40 +0100)]
x86: force EFLAGS.IF on when exiting to PV guests
Guest kernels modifying instructions in the process of being emulated
for another of their vCPUs may cause EFLAGS.IF to be cleared upon
next exiting to guest context, by converting the instruction being
emulated to CLI (at the right point in time). Prevent any such bad
effects by always forcing EFLAGS.IF on. And to cover hypothetical other
similar issues, also force EFLAGS.{IOPL,NT,VM} to zero.
This is CVE-2016-10024 / XSA-202.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 0e47f92b072548800223f9a21ea051a017173915
master date: 2016-12-21 16:46:13 +0100
Dario Faggioli [Tue, 29 Nov 2016 15:01:03 +0000 (16:01 +0100)]
credit2: make runqueues be per-socket by default
Benchmarks have shown that a per-socket runqueue arrangement
behaves better (e.g., we achieve better load balancing)
than the current per-core default.
Here's an example (coming from
https://lists.xen.org/archives/html/xen-devel/2016-06/msg02287.html ):
|=======================================|
| XEN BUILD TIME, LOW LOAD, NO NOISE    |
|---------------------------------------|
|     runq=core       runq=socket       |
|      35.200          33.433           |
|---------------------------------------|------------------------------|
| XEN BUILD TIME, HIGH LOAD, NO NOISE   | IPERF, HIGH LOAD, NO NOISE   |
|---------------------------------------|------------------------------|
|     runq=core       runq=socket       |   runq=core    runq=socket   |
|      18.013          18.530           |    23.200       23.466       |
|---------------------------------------|------------------------------|
| XEN BUILD TIME, LOW LOAD, WITH NOISE  |
|---------------------------------------|
|     runq=core       runq=socket       |
|      45.866          39.493           |
|---------------------------------------|------------------------------|
| XEN BUILD TIME, HIGH LOAD, WITH NOISE | IPERF, HIGH LOAD, WITH NOISE |
|---------------------------------------|------------------------------|
|     runq=core       runq=socket       |   runq=core    runq=socket   |
|      36.840          29.080           |    19.967       21.000       |
|=======================================|==============================|
The only reason why we went for per-core, initially, was to
introduce some form of hyperthreading support. Now we have
hyperthreading support, independently of how runqueues
are organized (9bb9c7388 "xen: credit2: implement true SMT
support"), and thus we can switch to per-socket.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Acked-by: George Dunlap <george.dunlap@eu.citrix.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com>
Julien Grall [Tue, 29 Nov 2016 15:00:48 +0000 (16:00 +0100)]
libacpi: fix compilation when cross building the tools
The tools (such as mk_dsdt) can be cross-built when it may not be
desirable to build them on the target.
The commit c4ac1077 "libxl/arm: Generate static ACPI DSDT table"
introduced support for ARM64 in mk_dsdt but also broke cross-building the
tools because the generated ACPI tables are not correct.
While mk_dsdt should generate the ACPI table for the target architecture, it
currently generates the one for the host. This is because the source
code contains references to the host architecture (__aarch64__,
__x86_64__, __i386__) when it should reference the target architecture.
Replace all __aarch64__, __x86_64__, __i386__ by the corresponding
CONFIG_*.
Also expose the CONFIG_* to the source code, as they are currently only
exposed to the Makefile.
Reported-by: Andrii Anisov <andrii.anisov@gmail.com> Suggested-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com>
Wei Chen [Tue, 29 Nov 2016 14:59:55 +0000 (15:59 +0100)]
arm32: handle async aborts delivered while at HYP
If a guest generates an asynchronous abort and then traps into HYP
(by HVC or IRQ) before the abort has been delivered, the hypervisor
cannot catch it, because the PSTATE.A bit is masked all the time
in the hypervisor. So this asynchronous abort may slip into the next
running guest with the PSTATE.A bit unmasked.
In order to avoid this, it is necessary to take the abort at HYP, by
clearing the PSTATE.A bit. In this patch, we unmask the PSTATE.A bit
to open a window to catch guest-generated asynchronous aborts in all
Guest -> HYP switch paths. If such an asynchronous abort is caught in
the checking window, the HYP data abort exception will be triggered and
the guest that generated the abort will be crashed.
Wei Chen [Tue, 29 Nov 2016 14:58:57 +0000 (15:58 +0100)]
arm64: handle async aborts delivered while at EL2
If EL1 generates an asynchronous abort and then traps into EL2
(by HVC or IRQ) before the abort has been delivered, the hypervisor
cannot catch it, because the PSTATE.A bit is masked all the time
in the hypervisor. So this asynchronous abort may slip into the next
running guest with the PSTATE.A bit unmasked.
In order to avoid this, it is necessary to take the abort at EL2, by
clearing the PSTATE.A bit. In this patch, we unmask the PSTATE.A bit
to open a window to catch guest-generated asynchronous aborts in all
EL1 -> EL2 switch paths. If such an asynchronous abort is caught in
the checking window, the hyp_error exception will be triggered and the
guest that generated the abort will be crashed.
In the current code, when the hypervisor receives an asynchronous abort
from a guest, the hypervisor panics and the host goes down. We
have to prevent such a security issue, so in this patch we crash
the guest when the hypervisor receives an asynchronous abort from
it.
Juergen Gross [Fri, 25 Nov 2016 13:32:44 +0000 (14:32 +0100)]
remove reference to xensource.com
xen/include/public/hvm/pvdrivers.h contains a reference to
xen-devel@lists.xensource.com. Replace it by the correct address
xen-devel@lists.xenproject.org
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Dario Faggioli [Fri, 25 Nov 2016 13:32:19 +0000 (14:32 +0100)]
blkif: kill some repetitions in protocol description
The whole block describing multiqueue support was repeated
two times.
There also was some repetition in the description of the
'discard-enable' property.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <Konrad.wilk@oracle.com>
Jan Beulich [Fri, 25 Nov 2016 13:30:58 +0000 (14:30 +0100)]
x86: re-add stack alignment check
Commit 279840d5ea ("x86/boot: install trap handlers much earlier on
boot"), perhaps not really intentionally, removed this check. Add it
back,
- preventing it from triggering before any output is set up,
- accompanying it with a (weaker, due to its open coding of what
get_stack_bottom() does) build time check.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com>
Andrew Cooper [Thu, 24 Nov 2016 15:36:13 +0000 (15:36 +0000)]
x86/vmx: Don't deliver #MC with an error code
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com>