Jonathan Davies [Thu, 23 Mar 2017 16:28:34 +0000 (16:28 +0000)]
oxenstored: log request and response during transaction replay
During a transaction replay, the replayed requests and the new responses are
logged in the same way as the original requests and the original responses.
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:28:19 +0000 (16:28 +0000)]
oxenstored: replay transaction upon conflict
The existing transaction merge algorithm keeps track of the least upper bound
(longest common prefix) of all the nodes which have been read and written, and
will re-combine two stores which have disjoint upper bounds. This works well for
small transactions but causes unnecessary conflicts for ones that span a large
subtree, such as the following ones used by the xapi toolstack:
* VM start: creates /vm/... /vss/... /local/domain/...
  The least upper bound of this transaction is / and so all
  these transactions conflict with everything.
* Device hotplug: creates /local/domain/0/... /local/domain/n/...
  The least upper bound of this transaction is /local/domain so
  all these transactions conflict with each other.
If the existing merge algorithm cannot merge and commit, we attempt
a /replay/ of the failed transaction against the new store.
When we replay the requests we check whether the response sent to the client is
the same as during the first attempt at the transaction. If the responses are
all the same then the transaction replay can be committed. If any differ then
the transaction replay must be aborted and the client must retry.
This algorithm uses the intuition that the transactions made by the toolstack
are designed to be for separate domains, and should fundamentally not conflict
in the sense that they don't read or write any shared keys. By replaying the
transaction on the server side we do what the client would have to do anyway,
only we can do it quickly without allowing any other requests to interfere.
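As a rough illustration of the replay-and-compare step (a C-style sketch;
oxenstored itself is OCaml, and every name below is hypothetical):

    /* Hypothetical sketch: replay each recorded (request, response) pair
     * against the freshly-merged store; commit only if the client would
     * have observed exactly the same responses. */
    static bool transaction_replayable(struct store *s, struct txn *t)
    {
        for ( unsigned int i = 0; i < t->nr_ops; i++ )
        {
            struct response r = process_request(s, &t->ops[i].req);

            if ( !responses_equal(&r, &t->ops[i].resp) )
                return false;   /* client saw stale data: abort and retry */
        }
        return true;            /* identical responses: safe to commit */
    }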
Performing 300 parallel simulated VM start and shutdowns without this code:
300 parallel starts and shutdowns: 268.92
Performing 300 parallel simulated VM start and shutdowns with this code:
300 parallel starts and shutdowns: 3.80
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Dave Scott <dave@recoil.org> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:28:08 +0000 (16:28 +0000)]
oxenstored: move functions that process simple operations
Separate the functions which process operations that can be done as part of a
transaction. Specifically, these operations are: read, write, rm, getperms,
setperms, getdomainpath, directory, mkdir.
Also split function_of_type into two functions: one for processing the simple
operations and one for processing the rest.
This prepares for transaction replay, by letting us invoke the functions
that process the simple operations as part of the processing of transaction_end.
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:27:58 +0000 (16:27 +0000)]
oxenstored: keep track of each transaction's operations
A list of (request, response) pairs from the operations performed within the
transaction will be useful to support transaction replay.
Since this consumes memory, the number of requests per transaction must not be
left unbounded. Hence a new quota for this is introduced. This quota, configured
via the configuration key 'quota-maxrequests', limits the size of transactions
initiated by domUs.
After the maximum number of requests has been exhausted, any further requests
will result in EQUOTA errors. The client may then choose to end the transaction;
a successful commit will result in the retention of only the prior requests.
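Conceptually the quota check amounts to something like this (an illustrative
C sketch; the daemon itself is OCaml and all names here are assumptions):

    /* Hypothetical sketch of the per-transaction request quota. */
    static int txn_record(struct txn *t, const struct request *req)
    {
        if ( t->nr_ops >= quota_maxrequests )   /* 'quota-maxrequests' */
            return -EQUOTA;     /* client may still commit what it has */

        t->ops[t->nr_ops++].req = *req;
        return 0;
    }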
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:27:50 +0000 (16:27 +0000)]
oxenstored: refactor request processing
Encapsulate the request in a record that is passed from do_input to
process_packet and input_handle_error.
This will be helpful when keeping track of the requests made as part of a
transaction.
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:27:39 +0000 (16:27 +0000)]
oxenstored: remove some unused parameters
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jonathan Davies [Thu, 23 Mar 2017 16:27:23 +0000 (16:27 +0000)]
oxenstored: refactor putting response on wire
Previously, the functions reply_{ack,data,data_or_ack} and input_handle_error
put the response on the wire by invoking Connection.send_{ack,reply,error}.
Instead, these functions now return a value indicating what needs to be put on
the wire, and that action is done by a send_response function called
afterwards.
This refactoring gives us a chance to store the value of the response, useful
for replaying transactions.
Reported-by: Juergen Gross <jgross@suse.com> Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jon Ludlam <jonathan.ludlam@citrix.com> Reviewed-by: Euan Harris <euan.harris@citrix.com> Acked-by: David Scott <dave@recoil.org>
Jan Beulich [Tue, 4 Apr 2017 13:02:33 +0000 (15:02 +0200)]
memory: properly check guest memory ranges in XENMEM_exchange handling
The use of guest_handle_okay() (as introduced by the XSA-29 fix)
is insufficient here; guest_handle_subrange_okay() needs to be used
instead.
Note that the uses are okay in
- XENMEM_add_to_physmap_batch handling due to the size field being only
  16 bits wide,
- livepatch_list() due to the limit of 1024 enforced on the
  number-of-entries input (leaving aside the fact that this can be
  called by a privileged domain only anyway),
- compat mode handling due to counts there being limited to 32 bits,
- everywhere else due to guest arrays being accessed sequentially from
  index zero.
This is CVE-2017-7228 / XSA-212.
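The distinction matters once processing resumes at an offset into the guest
array; a sketch along the lines of the fix (the exact context in memory.c
may differ):

    /* guest_handle_okay() validates the array only from index 0, so a
     * continuation offset needs the subrange variant instead: */
    if ( !guest_handle_subrange_okay(exch.in.extent_start,
                                     exch.nr_exchanged,
                                     exch.in.nr_extents - 1) )
        return -EFAULT;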
Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 938fd2586eb081bcbd694f4c1f09ae6a263b0d90
master date: 2017-04-04 14:47:46 +0200
Dario Faggioli [Fri, 31 Mar 2017 07:03:32 +0000 (09:03 +0200)]
xen: sched: don't call hooks of the wrong scheduler via VCPU2OP
Within context_saved(), we call the context_saved hook,
and we use VCPU2OP() to determine which scheduler it comes from.
VCPU2OP uses DOM2OP, which uses d->cpupool, which is
NULL when d is the idle domain. And in that case,
DOM2OP just returns ops, the scheduler of cpupool0.
Therefore, if:
- cpupool0's scheduler defines context_saved (like
Credit2 and RTDS do),
- we are not in cpupool0 (i.e., our scheduler is
not ops),
- we are context switching from idle,
we call VCPU2OP(idle_vcpu), which means
DOM2OP(idle->cpupool), which is ops.
Therefore, we both:
- check if context_saved is defined in the wrong
scheduler;
- if yes, call the wrong one.
When using Credit2 at boot, and also Credit2 in
the other cpupool, this is wrong but innocuous,
because it only involves the idle vcpus.
When using Credit2 at boot, and Credit1 in the
other cpupool, this is *totally* wrong, and
it's by chance it does not explode!
When using Credit2 and other schedulers I'm
developing, I hit the following assert (in
sched_credit2.c, on a CPU inside a cpupool that
does not use Credit2):
Jan Beulich [Fri, 31 Mar 2017 07:03:01 +0000 (09:03 +0200)]
x86/EFI: avoid Xen image when looking for module/kexec position
When booting straight from EFI, we don't further try to relocate Xen.
As a result, so far we also didn't avoid the area Xen uses when looking
for a location to put modules or the kexec area. Introduce a fake
module slot to deal with that without having to fiddle with a lot of
code.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: e22e1c47958a4778cd7baa3980f74e52f525ba28
master date: 2017-03-20 09:27:12 +0100
Roger Pau Monné [Fri, 31 Mar 2017 07:00:50 +0000 (09:00 +0200)]
build/clang: fix XSM dummy policy when using clang 4.0
There seems to be some weird bug in clang 4.0 that prevents xsm_pmu_op from
working as expected, and vpmu.o ends up with a reference to
__xsm_action_mismatch_detected which makes the build fail:
[...]
ld -melf_x86_64_fbsd -T xen.lds -N prelink.o \
xen/common/symbols-dummy.o -o xen/.xen-syms.0
prelink.o: In function `xsm_default_action':
xen/include/xsm/dummy.h:80: undefined reference to `__xsm_action_mismatch_detected'
xen/xen/include/xsm/dummy.h:80: relocation truncated to fit: R_X86_64_PC32 against undefined symbol `__xsm_action_mismatch_detected'
ld: xen/xen/.xen-syms.0: hidden symbol `__xsm_action_mismatch_detected' isn't defined
The current patch is the only way I've found to fix this so far, by simply
moving the XSM_PRIV check into the default case in xsm_pmu_op. This also fixes
the behavior of do_xenpmu_op, which will now return -EINVAL for unknown
XENPMU_* operations, instead of -EPERM when called by a privileged domain.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: 9e4d116faff4545a7f21c2b01008e94d68e6db58
master date: 2017-03-14 18:19:29 +0100
Roger Pau Monné [Fri, 31 Mar 2017 07:00:13 +0000 (09:00 +0200)]
x86: drop unneeded __packed attributes
There were a couple of unneeded packed attributes in several x86-specific
structures that are obviously aligned. The only non-trivial one is
vmcb_struct, which has been checked to have the same layout with and without
the packed attribute using pahole. In that case add a build-time size check to
be on the safe side.
No functional change is expected as a result of this commit.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
master commit: 4036e7c592905c2292cdeba8269e969959427237
master date: 2017-03-07 17:11:06 +0100
Dario Faggioli [Tue, 14 Mar 2017 13:05:41 +0000 (14:05 +0100)]
xen: credit2: don't miss accounting while doing a credit reset.
A credit reset basically means going through all the
vCPUs of a runqueue and altering their credits, as a
consequence of a 'scheduling epoch' having come to an
end.
Blocked or runnable vCPUs are fine, all the credits
they've spent running so far have been accounted to
them when they were scheduled out.
But if a vCPU is running on a pCPU, when a reset event
occurs (on another pCPU), that does not get properly
accounted. Let's therefore begin to do so, for better
accuracy and fairness.
In fact, a trace taken after this patch shows how d1v5 actually
executed for ~9.796 ms,
on pCPU 10, when reset_credit() is executed, on pCPU
12, because of d1v6's credits going below 0.
Without this patch, this 9.796ms are not accounted
to anyone. With this patch, d1v5 is charged for that,
and its credits drop down from 9796548 to 201805.
And this is important, as it means that it will
begin the new epoch with 10201805 credits, instead
of 10500000 (which it would have had before this patch).
Basically, we were forgetting one round of accounting
in epoch x, for the vCPUs that are running at the time
the epoch ends. And this meant favouring a little bit
these same vCPUs, in epoch x+1, providing them with
the chance to execute longer than their fair share.
Dario Faggioli [Tue, 14 Mar 2017 13:05:14 +0000 (14:05 +0100)]
xen: credit2: always mark a tickled pCPU as... tickled!
In fact, whether or not a pCPU has been tickled, and is
therefore about to re-schedule, is something we look at
and base decisions on in various places.
So, let's make sure that we do that basing on accurate
information.
While there, also tweak smt_idle_mask_clear() a little
(used for implementing SMT support), so that it only alters
the relevant cpumask when there is an actual need to.
(This is only for reduced overhead; behavior remains the
same.)
Andrew Cooper [Tue, 14 Mar 2017 13:04:37 +0000 (14:04 +0100)]
x86/vmx: Don't leak host syscall MSR state into HVM guests
hvm_hw_cpu->msr_flags is in fact the VMX dirty bitmap of MSRs needing to be
restored when switching into guest context. It should never have been part of
the migration state to start with, and Xen must not make any decisions based
on the value seen during restore.
Identify it as obsolete in the header files, consistently save it as zero and
ignore it on restore.
The MSRs must be considered dirty during VMCS creation to cause the proper
defaults of 0 to be visible to the guest.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: 2f1add6e1c8789d979daaafa3d80ddc1bc375783
master date: 2017-02-21 11:06:39 +0000
xen/arm: fix affected memory range by dcache clean functions
clean_dcache_va_range and clean_and_invalidate_dcache_va_range don't
calculate the range correctly when "end" is not cacheline aligned. As a
result, the last cacheline can be missed. Fix the issue by aligning the
start address to the cacheline size.
In addition, make the code simpler and faster in
invalidate_dcache_va_range, by removing the modulo operation and using
bitmasks instead. Also remove the size adjustments in
invalidate_dcache_va_range, because the size variable is not used later
on.
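For illustration, the alignment fix amounts to roughly the following (a
simplified sketch, not the exact Xen code):

    /* Aligning the start down to a cacheline boundary guarantees the
     * fixed-stride loop also covers the line holding the final byte. */
    static inline void clean_dcache_va_range(const void *p, unsigned long size)
    {
        const void *end = p + size;

        p = (void *)((unsigned long)p & ~(cacheline_bytes - 1));
        for ( ; p < end; p += cacheline_bytes )
            asm volatile ("dc cvac, %0" :: "r" (p) : "memory");
        asm volatile ("dsb sy" ::: "memory");
    }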
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Introduce a new Xen command line parameter called "vwfi", which stands for
virtual wfi. The default is "trap": Xen traps guest wfi and wfe
instructions. In the case of wfi, Xen calls vcpu_block on the guest
vcpu; in the case of guest wfe, Xen calls vcpu_yield on the guest vcpu.
The behavior can be changed by setting vwfi to "native": in that case
Xen traps neither wfi nor wfe, running them in guest context.
The result is a strong reduction in irq latency (from 5000ns to 2000ns,
measured using https://github.com/edgarigl/tbm, the physical timer, and
1 pcpu dedicated to 1 vcpu). The downside is that the scheduler thinks
that the guest is busy when it is actually sleeping, leading to suboptimal
scheduling decisions.
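For example, the low-latency behaviour is selected by appending the
parameter to the hypervisor command line:

    vwfi=native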
Jan Beulich [Mon, 20 Feb 2017 15:06:20 +0000 (16:06 +0100)]
VMX: fix VMCS race on context-switch paths
When __context_switch() is being bypassed during original context
switch handling, the vCPU "owning" the VMCS partially loses control of
it: It will appear non-running to remote CPUs, and hence their attempt
to pause the owning vCPU will have no effect on it (as it already
looks to be paused). At the same time the "owning" CPU will re-enable
interrupts eventually (at the latest when entering the idle loop) and
hence becomes subject to IPIs from other CPUs requesting access to the
VMCS. As a result, when __context_switch() finally gets run, the CPU
may no longer have the VMCS loaded, and hence any accesses to it would
fail. Hence we may need to re-load the VMCS in vmx_ctxt_switch_from().
For consistency use the new function also in vmx_do_resume(), to
avoid leaving an open-coded incarnation of it around.
Reported-by: Kevin Mayer <Kevin.Mayer@gdata.de> Reported-by: Anshul Makkar <anshul.makkar@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
master commit: 2f4d2198a9b3ba94c959330b5c94fe95917c364c
master date: 2017-02-17 15:49:56 +0100
George Dunlap [Mon, 20 Feb 2017 15:05:52 +0000 (16:05 +0100)]
xen/p2m: Fix p2m_flush_table for non-nested cases
Commit 71bb7304e7a7a35ea6df4b0cedebc35028e4c159 added flushing of
nested p2m tables whenever the host p2m table changed. Unfortunately
in the process, it added a filter to the p2m_flush_table() function so
that the p2m would only be flushed if it was being used as a nested
p2m. This meant that the p2m was not being flushed at all for altp2m
callers.
Only check np2m_base if the p2m_class indicates a nested p2m.
NB that this is not a security issue: The only time this codepath is
called is in cases where either nestedp2m or altp2m is enabled, and
neither of them are in security support.
Reported-by: Matt Leinhos <matt@starlab.io> Signed-off-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org> Tested-by: Tamas K Lengyel <tamas@tklengyel.com>
master commit: 6192e6378e094094906950120470a621d5b2977c
master date: 2017-02-15 17:15:56 +0000
David Woodhouse [Mon, 20 Feb 2017 15:05:29 +0000 (16:05 +0100)]
x86/ept: allow write-combining on !mfn_valid() MMIO mappings again
For some MMIO regions, such as those high above RAM, mfn_valid() will
return false.
Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
consistent cachability of MMIO mappings"), guests have no longer been
able to use PAT to obtain write-combining on such regions because the
'ignore PAT' bit is set in EPT.
We probably want to err on the side of caution and preserve that
behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
mappings. That necessitates a slight refactoring to check mfn_valid()
later, and let the MMIO case get through to the right code path.
Since we're not bailing out for !mfn_valid() immediately, the range
checks need to be adjusted to cope, simply by masking in the low bits
to account for 'order' instead of adding, to avoid overflow when the mfn
is INVALID_MFN (which happens on unmap, since we carefully call this
function to fill in the EMT even though the PTE won't be valid).
The range checks are also slightly refactored to put only one of them in
the fast path in the common case. If it doesn't overlap, then it
*definitely* isn't contained, so we don't need both checks. And if it
overlaps and is only one page, then it definitely *is* contained.
Finally, add a comment clarifying how that 'return -1' works: it isn't
returning an error and causing the mapping to fail; it relies on
resolve_misconfig() being able to split the mapping later. So it's
*only* sane to do it where order>0 and the 'problem' will be solved by
splitting the large page. Not for blindly returning 'error', which I was
tempted to do in my first attempt.
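A sketch of the mask-based checks described above (names are illustrative;
the real code lives in the EPT EMT lookup):

    /* Treat the order-sized region as [mfn & ~mask, mfn | mask]; masking
     * (unlike adding 1UL << order) cannot overflow for INVALID_MFN. */
    unsigned long mask = (1UL << order) - 1;

    if ( ((mfn | mask) < first_mfn) || ((mfn & ~mask) > last_mfn) )
        /* no overlap => definitely not contained (the fast path) */;
    else if ( order == 0 )
        /* overlaps and is a single page => definitely contained */;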
Signed-off-by: David Woodhouse <dwmw@amazon.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
master commit: 30921dc2df3665ca1b2593595aa6725ff013d386
master date: 2017-02-07 14:30:01 +0100
Andrew Cooper [Mon, 20 Feb 2017 15:04:52 +0000 (16:04 +0100)]
x86/VT-x: Dump VMCS on VMLAUNCH/VMRESUME failure
If a VMLAUNCH/VMRESUME fails due to invalid control or host state, dump the
VMCS before crashing the domain.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d0fd9ae54491328b10dee4003656c14b3bf3d3e9
master date: 2016-07-04 10:51:48 +0100
There is a possible scenario in which (d)->need_iommu remains unset
during guest domain execution, for example when no devices
were assigned to it. Taking into account that the teardown callback
is not called when (d)->need_iommu is unset, we might have unreleased
resources after destroying the domain.
So, always call the teardown callback to roll back actions
that were performed in the init callback.
This is XSA-207.
Signed-off-by: Oleksandr Tyshchenko <olekstysh@gmail.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Jan Beulich <jbeulich@suse.com> Tested-by: Julien Grall <julien.grall@arm.com>
George Dunlap [Thu, 9 Feb 2017 09:37:08 +0000 (10:37 +0100)]
x86/emulate: don't assume that addr_size == 32 implies protected mode
Callers of x86_emulate() generally define addr_size based on the code
segment. In vm86 mode, the code segment is set by the hardware to be
16-bits; but it is entirely possible to enable protected mode, set the
CS to 32-bits, and then disable protected mode. (This is commonly
called "unreal mode".)
But the instruction decoder only checks for protected mode when
addr_size == 16. So in unreal mode, hardware will throw a #UD for VEX
prefixes, but our instruction decoder will decode them, triggering an
ASSERT() further on in _get_fpu(). (With debug=n the emulator will
incorrectly emulate the instruction rather than throwing a #UD, but
this is only a bug, not a crash, so it's not a security issue.)
Teach the instruction decoder to check that we're in protected mode,
even if addr_size is 32.
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Split real mode and VM86 mode handling, as VM86 mode is strictly 16-bit
at all times. Re-base.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 05118b1596ffe4559549edbb28bd0124a7316123
master date: 2017-01-25 15:09:55 +0100
Joao Martins [Thu, 9 Feb 2017 09:35:58 +0000 (10:35 +0100)]
x86/hvm: do not set msr_tsc_adjust on hvm_set_guest_tsc_fixed
Commit 6e03363 ("x86: Implement TSC adjust feature for HVM guest")
implemented the TSC_ADJUST MSR for HVM guests. However, while booting
an HVM guest, the boot CPU would have a value set to delta_tsc -
guest tsc while secondary CPUs would have 0. For example one can
observe:
$ xen-hvmctx 17 | grep tsc_adjust
TSC_ADJUST: tsc_adjust ff9377dfef47fe66
TSC_ADJUST: tsc_adjust 0
TSC_ADJUST: tsc_adjust 0
TSC_ADJUST: tsc_adjust 0
Upcoming Linux 4.10 now validates whether this MSR is correct and
adjusts it accordingly under the following conditions: values < 0
(our case for CPU 0), != 0, or > 7FFFFFFF. Under these conditions it
will force the value to 0, likewise for CPUs whose values don't all
match. If this MSR is not correct we would see messages such as:
And on HVM guests supporting TSC_ADJUST (requiring at least Intel
Haswell) the guest won't boot.
Our current vCPU 0 values are incorrect; the Intel SDM, in
section "Time-Stamp Counter Adjustment", states that "On RESET, the value
of the IA32_TSC_ADJUST MSR is 0.", hence we should set it to 0 and be
consistent across multiple vCPUs. Perhaps this MSR should only be
changed by the guest, which already happens through the
hvm_set_guest_tsc_adjust(..) routines (see below). After this patch
guests running Linux 4.10 will see a valid IA32_TSC_ADJUST MSR of value
0 for all CPUs and are able to boot.
On the same section of the spec ("Time-Stamp Counter Adjustment") it is
also stated:
"If an execution of WRMSR to the IA32_TIME_STAMP_COUNTER MSR
adds (or subtracts) value X from the TSC, the logical processor also
adds (or subtracts) value X from the IA32_TSC_ADJUST MSR.
Unlike the TSC, the value of the IA32_TSC_ADJUST MSR changes only in
response to WRMSR (either to the MSR itself, or to the
IA32_TIME_STAMP_COUNTER MSR). Its value does not otherwise change as
time elapses. Software seeking to adjust the TSC can do so by using
WRMSR to write the same value to the IA32_TSC_ADJUST MSR on each logical
processor."
This suggests these MSRs' values should only be changed by the guest, i.e.
through write-intercepted MSRs. We keep the IA32_TSC MSR logic such that writes
accommodate adjustments to TSC_ADJUST, hence no functional change in
msr_tsc_adjust for the IA32_TSC MSR. Though, we do that in a separate routine,
namely hvm_set_guest_tsc_msr, instead of through hvm_set_guest_tsc(...).
Jan Beulich [Thu, 9 Feb 2017 09:35:24 +0000 (10:35 +0100)]
x86: segment attribute handling adjustments
Null selector loads into SS (possible in 64-bit mode only, and only in
rings other than ring 3) must not alter SS.DPL. (This was found to be
an issue on KVM, and fixed in Linux commit 33ab91103b.)
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 366ff5f1b3252f9069d5aedb2ffc2567bb0a37c9
master date: 2017-01-20 14:39:12 +0100
Jan Beulich [Thu, 9 Feb 2017 09:35:02 +0000 (10:35 +0100)]
x86emul: LOCK check adjustments
BT, being encoded as DstBitBase just like BT{C,R,S}, nevertheless does
not write its (register or memory) operand and hence also doesn't allow
a LOCK prefix to be used.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: f2d4f4ba80de8a03a1b0f300d271715a88a8433d
master date: 2017-01-20 14:37:33 +0100
Jan Beulich [Thu, 9 Feb 2017 09:34:28 +0000 (10:34 +0100)]
x86emul: VEX.B is ignored in compatibility mode
While VEX.R and VEX.X are guaranteed to be 1 in compatibility mode
(and hence a respective mode_64bit() check can be dropped), VEX.B can
be encoded as zero, but would be ignored by the processor. Since we
emulate instructions in 64-bit mode (except possibly in the test
harness), we need to force the bit to 1 in order to not act on the
wrong {X,Y,Z}MM register (which has no bad effect on 32-bit test
harness builds, as there the bit would again be ignored by the
hardware, and would by default be expected to be 1 anyway).
We must not, however, fiddle with the high bit of VEX.VVVV in the
decode phase, as that would undermine the checking of instructions
requiring the field to be all ones independent of mode. This is
being enforced in copy_REX_VEX() instead.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86emul: correct VEX/XOP/EVEX operand size handling for 16-bit code
Operand size defaults to 32 bits in that case, but would not have been
set that way in the absence of an operand size override.
Reported-by: Wei Liu <wei.liu2@citrix.com> (by AFL fuzzing) Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 89c76ee7f60777b81c8fd0475a6af7c84e72a791
master date: 2017-01-17 10:32:25 +0100
master commit: beb82042447c5d6e7073d816d6afc25c5a423cde
master date: 2017-01-25 15:08:59 +0100
I backported 3658f7a0bdd8fcda217559927e25263a07398c27
libxl: fix libxl_set_memory_target
but this was not necessary, because the commit it fixes is not in
Xen 4.6.
So the backport introduces a duplicate call to xc_domain_getinfolist
and also breaks the build with
libxl.c:4908:5: error: ‘r’ undeclared (first use in this function)
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Wei Liu [Thu, 29 Dec 2016 16:36:31 +0000 (16:36 +0000)]
libxl: fix libxl_set_memory_target
Commit 26dbc93a ("libxl: Remove pointless hypercall from
libxl_set_memory_target") removed the call to xc_domain_getinfolist, but
it failed to notice that "info" was actually needed later.
Put that back. While at it, make the code conform to coding style
requirements.
Roger Pau Monne [Mon, 19 Dec 2016 15:02:03 +0000 (15:02 +0000)]
init/FreeBSD: fix xencommons so it can only be launched by Dom0
At the moment the execution of xencommons is gated on the presence of the
privcmd device, but that's not correct, since privcmd is available to all Xen
domains (privileged or unprivileged). Instead of using privcmd use the
xenstored device, which will only be available to the domain that's in charge
of running xenstored, and thus xencommons.
Roger Pau Monne [Mon, 19 Dec 2016 15:02:01 +0000 (15:02 +0000)]
init/FreeBSD: set correct PATH for xl devd
FreeBSD init scripts don't have /usr/local/{bin,sbin} in their PATH, which
prevents `xl devd` from working properly since hotplug scripts require the set
of xenstore cli tools to be in PATH.
While there also fix the usage of --pidfile, which according to the xl help
doesn't use "=", and add braces around XLDEVD_PIDFILE.
Julien Grall [Wed, 18 Jan 2017 18:54:08 +0000 (18:54 +0000)]
xen/arm: gic-v3: Make sure read from ICC_IAR1_EL1 is visible on the redistributor
"The effects of reading ICC_IAR0_EL1 and ICC_IAR1_EL1 on the state of a
returned INTID are not guaranteed to be visible until after the execution
of a DSB".
Because the GIC is an external component, a dsb sy is required.
Without it the sysreg read may not have been made visible on the
redistributor.
Andrew Cooper [Wed, 18 Jan 2017 09:25:49 +0000 (10:25 +0100)]
x86/emul: Correct the return value handling of VMFUNC
The bracketing of x86_emulate() calling the ops->vmfunc() hook is wrong with
respect to the assignment to rc, which can trip the new assertions in
x86_emulate_wrapper().
The hvmemul_vmfunc() hook should only raise #UD if X86EMUL_EXCEPTION is
returned. This is only a latent bug at the moment.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 3ab1876504d409689824e161a8b04e57e1e5dd46
master date: 2016-12-22 13:32:46 +0000
Jan Beulich [Wed, 18 Jan 2017 09:24:22 +0000 (10:24 +0100)]
VT-d: correct dma_msi_set_affinity()
Commit 83cd2038fe ("VT-d: use msi_compose_msg()) together with 15aa6c6748 ("amd iommu: use base platform MSI implementation"),
introducing the use of a per-CPU scratch CPU mask, went too far:
dma_msi_set_affinity() may, at least in theory, be called in
interrupt context, and hence the use of that scratch variable is not
correct.
Since the function overwrites the destination information anyway,
allow msi_compose_msg() to be called with a NULL CPU mask, avoiding
the use of that scratch variable.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 7f885a1f49a75c770360b030666a5c1545156e5c
master date: 2016-12-16 14:33:43 +0100
Luwei Kang [Wed, 18 Jan 2017 09:23:33 +0000 (10:23 +0100)]
x86/VPMU: clear the overflow status of which counter happened to overflow
Just set the corresponding bits of counters which happened to overflow,
rather than setting all the available bits of IA32_PERF_GLOBAL_OVF_CTRL
when a PMU interrupt happens.
Signed-off-by: Luwei Kang <luwei.kang@intel.com> Acked-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 7a0c70482580234868fcc53b8d72e31966dc7c52
master date: 2016-12-13 14:21:26 +0100
Andrew Cooper [Wed, 18 Jan 2017 09:21:34 +0000 (10:21 +0100)]
x86/vmx: Don't deliver #MC with an error code
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 892d191df60806ea63f9d3c4fc37615bee028812
master date: 2016-11-25 10:48:20 +0000
Jan Beulich [Wed, 18 Jan 2017 09:19:03 +0000 (10:19 +0100)]
x86/EFI: meet further spec requirements for runtime calls
So far we didn't guarantee 16-byte alignment of the stack: While (so
far) we don't tell the compiler to use smaller alignment, we also don't
guarantee 16-byte alignment when establishing stack pointers for new
vCPU-s. Runtime service functions using SSE instructions may end with
#GP(0) without that.
Note that making use of -mpreferred-stack-boundary=3, as mentioned in
the comment, wouldn't help to reduce the needed alignment: The compiler
would then be free to align the stack of the function with the aligned
object, but would be permitted to place an odd number of 8-byte objects
there, resulting in the callee still running on an unaligned stack.
(The only working alternative to the approach chosen here would be to
use -mincoming-stack-boundary=3, but that would affect all functions in
runtime.c, not just the ones actually making runtime services calls.
And it would still require the manual alignment logic here to be used
with gcc 5.2 and earlier - not permitting that command line option -,
just that then the alignment amount would become conditional.)
Hence enforce the needed alignment by making efi_rs_enter() return a
suitably aligned structure, which the caller then necessarily has to
store in a suitably aligned local variable, the address of which then
gets passed to efi_rs_leave(). Also (to limit exposure) move the
function declarations to where they belong: They're local to runtime.c,
and shared only with compat.c (by the latter including the former).
Furthermore we should avoid #MF being raised on the FLDCW we do.
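In outline, the alignment trick looks like this (a simplified sketch of the
interface described above):

    /* Returning a 16-byte aligned object forces every caller to hold a
     * suitably aligned local, realigning the stack around the call. */
    struct efi_rs_state {
        uint64_t __aligned(16) cr3;
        /* ... */
    };

    struct efi_rs_state efi_rs_enter(void);
    void efi_rs_leave(struct efi_rs_state *);

    /* caller: */
    struct efi_rs_state state = efi_rs_enter();
    /* ... make the runtime services call ... */
    efi_rs_leave(&state);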
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: f6b7fedc896250028cb81dafe9a3f6773aaf1da2
master date: 2016-11-22 13:52:53 +0100
Andrew Cooper [Wed, 18 Jan 2017 09:18:10 +0000 (10:18 +0100)]
x86/svm: Fix svm_nextrip_insn_length() when crossing the virtual boundary to 0
vmcb->nextrip can legitimately be less than vmcb->rip when execution wraps
back around to 0. Instead, complain if the reported length is greater than 15
and use x86_decode_insn() as a fallback.
While making changes here, fix two whitespace issues with the case labels.
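In outline (a sketch; MAX_INST_LEN is the architectural 15-byte limit):

    /* Unsigned subtraction wraps modulo 2^64, so nextrip < rip at the
     * 0 boundary still yields a small, correct length.  Only lengths
     * above 15 bytes indicate something is genuinely wrong. */
    unsigned long len = vmcb->nextrip - vmcb->rip;

    if ( len > MAX_INST_LEN )
    {
        gdprintk(XENLOG_WARNING, "Bad instruction length %lu\n", len);
        len = 0;    /* fall back to x86_decode_insn() */
    }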
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
x86/hvm: Fix non-debug build following c/s 0745f665a5
Andrew Cooper [Wed, 18 Jan 2017 09:17:43 +0000 (10:17 +0100)]
x86/traps: Don't call hvm_hypervisor_cpuid_leaf() for PV guests
Luckily, hvm_hypervisor_cpuid_leaf() and vmx_hypervisor_cpuid_leaf() are safe
to execute in the context of a PV guest, but HVM-specific feature flags
shouldn't be visible to PV guests.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 0f43883193da76fc928e836e319c3172f394e0f3
master date: 2016-11-16 10:33:18 +0000
Andrew Cooper [Wed, 18 Jan 2017 09:17:21 +0000 (10:17 +0100)]
x86/vmx: Correct the long mode check in vmx_cpuid_intercept()
%cs.L may be set in a legacy mode segment, or clear in a compatibility mode
segment; it is not the correct way to check for long mode being active.
Both of these situations result in incorrect visibility of the SYSCALL feature
in CPUID, and by extension, incorrect behaviour in hvm_efer_valid().
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: fcb618c025f9251d7e22138f6528595037252c21
master date: 2016-11-16 10:32:54 +0000
Andrew Cooper [Wed, 18 Jan 2017 09:16:52 +0000 (10:16 +0100)]
x86/svm: Don't clobber eax and edx if an RDMSR intercept fails
The original code has a bug; eax and edx get unconditionally updated even when
hvm_msr_read_intercept() doesn't return X86EMUL_OKAY.
It is only by blind luck (vmce_rdmsr() eagerly initialising its msr_content
pointer) that this isn't an information leak into guests.
While fixing this bug, reduce the scope of msr_content and initialise it to 0.
This makes it obvious that a stack leak won't occur, even if there were to be
a buggy codepath in hvm_msr_read_intercept().
Also make some non-functional improvements. Make the insn_len calculation
common, and reduce the quantity of explicit casting by making better use of
the existing register names.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
master commit: a0b4e3c0681a11b765fe218fba0ba4ebb9fa56c5
master date: 2016-11-10 15:34:42 +0000
Andrew Cooper [Wed, 18 Jan 2017 09:15:22 +0000 (10:15 +0100)]
x86/emul: Correct the decoding of SReg3 operands
REX.R is ignored when considering segment register operands, and needs masking
out first.
While fixing this, reorder the user segments in x86_segment to match SReg3
encoding. This avoids needing a translation table between hardware ordering
and Xen's ordering.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
VMX: fix realmode emulation SReg handling
Commit 0888d36bb2 ("x86/emul: Correct the decoding of SReg3 operands")
overlooked three places where x86_seg_cs was assumed to be zero.
Jan Beulich [Wed, 21 Dec 2016 16:45:05 +0000 (17:45 +0100)]
x86: force EFLAGS.IF on when exiting to PV guests
Guest kernels modifying instructions in the process of being emulated
for another of their vCPU-s may effect EFLAGS.IF to be cleared upon
next exiting to guest context, by converting the being emulated
instruction to CLI (at the right point in time). Prevent any such bad
effects by always forcing EFLAGS.IF on. And to cover hypothetical other
similar issues, also force EFLAGS.{IOPL,NT,VM} to zero.
This is CVE-2016-10024 / XSA-202.
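In C terms the enforcement is tiny (the actual change is on the assembly
exit-to-guest path):

    /* Never hand a PV guest a disabled-interrupt state, nor IOPL/NT/VM. */
    regs->eflags |= X86_EFLAGS_IF;
    regs->eflags &= ~(X86_EFLAGS_IOPL | X86_EFLAGS_NT | X86_EFLAGS_VM);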
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 0e47f92b072548800223f9a21ea051a017173915
master date: 2016-12-21 16:46:13 +0100
Jan Beulich [Tue, 13 Dec 2016 13:27:24 +0000 (14:27 +0100)]
x86emul: CMPXCHG8B ignores operand size prefix
Otherwise besides mis-handling the instruction, the comparison failure
case would result in uninitialized stack data being handed back to the
guest in rDX:rAX (32 bits leaked for 32-bit guests, 96 bits for 64-bit
ones).
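For reference, the architectural semantics the emulator must follow, in C
terms (read_u64()/write_u64() are illustrative helpers, not real APIs):

    /* CMPXCHG8B m64 is always a 64-bit operation, regardless of a 0x66
     * prefix; on failure the guest must see the memory value in EDX:EAX. */
    uint64_t old = read_u64(mem);

    if ( old == (((uint64_t)regs->edx << 32) | regs->eax) )
        write_u64(mem, ((uint64_t)regs->ecx << 32) | regs->ebx);   /* ZF=1 */
    else
    {
        regs->edx = old >> 32;          /* never uninitialized stack data */
        regs->eax = (uint32_t)old;      /* ZF=0 */
    }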
arm64: fix incorrect memory region size in TCR_EL2
The maximum and minimum values for TxSZ depend on the level of
translation as per the AArch64 Virtual Memory System Architecture.
According to ARM specification DDI0487A_h (sec D4.2.2, page 1752),
the minimum TxSZ value is 16. If TxSZ is programmed to a value
smaller than 16 then the behaviour is IMPLEMENTATION DEFINED.
This patch sets T0SZ to (64-48) bits, instead of zero, since Xen uses
all 4 levels to cover the 48-bit (256TB) virtual address space.
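Illustratively (macro names are assumptions):

    /* T0SZ occupies TCR_EL2[5:0]; T0SZ = 64 - VA bits. */
    #define TCR_T0SZ(x)    ((x) << 0)
    #define TCR_VA_48BIT   TCR_T0SZ(64 - 48)   /* = 16, the minimum */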
Wei Chen [Tue, 29 Nov 2016 15:12:39 +0000 (16:12 +0100)]
arm32: handle async aborts delivered while at HYP
If a guest generates an asynchronous abort and then traps into HYP
(by HVC or IRQ) before the abort has been delivered, the hypervisor
cannot catch it, because the PSTATE.A bit is masked all the time
in the hypervisor. So this asynchronous abort may slip to the next
running guest with the PSTATE.A bit unmasked.
In order to avoid this, it is necessary to take the abort at HYP, by
clearing the PSTATE.A bit. In this patch, we unmask the PSTATE.A bit
to open a window to catch guest-generated asynchronous aborts in all
Guest -> HYP switch paths. If we catch such an asynchronous abort in
the checking window, the HYP data abort exception will be triggered and
the guest that generated the abort will be crashed.
Wei Chen [Tue, 29 Nov 2016 15:12:05 +0000 (16:12 +0100)]
arm64: handle async aborts delivered while at EL2
If EL1 generates an asynchronous abort and then traps into EL2
(by HVC or IRQ) before the abort has been delivered, the hypervisor
cannot catch it, because the PSTATE.A bit is masked all the time
in the hypervisor. So this asynchronous abort may slip to the next
running guest with the PSTATE.A bit unmasked.
In order to avoid this, it is necessary to take the abort at EL2, by
clearing the PSTATE.A bit. In this patch, we unmask the PSTATE.A bit
to open a window to catch guest-generated asynchronous aborts in all
EL1 -> EL2 switch paths. If we catch such an asynchronous abort in
the checking window, the hyp_error exception will be triggered and the
guest that generated the abort will be crashed.
In the current code, when the hypervisor receives an asynchronous abort
from a guest, the hypervisor panics and the host goes down.
We have to prevent such a security issue, so in this patch we crash
the guest when the hypervisor receives an asynchronous abort from
the guest.
Ian Jackson [Tue, 22 Nov 2016 13:23:54 +0000 (14:23 +0100)]
pygrub: Properly quote results, when returning them to the caller:
* When the caller wants sexpr output, use `repr()'
  This is what Xend expects.
  The returned S-expressions are now escaped and quoted by Python,
  generally using '...'. Previously kernel and ramdisk were unquoted
  and args was quoted with "..." but without proper escaping. This
  change may break toolstacks which do not properly dequote the
  returned S-expressions.
* When the caller wants "simple" output, crash if the delimiter is
  contained in the returned value.
  With --output-format=simple it does not seem like this could ever
  happen, because the bootloader config parsers all take line-based
  input from the various bootloader config files.
  With --output-format=simple0, this can happen if the bootloader
  config file contains nul bytes.
This is CVE-2016-9379 and CVE-2016-9380 / XSA-198.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Tested-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 27e14d346ed6ff1c3a3cfc479507e62d133e92a9
master date: 2016-11-22 13:52:09 +0100
Andrew Cooper [Tue, 22 Nov 2016 13:23:29 +0000 (14:23 +0100)]
x86/svm: fix injection of software interrupts
The non-NextRip logic in c/s 36ebf14eb "x86/emulate: support for emulating
software event injection" was based on an older version of the AMD software
manual. The manual was later corrected, following findings from that series.
I took the original wording of "not supported without NextRIP" to mean that
X86_EVENTTYPE_SW_INTERRUPT was not eligible for use. It turns out that this
is not the case, and the new wording is clearer on the matter.
Despite testing the original patch series on non-NRip hardware, the
swint-emulation XTF test case focuses on the debug vectors; it never ended up
executing an `int $n` instruction for a vector which wasn't also an exception.
During a vmentry, the use of X86_EVENTTYPE_HW_EXCEPTION comes with a vector
check to ensure that it is only used with exception vectors. Xen's use of
X86_EVENTTYPE_HW_EXCEPTION for `int $n` injection has always been buggy on AMD
hardware.
Fix this by always using X86_EVENTTYPE_SW_INTERRUPT.
Print and decode the eventinj information in svm_vmcb_dump(), as it has
several invalid combinations which cause vmentry failures.
This is CVE-2016-9378 / part of XSA-196.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 920edccd41db6cb0145545afa1850edf5e7d098e
master date: 2016-11-22 13:51:16 +0100
Andrew Cooper [Tue, 22 Nov 2016 13:23:05 +0000 (14:23 +0100)]
x86/emul: correct the IDT entry calculation in inject_swint()
The logic, as introduced in c/s 36ebf14ebe "x86/emulate: support for emulating
software event injection" is buggy. The size of an IDT entry depends on long
mode being active, not the width of the code segment currently in use.
In particular, this means that a compatibility code segment which hits
emulation for software event injection will end up using an incorrect offset
in the IDT for DPL/Presence checking. In practice, this only occurs on old
AMD hardware lacking NRip support; all newer AMD hardware, and all Intel
hardware bypass this path in the emulator.
While here, fix a minor issue with reading the IDT entry. The return value
from ops->read() wasn't checked, but in reality the only failure case is if a
pagefault occurs. This is not a realistic problem as the kernel will almost
certainly crash with a double fault if this setup actually occurred.
This is CVE-2016-9377 / part of XSA-196.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 255e8fe95f22ded5186fd75244ffcfb9d5dbc855
master date: 2016-11-22 13:50:49 +0100
Jan Beulich [Tue, 22 Nov 2016 13:22:37 +0000 (14:22 +0100)]
x86emul: fix huge bit offset handling
We must never chop off the high 32 bits.
This is CVE-2016-9383 / XSA-195.
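The problem, in outline (based on the emulator's DstBitBase handling;
simplified to the non-negative case):

    /* op_bytes is an unsigned int, so ~(op_bytes - 1) is computed in 32
     * bits and would chop the high half off a 64-bit displacement;
     * the 1L forces the mask to be long-sized: */
    ea.mem.off += (src.val >> 3) & ~(op_bytes - 1L);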
Reported-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 1c6c2d60d205f71ede0fbbd9047e459112f576db
master date: 2016-11-22 13:49:06 +0100
Jan Beulich [Tue, 22 Nov 2016 13:22:09 +0000 (14:22 +0100)]
x86/PV: writes of %fs and %gs base MSRs require canonical addresses
Commit c42494acb2 ("x86: fix FS/GS base handling when using the
fsgsbase feature") replaced the use of wrmsr_safe() on these paths
without recognizing that wr{f,g}sbase() use just wrmsrl() and that the
WR{F,G}SBASE instructions also raise #GP for non-canonical input.
Similarly arch_set_info_guest() needs to prevent non-canonical
addresses from getting stored into state later to be loaded by context
switch code. For consistency also check stack pointers and LDT base.
DR0..3, otoh, already get properly checked in set_debugreg() (albeit
we discard the error there).
The SHADOW_GS_BASE check isn't strictly necessary, but I think we
better avoid trying the WRMSR if we know it's going to fail.
This is CVE-2016-9385 / XSA-193.
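A sketch of the kind of check added (Xen has is_canonical_address(); the
actual call sites include the MSR write path and arch_set_info_guest()):

    case MSR_FS_BASE:
        if ( is_pv_32bit_domain(currd) ||
             !is_canonical_address(msr_content) )
            goto fail;
        wrfsbase(msr_content);
        break;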
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: f3fa3abf3e61fb1f25ce721e14ac324dda67311f
master date: 2016-11-22 13:46:28 +0100
Jan Beulich [Tue, 22 Nov 2016 13:21:34 +0000 (14:21 +0100)]
x86/HVM: don't load LDTR with VM86 mode attrs during task switch
Just like TR, LDTR is purely a protected mode facility and hence needs
to be loaded accordingly. Also move its loading to where it
architecturally belongs.
This is CVE-2016-9382 / XSA-192.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 93aa42b85ae0084ba7b749d0e990c94fbf0c17e3
master date: 2016-11-22 13:45:44 +0100
Andrew Cooper [Tue, 22 Nov 2016 13:20:58 +0000 (14:20 +0100)]
x86/hvm: Fix the handling of non-present segments
In 32bit, the data segments may be NULL to indicate that the segment is
ineligible for use. In both 32bit and 64bit, the LDT selector may be NULL to
indicate that the entire LDT is ineligible for use. However, nothing in Xen
actually checks for this condition when performing other segmentation
checks. (Note however that limit and writeability checks are correctly
performed).
Neither Intel nor AMD specify the exact behaviour of loading a NULL segment.
Experimentally, AMD zeroes all attributes but leaves the base and limit
unmodified. Intel zeroes the base, sets the limit to 0xfffffff and resets the
attributes to just .G and .D/B.
The use of the segment information in the VMCB/VMCS is equivalent to a native
pipeline interacting with the segment cache. The present bit can therefore
have a subtly different meaning, and it is now cooked to uniformly indicate
whether the segment is usable or not.
GDTR and IDTR don't have access rights like the other segments, but for
consistency, they are treated as being present so no special casing is needed
elsewhere in the segmentation logic.
AMD hardware does not consider the present bit for %cs and %tr, and will
function as if they were present. They are therefore unconditionally set to
present when reading information from the VMCB, to maintain the new meaning of
usability.
Intel hardware has a separate unusable bit in the VMCS segment attributes.
This bit is inverted and stored in the present field, so the hvm code can work
with architecturally-common state.
This is CVE-2016-9386 / XSA-191.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 04beafa8e6c66f5cd814c00e2d2b51cfbc41cb8a
master date: 2016-11-22 13:44:50 +0100
Jan Beulich [Tue, 25 Oct 2016 15:18:41 +0000 (17:18 +0200)]
x86/Viridian: don't depend on undefined register state
The high halves of all GPRs are undefined in 32-bit and compat modes,
and the dependency is being obfuscated by our structure field names not
matching architectural register names (it was actually while putting
together a patch to correct this that I noticed the issue here).
For consistency also use the architecturally correct names on the
output side.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
master commit: a709a3a646302e95ba42beac89264f6cdacd0c64
master date: 2016-10-14 14:09:42 +0200
Jan Beulich [Tue, 25 Oct 2016 15:18:16 +0000 (17:18 +0200)]
x86emul: fix pushing of selector registers
Both explicit PUSH and far CALL currently push unrelated data (the
segment attributes word) in the high half (attributes and limit in the
64-bit case in the high 48 bits) instead of zero. To avoid having to
apply this and further changes in multiple places, also fold the two
(respectively) far call/jmp instances into one.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 373923ed9c2ed36925f574387db2be2ebe5ce45a
master date: 2016-10-14 14:09:16 +0200
Changeset fbf96e6, "xentrace: correct formula to calculate
t_info_pages", broke the trace metadata page count calculation, by
mistaking t_info_first_offset as denominated in bytes, when in fact it
is denominated in words (uint32_t).
Effectively revert that change, and put a comment there to reduce the
chance that someone will make that mistake in the future.
Reviewed-by: Igor Druzhinin <igor.druzhinin@citrix.com> Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
master commit: 71b8b46111219a2f83f4f9ae06ac5409744ea86e
master date: 2016-10-11 14:23:52 +0100
Jan Beulich [Tue, 25 Oct 2016 15:16:30 +0000 (17:16 +0200)]
x86: defer not-present segment checks
Following on from commits 5602e74c60 ("x86emul: correct loading of
%ss") and bdb860d01c ("x86/HVM: correct segment register loading during
task switch") the point of the non-.present checks needs to be refined:
#NP (and its #SS companion), contrary to what the various
instruction pages in Intel's SDM suggest, gets checked for only after all type
and permission checks. The only checks getting done even later are the
long mode specific ones for system descriptors (which we don't support
yet) and 64-bit code segments (i.e. anything touching other than the
attribute byte).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86/hvm: Correct the position of the %cs L/D checks
Contrary to the description in the software manuals, in Long Mode, attempts to
load %cs check that D is not set in combination with L before the present flag
is checked.
This can be observed because the L/D check fails with #GP before the presence
check fails with #NP.
This change partially reverts c/s 78ff18c90 "x86: defer not-present segment
checks", taking it back to how it was in the v1 submission.
Dario Faggioli [Tue, 25 Oct 2016 15:15:35 +0000 (17:15 +0200)]
xen: credit1: return the 'time remaining to the limit' as next timeslice.
If vcpu x has run for 200us, and sched_ratelimit_us is
1000us, continue running x _but_ return 1000us-200us as
the next time slice. This way, next scheduling point will
happen in 800us, i.e., exactly at the point when x crosses
the threshold, and can be descheduled (if appropriate).
Right now (without this patch), we're always returning
sched_ratelimit_us (1000us, in the example above), which
means we're (potentially) allowing x to run more than
it should have been able to.
Note that, however, in order to avoid setting timers to very
short intervals, which is part of the purpose of rate limiting,
we never use a time slice smaller than a well defined threshold.
Such a threshold (CSCHED_MIN_TIMER, defined in this patch) is, in
general, independent of rate limiting, but it looks like a good idea
to set it to the minimum possible ratelimiting value.
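In outline (a sketch with approximate names from sched_credit.c):

    /* runtime = how long the current vcpu has run in this timeslice.
     * Keep running it, but only until it crosses the ratelimit
     * threshold, never programming a timer shorter than the minimum. */
    s_time_t tslice = MICROSECS(prv->ratelimit_us) - runtime;

    if ( tslice < CSCHED_MIN_TIMER )
        tslice = CSCHED_MIN_TIMER;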
Jan Beulich [Tue, 4 Oct 2016 13:05:41 +0000 (14:05 +0100)]
x86emul: honor guest CR0.TS and CR0.EM
We must not emulate any instructions accessing respective registers
when either of these flags is set in the guest view of the register, or
else we may do so on data not belonging to the guest's current task.
Being architecturally required behavior, the logic gets placed in the
instruction emulator instead of hvmemul_get_fpu(). It should be noted,
though, that hvmemul_get_fpu() being the only current handler for the
get_fpu() callback, we don't have an active problem with CR4: Both
CR4.OSFXSR and CR4.OSXSAVE get handled as necessary by that function.
This is XSA-190.
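The architectural rules being enforced, roughly (simplified from the
emulator's get_fpu() path):

    /* x87: #NM if CR0.EM or CR0.TS.  MMX/SSE: #UD if CR0.EM, #NM if CR0.TS. */
    switch ( type )
    {
    case X86EMUL_FPU_fpu:
        if ( cr0 & (X86_CR0_EM | X86_CR0_TS) )
            generate_exception(EXC_NM);
        break;
    case X86EMUL_FPU_mmx:
    case X86EMUL_FPU_xmm:
        if ( cr0 & X86_CR0_EM )
            generate_exception(EXC_UD);
        if ( cr0 & X86_CR0_TS )
            generate_exception(EXC_NM);
        break;
    }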
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
AMD F12h machines have an erratum which can cause DIV/IDIV to behave
unpredictably. The workaround is to set MSRC001_1029[31] but sometimes
there is no BIOS update containing that workaround so let's do it
ourselves unconditionally. It is simple enough.
Jan Beulich [Wed, 28 Sep 2016 15:03:13 +0000 (17:03 +0200)]
x86emul: correct loading of %ss
- Instead of #NP, #SS needs to be raised for non-present descriptors.
- Loading a null selector is fine in 64-bit mode at CPL != 3, as long
  as RPL == CPL.
- Don't lose the low two selector bits on null selector loads (also
  applies to %ds, %es, %fs, %gs, and LDTR).
Since we need CPL earlier now, also switch to using get_cpl() (instead
of open coding it).
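Sketch of the null-%ss rule (illustrative; error paths simplified):

    if ( (sel & ~3) == 0 )      /* null selector? */
    {
        /* Fine only in 64-bit mode, outside ring 3, with RPL == CPL;
         * otherwise #GP(0). */
        if ( !mode_64bit() || cpl == 3 || (sel & 3) != cpl )
            generate_exception(EXC_GP);
        sreg->sel  = sel;       /* keep the low two selector bits */
        sreg->attr = 0;         /* segment unusable */
    }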
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 5602e74c60c1ec2feef4cdd75376e4b1a1d7e681
master date: 2016-09-26 17:26:21 +0200
Jan Beulich [Wed, 28 Sep 2016 15:02:47 +0000 (17:02 +0200)]
x86/Intel: hide CPUID faulting capability from guests
We don't currently emulate it, so guests should not be misguided to
believe they can (try to) use it.
For now, simply return zero to guests for platform MSR reads, and only
accept (by discarding) writes of zero. If there are ever bits we
can safely expose to guests, let's handle them by white-listing.
(As a side note - according to SDM version 059 bit 31 is reserved on
all known families.)
Reported-by: Kyle Huey <me@kylehuey.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: b982a5bea4273a4b9fc007d5046bed8d1669c07f
master date: 2016-09-19 11:37:09 +0200
xen: credit2: properly schedule migration of a running vcpu.
If wanting to migrate a vcpu that is actually running,
we need to ask the scheduler to chime in as soon as
possible, to have the vcpu itself stopped and actually
moved.
Make sure this happens by, after setting all the relevant
flags, raising the scheduler softirq.
xen: credit1: fix mask to be used for tickling in Credit1
If there are idle pcpus inside the waking vcpu's
soft-affinity mask, we should really tickle one
of them (this is one of the purposes of the
__runq_tickle() function itself!), not just
any idle pcpu.
The issue was introduced in 02ea5031825d
("credit1: properly deal with pCPUs not in any cpupool"),
where the usage of idle_mask was changed without
updating the bottom of the function, where it
is also referenced.