Ian Jackson [Wed, 27 Apr 2016 15:08:49 +0000 (16:08 +0100)]
libxl: Do not trust frontend for disk eject event
Use the /libxl path for interpreting disk eject watch events: do not
read the backend path out of the frontend. Instead, use the version
in /libxl. That avoids us relying on the guest-modifiable
$frontend/backend pointer.
To implement this we store the path
/libxl/$guest/device/vbd/$devid/backend
in the evgen structure.
This is part of XSA-175.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Ian Jackson [Tue, 3 May 2016 17:39:36 +0000 (18:39 +0100)]
libxl: Do not trust frontend in libxl__devices_destroy
We need to enumerate the devices we have provided to a domain, without
trusting the guest-writeable (or, at least, guest-deletable) frontend
paths.
Instead, enumerate via, and read the backend path from, /libxl.
The console /libxl path is regular, so the special case for console 0
is not relevant any more: /libxl/GUEST/device/console/0 will be found,
and then libxl__device_destroy will DTRT to the right frontend path.
This is part of XSA-175.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Ian Jackson [Wed, 27 Apr 2016 15:34:19 +0000 (16:34 +0100)]
libxl: Provide libxl__backendpath_parse_domid
Multiple places in libxl need to figure out the backend domid of a
device. This can be discovered easily by looking at the backend path,
which always starts /local/domain/$backend_domid/.
There are no call sites yet.
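As a rough illustration (not the actual libxl code; error handling simplified), such a helper could look like:

  int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
                                     uint32_t *domid_out)
  {
      unsigned int domid;

      /* Backend paths always start /local/domain/$backend_domid/... */
      if (sscanf(be_path, "/local/domain/%u/", &domid) != 1)
          return ERROR_FAIL;
      *domid_out = domid;
      return 0;
  }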
This is part of XSA-175.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Ian Jackson [Mon, 16 May 2016 13:56:57 +0000 (14:56 +0100)]
libxl: Record backend/frontend paths in /libxl/$DOMID
This gives us a record of all the backends we have set up for a
domain, which is separate from the frontends in
/local/domain/$DOMID/device.
In particular:
1. A guest has write permission for the frontend path:
/local/domain/$DOMID/device/$KIND/$DEVID
which means that the guest can completely delete the frontend.
(They can't recreate it because they don't have write permission
on the containing directory.)
2. A guest has write permission for the backend path recorded in the
frontend, ie, it can write to
/local/domain/$DOMID/device/$KIND/$DEVID/backend
which means that the guest can break the association between
frontend and backend.
So we can't rely on iterating over the frontends to find all the
backends, or examining a frontend to discover how a device is
configured.
So, have libxl__device_generic_add record the frontend and backend
paths in /libxl/$DOMID/device, and have libxl__device_destroy remove
them again.
Create the containing directory /libxl/GUEST/device in
libxl__domain_make. The already existing xs_rm in devices_destroy_cb
will take care of removing it.
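As a rough sketch of the recording step (paths per the scheme above; variable names assumed, error handling omitted):

  char *libxl_path = GCSPRINTF("/libxl/%u/device/%s/%d", domid, kind, devid);

  xs_write(ctx->xsh, t, GCSPRINTF("%s/frontend", libxl_path),
           fe_path, strlen(fe_path));
  xs_write(ctx->xsh, t, GCSPRINTF("%s/backend", libxl_path),
           be_path, strlen(be_path));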
This is part of XSA-175.
Backport note: Backported over 7472ced, which fixes a bug in driver
domain teardown.
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Andrew Cooper [Thu, 2 Jun 2016 13:19:00 +0000 (14:19 +0100)]
xen/arm: Don't free p2m->root in p2m_teardown() before it has been allocated
If p2m_init() didn't complete successfully (e.g. due to VMID
exhaustion), p2m_teardown() is called and unconditionally tries to free
p2m->root before it has been allocated. free_domheap_pages() doesn't
tolerate NULL pointers.
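A minimal sketch of the guard (the root's allocation order is written symbolically and is an assumption here):

  void p2m_teardown(struct domain *d)
  {
      struct p2m_domain *p2m = &d->arch.p2m;

      /* p2m->root is still NULL if p2m_init() failed before allocating it. */
      if ( p2m->root )
          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
      p2m->root = NULL;

      /* ... rest of the teardown ... */
  }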
This is XSA-181
Reported-by: Aaron Cornelius <Aaron.Cornelius@dornerworks.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Julien Grall <julien.grall@arm.com>
Dario Faggioli [Fri, 27 May 2016 12:50:19 +0000 (14:50 +0200)]
sched: avoid races on time values read from NOW()
or (even in cases where there is no race, e.g., outside
of Credit2) avoid using a time sample which may be rather
old, and hence stale.
In fact, we should only sample NOW() from _inside_
the critical region within which the value we read is
used. If we don't, in case we have to spin for a while
before entering the region, when actually using it:
1) we will use something that, at the very least, is
not really "now", because of the spinning,
2) if someone else sampled NOW() during a critical
region protected by the lock we are spinning on,
and if we compare the two samples when we get
inside our region, our one will be 'earlier',
even if we actually arrived later, which is a
race.
In Credit2, we see an instance of 2), in runq_tickle(),
when it is called by csched2_context_saved() as it samples
NOW() before acquiring the runq lock. This makes things
look like the time went backwards, and it confuses the
algorithm (there's even a d2printk() about it, which would
trigger all the time, if enabled).
In RTDS, something similar happens in repl_timer_handler(),
and there's another instance in schedule() (in generic code),
so fix these cases too.
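The intended pattern, as a minimal sketch (lock and variable names are illustrative):

  spin_lock(&rqd->lock);
  now = NOW();    /* sample inside the critical region, right before use */
  /* ... compare and compute with 'now' while still holding the lock ... */
  spin_unlock(&rqd->lock);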
While there, improve csched2_vcpu_wake() and rt_vcpu_wake()
a little as well (removing a pointless initialization, and
moving the sampling a bit closer to its use). These two hunks
entail no further functional changes.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com> Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
RTDS: fix another instance of the 'read NOW()' race
which was overlooked in 779511f4bf5ae ("sched: avoid
races on time values read from NOW()").
Andrew Cooper [Fri, 27 May 2016 12:49:28 +0000 (14:49 +0200)]
xen/nested_p2m: Don't walk EPT tables with a regular PT walker
hostmode->p2m_ga_to_gfn() is a plain PT walker, and is not appropriate for a
general L1 p2m walk. It is fine for AMD as NPT shares the same format as
normal pagetables. For Intel EPT however, it is wrong.
The translation ends up correct (as the formats are sufficiently similar), but
the control bits in the lower 12 bits differ in meaning. A plain PT walker sets
the A/D bits (bits 5 and 6) as it walks, but in EPT tables these are the IPAT and
top bit of EMT (caching type). This in turn causes problems when the EPT
tables are subsequently used.
Replace hostmode->p2m_ga_to_gfn() with nestedhap_walk_L1_p2m() in
paging_gva_to_gfn(), which is the correct function for the task. This
involves making nestedhap_walk_L1_p2m() non-static, and adding
vmx_vmcs_enter/exit() pairs to nvmx_hap_walk_L1_p2m() as it is now reachable
from contexts other than v == current.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
master commit: bab2bd8e222de9e596699ac080ea985af828c4c4
master date: 2016-05-18 18:22:06 +0100
Jan Beulich [Fri, 27 May 2016 12:48:58 +0000 (14:48 +0200)]
x86/PoD: skip eager reclaim when possible
Reclaiming pages is pointless when the cache can already satisfy all
outstanding PoD entries, and doing reclaims in that case can be very
harmful to performance when that memory gets used by the guest, but
only to store zeroes there.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: 556c69f4efb09dd06e6bce4cbb0455287f19d02e
master date: 2016-05-12 18:02:21 +0200
Jan Beulich [Fri, 27 May 2016 12:48:00 +0000 (14:48 +0200)]
IOMMU/x86: per-domain control structure is not HVM-specific
... and hence should not live in the HVM part of the PV/HVM union. In
fact it's not even architecture specific (there already is a per-arch
extension type to it), so it gets moved out right to common struct
domain.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
master commit: af07377007d595b5d6422291bb1c932c16d1036f
master date: 2016-05-04 09:44:32 +0200
Jan Beulich [Fri, 27 May 2016 12:47:08 +0000 (14:47 +0200)]
x86: use optimal NOPs to fill the SMEP/SMAP placeholders
Alternatives patching code picks the most suitable NOPs for the
running system, so simply use it to replace the pre-populated ones.
Use an arbitrary, always available feature to key off from, but
hide this behind the new X86_FEATURE_ALWAYS.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86/compat: correct SMEP/SMAP NOPs patching
Correct the number of single byte NOPs we want to be replaced in case
neither SMEP nor SMAP are available.
Also simplify the expression adding these NOPs - at that location .
equals .Lcr4_orig, and removing that part of the expression fixes a
bogus ".space or fill with negative value, ignored" warning by very old
gas (which actually is what made me look at those constructs again).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 01a0bd0a7d72be638a359db3f8cf551123467d29
master date: 2016-05-13 18:15:55 +0100
master commit: f5610009529628314c9d1d52b00715fe855fcf06
master date: 2016-05-26 17:26:24 +0100
Jan Beulich [Fri, 27 May 2016 12:46:31 +0000 (14:46 +0200)]
x86: suppress SMEP and SMAP while running 32-bit PV guest code
Since such guests' kernel code runs in ring 1, their memory accesses,
at the paging layer, are supervisor mode ones, and hence subject to
SMAP/SMEP checks. Such guests cannot be expected to be aware of those
two features though (and so far we also don't expose the respective
feature flags), and hence may suffer page faults they cannot deal with.
While the placement of the re-enabling slightly weakens the intended
protection, it was selected such that 64-bit paths would remain
unaffected where possible. At the expense of a further performance hit
the re-enabling could be put right next to the CLACs.
Note that this introduces a number of extra TLB flushes - CR4.SMEP
transitioning from 0 to 1 always causes a flush, and it transitioning
from 1 to 0 may also do.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86/compat: Cleanup and further debugging of SMAP/SMEP fixup
* Abstract (X86_CR4_SMEP | X86_CR4_SMAP) behind XEN_CR4_PV32_BITS to avoid
open-coding the individual bits which are fixed up behind a 32-bit PV guest's
back.
* Show cr4_pv32_mask in the BUG register dump
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
x86: refine debugging of SMEP/SMAP fix
Instead of just latching cr4_pv32_mask into %rdx, correct the wrong value
found in %cr4 (to avoid triggering another BUG).
Also there is one more place for XEN_CR4_PV32_BITS to be used.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
x86: make SMEP/SMAP suppression tolerate NMI/MCE at the "wrong" time
There is one instruction boundary where any kind of interruption would
break the assumptions cr4_pv32_restore's debug mode checking makes on
the correlation between the CR4 register value and its in-memory cache.
Correct this (see the code comment) even in non-debug mode, or else
a subsequent cr4_pv32_restore would also be misguided into thinking the
features are enabled when they really aren't.
Jan Beulich [Fri, 27 May 2016 12:44:09 +0000 (14:44 +0200)]
x86/P2M: consolidate handling of types not requiring a valid MFN
As noted regarding the mixture of checks in p2m_pt_set_entry(),
introduce a new P2M type group that can be used everywhere we
just care about accepting operations with either a valid MFN or a type
permitted to be used without a (valid) MFN.
Note that p2m_mmio_dm is not included in P2M_NO_MFN_TYPES, as for the
intended purpose that one ought to be treated similar to p2m_invalid
(perhaps the two should ultimately get folded anyway).
Note further that PoD superpages now get INVALID_MFN used when creating
page table entries (was _mfn(0) before).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: c35eefded2992fc9b979f99190422527650872fd
master date: 2015-11-20 12:38:33 +0100
Julien Grall [Fri, 20 May 2016 13:37:42 +0000 (14:37 +0100)]
xen/arm: p2m: Release the p2m lock before undoing the mappings
Since commit 4b25423a "arch/arm: unmap partially-mapped memory regions",
Xen has been undoing the P2M mappings when an error occurred during
insertion or memory allocation.
This is done by calling apply_p2m_changes recursively; however, the
second call is done with the p2m lock taken, which will result in a
deadlock for the current processor.
The p2m lock is there to protect 2 threads modifying the
page tables concurrently. However, it does not guarantee the ordering of the
changes, i.e. if 2 threads request changes on regions that overlap,
then the result is undefined.
Therefore it is fine to move the recursive call to undo the changes
after the lock is released.
Julien Grall [Fri, 20 May 2016 13:37:41 +0000 (14:37 +0100)]
xen/arm: p2m: apply_p2m_changes: Do not undo more than necessary
Since commit 4b25423a "arch/arm: unmap partially-mapped memory regions",
Xen has been undoing the P2M mappings when an error occurred during
insertion or memory allocation.
The function apply_p2m_changes can work with regions not aligned to a
block size (2MB, 1GB) or page size (4KB). The mapping will be done by
splitting the region into a set of regions aligned to the sizes supported
by the page table.
The mapping of a region could fail when it is not possible to allocate
memory for an intermediate table (i.e. a new table, or when shattering a block).
When the mapping is undone, the end of the region is computed using the
base address of the current region and the size of the failing level.
However the failing level may not be the leaf one, therefore unrelated
entries will be removed.
Fix it by removing the mapping from the start address up to the last
region that has been successfully mapped.
Wei Liu [Tue, 17 May 2016 14:40:32 +0000 (16:40 +0200)]
libxl: fix old style declarations
Fix errors like:
/local/work/xen.git/dist/install/usr/local/include/libxl_uuid.h:59:1: error: 'static' is not at beginning of declaration [-Werror=old-style-declaration]
void static inline libxl_uuid_copy_0x040400(libxl_uuid *dst,
^
/local/work/xen.git/dist/install/usr/local/include/libxl_uuid.h:59:1: error: 'inline' is not at beginning of declaration [-Werror=old-style-declaration]
/local/work/xen.git/dist/install/usr/local/include/libxl.h:1233:1: error: 'static' is not at beginning of declaration [-Werror=old-style-declaration]
int static inline libxl_domain_create_restore_0x040200(
^
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
master commit: d5b6844942f7b21b24e92bccd85c1249592315c8
master date: 2016-04-20 14:34:04 +0100
Jan Beulich [Tue, 17 May 2016 12:55:32 +0000 (14:55 +0200)]
x86/mm: fully honor PS bits in guest page table walks
In L4 entries it is currently unconditionally reserved (and hence
should, when set, always result in a reserved bit page fault), and is
reserved on hardware not supporting 1Gb pages (and hence should, when
set, similarly cause a reserved bit page fault on such hardware).
This is CVE-2016-4480 / XSA-176.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 46699c7393bd991234b5642763c5c24b6b39a6c4
master date: 2016-05-17 14:41:14 +0200
xen/arm64: ensure that the correct SP is used for exceptions
The ARMv8 architecture has a SPSel ("stack pointer selection") machine
register that allows us to determine which exception level's stack
pointer is loaded when an exception occurs. As we don't want to
use the non-privileged SP_EL0 stack pointer -- or even assume that SP_EL0
points to a valid address in the hypervisor context -- we'll need to ensure
that our EL2 code sets the SPSel to SP_ELn mode, so exceptions that trap
to EL2 use the EL2 stack pointer.
This corrects an issue that can manifest as a hang-on-IRQ on some
arm64 cores if the firmware/bootloader has previously initialized SPSel
to 0; in which case Xen's exceptions will incorrectly use an invalid SP_EL0,
and will endlessly spin on the synchronous abort handler.
Vikram Sethi [Tue, 29 Mar 2016 04:46:12 +0000 (23:46 -0500)]
arm: Fix asynchronous aborts (SError exceptions) due to bogus PTEs
The ARMv8 architecture allows prefetching of data/instructions from
memory locations marked as normal memory. A prefetch does not
mean that the data/instruction has to be used/executed in the code
flow. All PTEs that appear valid to the MMU must contain a valid
physical address with proper attributes, otherwise an MMU table walk
might cause imprecise asynchronous aborts.
The way current XEN code is preparing page tables for frametable
and xenheap memory can create bogus PTEs. This patch fixes the
issue by clearing page table memory before populating EL2 L0/L1
PTEs. Without this patch XEN crashes on Qualcomm Technologies
server chips due to asynchronous aborts.
The speculative/prefetch feature explanation is scattered everywhere
in ARM specification but below two sections have useful information.
E2.8 Memory types and attributes (ver DDI0487A_h)
G4.12.6 External abort on a translation table walk (ver DDI0487A_h)
xen/arm: Force broadcast of TLB and instruction cache maintenance instructions
A UP guest may use TLB instructions to flush only on the local CPU.
Therefore, the TLB flush will not be broadcast across all the CPUs within
the same innershareable domain.
When the vCPU is migrated between different CPUs, it may be rescheduled
on a previous CPU where the TLB has not been flushed. The TLB may
contain stale entries which will result in incorrectly translating a VA to
an IPA, or even cause TLB conflicts.
To avoid such a situation, it is possible to set HCR_EL2.FB, which will
force the broadcast of TLB and instruction cache maintenance instructions.
The performance impact of setting HCR_EL2.FB will depend on how often
a guest makes use of local flush instructions.
ARM64 Linux kernel is SMP-aware (no possibility to build only for UP).
Most of the flush instructions are innershareable. The local flushes are
limited to the boot (1 per CPU) and when a task is getting a new ASID.
Therefore the impact of setting HCR.FB for those guests is very limited.
ARM32 Linux kernel offers the possibility to be built either for SMP or
UP. The number of local flushes is very limited in the former kernel
whilst the latter will only issue local flushes. Therefore there will be
an impact of setting HCR.FB for a guest kernel only built for UP.
Note that the SMP kernel can run in a domain using 1 vCPU and it
will still make use of innershareable flush instructions.
Looking at other OSes, such as FreeBSD, they are very similar to the ARM32
Linux kernel (i.e. offering two configurations: SMP and UP).
However, nothing prevents an SMP-aware kernel from making more frequent
use of local flush instructions.
In the case that HCR_EL2.FB is not set, Xen would need to:
* Flush all the TLBs for the VMID associated to this domain
* Invalidate all the entries from the branch predictor
* Invalidate all the entries from the instruction cache
Those actions would only be needed when the vCPU is migrating between 2
physical CPUs.
Whilst this solution would have a negative performance impact on kernels
which do not heavily use local flush instructions, this may improve
performance for kernels only built for UP system.
For now implement the easiest solution (i.e. setting HCR_EL2.FB). We can
revisit it if the performance impact is too high for UP kernels.
Jan Beulich [Mon, 9 May 2016 11:16:10 +0000 (13:16 +0200)]
x86/shadow: account for ioreq server pages before complaining about not found mapping
prepare_ring_for_helper(), just like share_xen_page_with_guest(),
takes a write reference on the page, and hence should similarly be
accounted for when determining whether to log a complaint.
This requires using recursive locking for the ioreq server lock, as the
offending invocation of sh_remove_all_mappings() is down the call stack
from hvm_set_ioreq_server_state(). (While not strictly needed to be
done in all other instances too, convert all of them for consistency.)
At once improve the usefulness of the shadow error message: Log all
values involved in triggering it as well as the GFN (to aid
understanding which guest page it is that there is a problem with - in
cases like the one here the GFN is invariant across invocations, while
the MFN obviously can [and will] vary).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 77eb5dbeff78bbe549793325520f59ab46a187f8
master date: 2016-05-02 09:20:17 +0200
Jan Beulich [Mon, 9 May 2016 11:15:14 +0000 (13:15 +0200)]
x86/time: fix gtime_to_gtsc for vtsc=1 PV guests
For vtsc=1 PV guests, rdtsc is trapped and calculated from get_s_time()
using gtime_to_gtsc. Similarly the tsc_timestamp, part of struct
vcpu_time_info, is calculated from stime_local_stamp using
gtime_to_gtsc.
However gtime_to_gtsc can return 0, if time < vtsc_offset, which can
actually happen when gtime_to_gtsc is called passing stime_local_stamp
(the caller function is __update_vcpu_system_time).
In that case the pvclock protocol doesn't work properly and the guest is
unable to calculate the system time correctly. As a consequence when the
guest tries to set a timer event (for example calling the
VCPUOP_set_singleshot_timer hypercall), the event will be in the past
causing Linux to hang.
The purpose of the pvclock protocol is to allow the guest to calculate
the system_time in nanosec correctly. The guest calculates it by adding
to vcpu_time_info.system_time the difference (rdtsc -
vcpu_time_info.tsc_timestamp), converted back to nanoseconds.
Given that with vtsc=1:
rdtsc = to_vtsc_scale(NOW() - vtsc_offset)
vcpu_time_info.tsc_timestamp = to_vtsc_scale(vcpu_time_info.system_time - vtsc_offset)
The expression evaluates to NOW(), which is what we want. However when
stime_local_stamp < vtsc_offset, vcpu_time_info.tsc_timestamp is
actually 0. As a consequence the calculated overall system_time is not
correct.
This patch fixes the issue by letting gtime_to_gtsc return a negative
integer in the form of a wrapped-around unsigned integer; thus, when the
guest subtracts vcpu_time_info.tsc_timestamp from rdtsc, it will calculate
the right value.
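A minimal sketch of the guest-side arithmetic this relies on (structure/field and helper names are illustrative):

  /* Unsigned subtraction wraps around, so a "negative" tsc_timestamp still
   * yields the correct delta when subtracted from the current TSC value. */
  uint64_t delta = rdtsc() - info->tsc_timestamp;
  uint64_t system_time = info->system_time +
      scale_delta(delta, info->tsc_to_system_mul, info->tsc_shift);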
Signed-off-by: Jan Beulich <JBeulich@suse.com> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: d22c9bf7c3067b17cbd9cdfd8b81941dd6fb8d77
master date: 2016-04-28 15:06:56 +0200
During the investigation of very slow dump times of guest images in
Amazon EC2 instance, it was discovered that the
register_oldmem_pfn_is_ram() API implemented by the upstream kernel
commit 997c136f518c5debd63847e78e2a8694f56dcf90:
fs/proc/vmcore.c: add hook to read_from_oldmem() to check
for non-ram pages
was not being called. This was because the PV driver making the call
to the register_oldmem_pfn_is_ram() API was not including the
kernel header file that is used to communicate support of the API in the
kernel. Fix the issue by including the required header file.
Signed-off-by: Mike Meyer <mike.meyer@teradata.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Olaf Hering <olaf@aepfle.de>
Jan Beulich [Mon, 9 May 2016 11:05:42 +0000 (13:05 +0200)]
x86/HVM: fix forwarding of internally cached requests
Forwarding entire batches to the device model when an individual
iteration of them got rejected by internal device emulation handlers
with X86EMUL_UNHANDLEABLE is wrong: The device model would then handle
all iterations, without the internal handler getting to see any past
the one it returned failure for. This causes misbehavior in at least
the MSI-X and VGA code, which want to see all such requests for
internal tracking/caching purposes. But note that this does not apply
to buffered I/O requests.
This in turn means that the condition in hvm_process_io_intercept() of
when to crash the domain was wrong: Since X86EMUL_UNHANDLEABLE can
validly be returned by the individual device handlers, we mustn't
blindly crash the domain if such occurs on other than the initial
iteration. Instead we need to distinguish hvm_copy_*_guest_phys()
failures from device specific ones, and then the former need to always
be fatal to the domain (i.e. also on the first iteration), since
otherwise we again would end up forwarding a request to qemu which the
internal handler didn't get to see.
The adjustment should be okay even for stdvga's MMIO handling:
- if it is not caching then the accept function would have failed so we
won't get into hvm_process_io_intercept(),
- if it issued the buffered ioreq then we only get to the p->count
reduction if hvm_send_ioreq() actually encountered an error (in which
case we don't care about the request getting split up).
Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
retry") went too far in removing code from hvm_process_io_intercept():
When there were successfully handled iterations, the function should
continue to return success with a clipped repeat count.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
x86/HVM: fix forwarding of internally cached requests (part 2)
Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached
requests") wasn't quite complete: hvmemul_do_io() also needs to
propagate up the clipped count. (I really should have re-tested the
forward port resulting in the earlier change, instead of relying on the
testing done on the older version of Xen which the fix was first needed
for.)
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 96ae556569b8eaedc0bb242932842c3277b515d8
master date: 2016-03-31 14:52:04 +0200
master commit: 670ee15ac1e3de7c15381fdaab0e531489b48939
master date: 2016-04-28 15:09:26 +0200
David Vrabel [Mon, 9 May 2016 11:05:13 +0000 (13:05 +0200)]
x86/fpu: improve check for XSAVE* not writing FIP/FDP fields
The hardware may not write the FIP/FDP fields with an XSAVE*
instruction, e.g., with XSAVEOPT/XSAVES if the state hasn't changed
or on AMD CPUs when a floating point exception is not pending. We
need to identify this case so we can correctly apply the check for
whether to save/restore FCS/FDS.
By poisoning FIP in the saved state we can check if the hardware
writes to this field. The poison value is both: a) non-canonical; and
b) random with a vanishingly small probability of matching a value
written by the hardware (1 / 2^63 ≈ 10^-19).
The poison value is fixed and thus knowable by a guest (or guest
userspace). This could allow the guest to cause Xen to incorrectly
detect that the field has not been written. But: a) this requires the
FIP register to be a full 64 bits internally which is not the case for
all current AMD and Intel CPUs; and b) this only allows the guest (or
a guest userspace process) to corrupt its own state (i.e., it cannot
affect the state of another guest or another user space process).
This results in smaller code with fewer branches and is more
understandable.
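A minimal sketch of the poison-and-check idea (the constant and field name below are illustrative, not the values used by the patch):

  #define FIP_POISON 0xaa55aa55aa55aa55ULL   /* non-canonical on purpose */

  fpu_ctxt->fip = FIP_POISON;                /* field name illustrative */
  xsave(v, mask);
  if ( fpu_ctxt->fip == FIP_POISON )
  {
      /* Hardware did not write FIP/FDP: don't trust FCS/FDS either. */
  }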
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Intel confirmed that 64-bit {F,}XRSTOR sign-extend FIP from bit 47.
While leaving the description above intact, modify the code comment
accordingly.
David Vrabel [Mon, 9 May 2016 11:04:26 +0000 (13:04 +0200)]
x86/hvm: add HVM_PARAM_X87_FIP_WIDTH
Add the HVM parameter HVM_PARAM_X87_FIP_WIDTH to allow tools and the guest
to adjust the width of the FIP/FDP registers to be saved/restored by
the hypervisor. This is in case the hypervisor heuristics do not do
the right thing.
Add this parameter to the set saved during domain save/migrate.
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
master commit: 5d768fb1f3f7b011e7b6e75909c7f4841730de60
master date: 2016-02-26 12:30:11 +0100
David Vrabel [Mon, 9 May 2016 11:03:15 +0000 (13:03 +0200)]
x86/fpu: add a per-domain field to set the width of FIP/FDP
The x86 architecture allows either: a) the 64-bit FIP/FDP registers to
be restored (clearing FCS and FDS); or b) the 32-bit FIP/FDP and
FCS/FDS registers to be restored (clearing the upper 32-bits).
Add a per-domain field to indicate which of these options a guest
needs. The options are: 8, 4 or 0, where 0 indicates that the
hypervisor should automatically guess the FIP width by checking the
value of FIP/FDP when saving the state (this is the existing
behaviour).
The FIP width is initially automatic but is set explicitly in the
following cases:
- 32-bit PV guest: 4
- Newer CPUs that do not save FCS/FDS: 8
The x87_fip_width field is placed into an existing 1 byte hole in
struct arch_domain.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Fix build.
Tim Deegan [Wed, 16 Mar 2016 16:56:04 +0000 (16:56 +0000)]
x86: limit GFNs to 32 bits for shadowed superpages.
Superpage shadows store the shadowed GFN in the backpointer field,
which for non-BIGMEM builds is 32 bits wide. Shadowing a superpage
mapping of a guest-physical address above 2^44 would lead to the GFN
being truncated there, and a crash when we come to remove the shadow
from the hash table.
Track the valid width of a GFN for each guest, including reporting it
through CPUID, and enforce it in the shadow pagetables. Set the
maximum width to 32 for guests where this truncation could occur.
This is XSA-173.
Reported-by: Ling Liu <liuling-it@360.cn> Signed-off-by: Tim Deegan <tim@xen.org> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Jan Beulich [Tue, 29 Mar 2016 13:19:51 +0000 (15:19 +0200)]
x86: fix information leak on AMD CPUs
The fix for XSA-52 was wrong, and so was the change synchronizing that
new behavior to the FXRSTOR logic: AMD's manuals explicitly state that
writes to the ES bit are ignored, and it instead gets calculated from
the exception and mask bits (it gets set whenever there is an unmasked
exception, and cleared otherwise). Hence we need to follow that model
in our workaround.
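A minimal sketch of the semantics described above (x87 status/control word masks written out explicitly):

  /* ES (bit 7 of FSW) is set iff any exception flag (bits 0-5) is set
   * and the corresponding FCW mask bit is clear. */
  if ( fsw & ~fcw & 0x003f )
      fsw |= 0x0080;
  else
      fsw &= ~0x0080;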
This is CVE-2016-3158 / CVE-2016-3159 / XSA-172.
[xen/arch/x86/xstate.c:xrstor: CVE-2016-3158]
[xen/arch/x86/i387.c:fpu_fxrstor: CVE-2016-3159]
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 7bd9dc3adfbb014c55f0928ebb3b20950ca9c019
master date: 2016-03-29 14:24:26 +0200
Ross Lagerwall [Fri, 18 Mar 2016 07:09:54 +0000 (08:09 +0100)]
vmx: restore debug registers when injecting #DB traps
Commit a929bee0e652 ("x86/vmx: Fix injection of #DB traps following
XSA-156") prevents an infinite loop in certain #DB traps. However, it
changed the behavior to not call hvm_hw_inject_trap() for #DB and #AC
traps, which means that the debug registers are not restored
correctly, and it nullified commit b56ae5b48c38 ("VMX: fix/adjust trap
injection").
To fix this, restore the original code path through hvm_inject_trap(),
but ensure that the struct hvm_trap is populated with all the required
data.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: ba22f1f4732acb4d5aebd779122e91753a0e374d
master date: 2016-03-15 12:19:13 +0100
David Vrabel [Fri, 18 Mar 2016 07:09:10 +0000 (08:09 +0100)]
x86: don't flush the whole cache when changing cachability
Introduce the FLUSH_VA_VALID flag to flush_area_mask() and friends to
say that it is safe to use CLFLUSH (i.e., the virtual address is still
valid).
Use this when changing the cachability of the Xen direct mappings (in
response to the guest changing the cachability of its mappings). This
significantly improves performance by avoiding an expensive WBINVD.
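A minimal sketch of the fast path this enables (flag handling and line size simplified):

  if ( flags & FLUSH_VA_VALID )
  {
      const char *p = va;

      for ( ; p < (const char *)va + sz; p += clflush_size )
          asm volatile ( "clflush %0" :: "m" (*p) );
  }
  else
      wbinvd();    /* fall back to flushing the whole cache */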
This fixes a performance regression introduced by
c61a6f74f80eb36ed83a82f713db3143159b9009 ("x86: enforce consistent
cachability of MMIO mappings"), the fix for XSA-154.
e.g., A set_memory_wc() call in Linux:
before: 4097 us
after: 47 us
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: dff593c7b6eb1cfd4591b662a880a0c9325cce40
master date: 2016-03-10 16:51:03 +0100
We must ensure that the prod/cons indexes are only read once and that
the compiler won't try to optimize the reads, that is, split the read
of these into multiple instructions which influence later branch code.
As such, insert barriers when fetching the cons and prod indexes.
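A minimal sketch of the fetch-then-barrier pattern (field names illustrative):

  prod = intf->rsp_prod;
  cons = intf->rsp_cons;
  xen_mb();   /* make sure prod/cons are each read exactly once, before use */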
Jan Beulich [Fri, 4 Mar 2016 12:16:07 +0000 (13:16 +0100)]
x86emul: limit-check branch targets
All branches need to #GP when their target violates the segment limit
(in 16- and 32-bit modes) or is non-canonical (in 64-bit mode). For
near branches facilitate this via a zero-byte instruction fetch from
the target address (resulting in address translation and validation
without an actual read from memory), while far branches get dealt with
by breaking up the segment register loading into a read-and-validate
part and a write one. The latter at once allows correcting some
ordering issues in how the individual emulation steps get carried out:
Before updating machine state, all exceptions unrelated to that state
updating should have got raised (i.e. the only ones possibly resulting
in partly updated state are faulting memory writes [pushes]).
Note that while not immediately needed here, write and distinct read
emulation routines get updated to deal with zero byte accesses too, for
overall consistency.
Reported-by: 刘令 <liuling-it@360.cn> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 81d3a0b26c1672c60b2a54dd8780e6f6472d2328
master date: 2016-02-26 12:14:39 +0100
Jan Beulich [Fri, 4 Mar 2016 12:14:39 +0000 (13:14 +0100)]
x86emul: fix rIP handling
Deal with rIP just like with any other register: Truncate to designated
width upon entry, write back the zero-extended 32-bit value when
emulating 32-bit code, and leave the upper 48 bits unchanged for 16-bit
code.
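A minimal sketch of the write-back rule described above (variable names illustrative):

  switch ( op_bytes )
  {
  case 2: /* 16-bit: leave the upper 48 bits unchanged */
      regs->rip = (regs->rip & ~0xffffUL) | (uint16_t)new_ip;
      break;
  case 4: /* 32-bit: write back the zero-extended value */
      regs->rip = (uint32_t)new_ip;
      break;
  case 8:
      regs->rip = new_ip;
      break;
  }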
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 0640ffb67fb92e2561c63b9308c27b71281fdd72
master date: 2016-02-18 15:05:34 +0100
Each ITARGETSR register is 4 bytes wide and the offset is in bytes.
The current implementation computes the end of the range wrongly,
resulting in only ITARGETSR{0,1} being emulated read-only. The rest will be
treated as read-write.
As 8 registers should be read-only, the end of the range should be
ITARGETSR + (4 * 8) - 1.
For convenience introduce ITARGETSR7 and ITARGETSR8.
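A rough sketch of the corrected range check (written as a GCC case range, as the vGIC emulation does; treat it as illustrative):

  case GICD_ITARGETSR ... GICD_ITARGETSR + (4 * 8) - 1:
      /* ITARGETSR0..ITARGETSR7 are read-only */
      goto write_ignore;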
Julien Grall [Fri, 4 Mar 2016 12:11:20 +0000 (13:11 +0100)]
xen/arm: vgic-v2: Report the correct GICC size to the guest
The GICv2 DT node is usually used by the guest to know the address/size
of the regions (GICD, GICC...) to map into their virtual memory.
While the GICv2 spec requires the size of the GICC to be 8KB, we
correctly do an 8KB stage-2 mapping but erroneously report 256 in the
device tree (based on GUEST_GICC_SIZE).
I bet we didn't see any issue so far because all the registers except
GICC_DIR live in the first 256 bytes of the GICC region and all the
guests I have seen so far are driving the GIC with GICC_CTLR.EIOmode =
0.
Signed-off-by: Julien Grall <julien.grall@citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
[ ijc -- fixed some typos in commit message ]
Ian Campbell [Thu, 5 Nov 2015 14:46:12 +0000 (14:46 +0000)]
tools: pygrub: if partition table is empty, try treating as a whole disk
pygrub (in identify_disk_image()) detects a DOS style partition table
via the presence of the 0xaa55 signature at the end of the first
sector of the disk.
However this signature is also present in whole-disk configurations
when there is an MBR on the disk. Many filesystems (e.g. ext[234])
include leading padding in their on disk format specifically to enable
this.
So if we think we have a DOS partition table but do not find any
actual partition table entries we may as well try looking at it as a
whole disk image. Worst case is we probe and find there isn't anything
there.
This was reported by Sjors Gielen in Debian bug #745419. The fix was
inspired by a patch by Adi Kriegisch in
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745419#27
Tested by genext2fs'ing my /boot into a new raw image (works) and
then:
dd if=/usr/lib/grub/i386-pc/g2ldr.mbr of=img conv=notrunc bs=512 count=1
to add an MBR (with 0xaa55 signature) to it, which after this patch
also works.
Dirk Behme [Thu, 18 Feb 2016 14:25:43 +0000 (15:25 +0100)]
xen/arm64: Make sure we get all debug output
Starting in the wrong ELx mode I get the following debug output:
...
- Current EL 00000004 -
- Xen must be entered in NS EL2 mode -
- Boot failed -
The output of "Please update the bootloader" is missing here, because
string concatenation in gas, unlike in C, keeps the \0 between each
individual string.
Make sure this is output, too. With this, we get
...
- Current EL 00000004 -
- Xen must be entered in NS EL2 mode -
- Please update the bootloader -
- Boot failed -
as intended.
Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
[ ijc -- added same change to arm32 case ]
master commit: c31d34082555566eb27d0d1eb42962f72fa886d3
master date: 2016-02-18 10:13:42 +0000
Anthony PERARD [Wed, 17 Feb 2016 15:49:49 +0000 (16:49 +0100)]
hvmloader: fix scratch_alloc to avoid overlaps
scratch_alloc() sets scratch_start to the last byte of the current
allocation. The value of scratch_start is then reused as-is (if it is
already aligned) in the next allocation. This results in a potential reuse
of the last byte of the previous allocation.
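A minimal sketch of the corrected bookkeeping (simplified; variable names illustrative):

  uint32_t s = (scratch_start + align - 1) & ~(align - 1);

  /* Advance scratch_start past the allocation, not to its last byte,
   * so the next (aligned) allocation cannot overlap this one. */
  scratch_start = s + size;

  return (void *)(unsigned long)s;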
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 4ab3ac074cb1f101f42e02103fa263a1f4f422b5
master date: 2016-02-10 14:46:45 +0100
Jan Beulich [Wed, 17 Feb 2016 15:49:18 +0000 (16:49 +0100)]
x86/nHVM: avoid NULL deref during INVLPG intercept handling
When intercepting (or emulating) L1 guest INVLPG, the nested P2M
pointer may be (is?) NULL, and hence there's no point in calling
p2m_flush(). In fact doing so would cause a dereference of that NULL
pointer at least in the ASSERT() right at the beginning of the
function.
While so far nothing supports hap_invlpg() being reachable from the
INVLPG intercept paths (only INVLPG insn emulation would lead there),
and hence the code in question (added by dd6de3ab99 ["Implement
Nested-on-Nested"]) appears to be dead, this seems to be the change
which can be agreed on as an immediate fix. Ideally, however, the
problematic code would go away altogether. See thread at
lists.xenproject.org/archives/html/xen-devel/2016-01/msg03762.html.
Reported-by: 刘令 <liuling-it@360.cn> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
master commit: 86c59615f4e7f38df24182f20d9dbdec3299c514
master date: 2016-02-09 13:22:13 +0100
Juergen Gross [Wed, 17 Feb 2016 15:48:37 +0000 (16:48 +0100)]
credit: recalculate per-cpupool credits when updating timeslice
When modifying the timeslice of the credit scheduler in a cpupool the
cpupool global credit value (n_cpus * credits_per_tslice) isn't
recalculated. This will lead to wrong scheduling decisions later.
Juergen Gross [Wed, 17 Feb 2016 15:48:16 +0000 (16:48 +0100)]
credit: update timeslice under lock
When updating the timeslice of the credit scheduler, protect the
scheduler's private data with its lock. Today a possible race could
result only in some weird scheduling decisions during one timeslice,
but further adjustments will need the lock anyway.
Andrew Cooper [Wed, 17 Feb 2016 15:47:52 +0000 (16:47 +0100)]
x86/vmx: don't clobber exception_bitmap when entering/leaving emulated real mode
Most updates to the exception bitmaps set or clear individual bits.
However, entering or exiting emulated real mode unilaterally clobbers it,
leaving the exit code to recalculate what it should have been. This is error
prone, and indeed currently fails to recalculate the TRAP_no_device intercept
appropriately.
Instead of overwriting exception_bitmap when entering emulated real mode, move
the override into vmx_update_exception_bitmap() and leave exception_bitmap
unmodified.
This means that recalculation is unnecessary, and that the use of
vmx_fpu_leave() and vmx_update_debug_state() while in emulated real mode
doesn't result in TRAP_no_device and TRAP_int3 being un-intercepted.
This is only a functional change on hardware lacking unrestricted guest
support.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 78c93adf0a7f6a7abe249a63e7398ca1221a6d25
master date: 2016-02-02 14:00:52 +0100
Ian Campbell [Wed, 17 Feb 2016 15:47:21 +0000 (16:47 +0100)]
x86/mce: fix misleading indentation in init_nonfatal_mce_checker()
Debian bug 812166[0] reported this build failure due to
Wmisleading-indentation with gcc-6:
non-fatal.c: In function 'init_nonfatal_mce_checker':
non-fatal.c:103:2: error: statement is indented as if it were guarded by... [-Werror=misleading-indentation]
switch (c->x86_vendor) {
^~~~~~
non-fatal.c:97:5: note: ...this 'if' clause, but it is not
if ( __get_cpu_var(poll_bankmask) == NULL )
^~
I was unable to reproduce (xen builds cleanly for me with "6.0.0 20160117
(experimental) [trunk revision 232481]") but looking at the code the issue
above is clearly real.
Correctly reindent the if statement.
This file uses Linux coding style (in fact the use of Xen style for
this line is the root cause of the warning), so use tabs, and while
there remove the whitespace inside the if as Linux does.
Jan Beulich [Wed, 17 Feb 2016 15:46:52 +0000 (16:46 +0100)]
x86: fix (and simplify) MTRR overlap checking
Obtaining one individual range per variable range register (via
get_mtrr_range()) was bogus from the beginning, as these registers may
cover multiple disjoint ranges. Do away with that, in favor of simply
comparing masked addresses.
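A rough sketch of the masked-address comparison (base/mask taken from a variable-range MSR pair; purely illustrative):

  /* Two variable ranges overlap iff their bases agree on every address bit
   * that is significant in both masks. */
  bool overlap = ((base0 ^ base1) & mask0 & mask1) == 0;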
Also, for is_var_mtrr_overlapped()'s result to be correct when called
from mtrr_wrmsr(), generic_set_mtrr() must update saved state first.
As minor cleanup changes, constify is_var_mtrr_overlapped()'s parameter
and make mtrr_wrmsr() static.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 3272230848f36eb5bbb660216898a90048a81d9f
master date: 2016-01-21 16:11:04 +0100
Jan Beulich [Wed, 17 Feb 2016 15:43:56 +0000 (16:43 +0100)]
x86/VMX: sanitize rIP before re-entering guest
... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)
Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.
Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.
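The canonicalization amounts to a sign extension from the virtual-address width, roughly (assuming VADDR_BITS holds that width):

  regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >> (64 - VADDR_BITS);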
This is CVE-2016-2271 / XSA-170.
Reported-by: 刘令 <liuling-it@360.cn> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: ffbbfda37782a2408953af1a3e00ada80bb141bc
master date: 2016-02-17 16:18:08 +0100
Jan Beulich [Wed, 17 Feb 2016 15:43:21 +0000 (16:43 +0100)]
x86: enforce consistent cachability of MMIO mappings
We've been told by Intel that inconsistent cachability between
multiple mappings of the same page can affect system stability only
when the affected page is an MMIO one. Since the stale data issue is
of no relevance to the hypervisor (since all guest memory accesses go
through proper accessors and validation), handling of RAM pages
remains unchanged here. Any MMIO mapped by domains however needs to be
done consistently (all cachable mappings or all uncachable ones), in
order to avoid Machine Check exceptions. Since converting existing
cachable mappings to uncachable (at the time an uncachable mapping
gets established) would in the PV case require tracking all mappings,
allow MMIO to only get mapped uncachable (UC, UC-, or WC).
This also implies that in the PV case we mustn't use the L1 PTE update
fast path when cachability flags get altered.
Since in the HVM case at least for now we want to continue honoring
pinned cachability attributes for pages not mapped by the hypervisor,
special case handling of r/o MMIO pages (forcing UC) gets added there.
Arguably the counterpart change to p2m-pt.c may not be necessary, since
UC- (which already gets enforced there) is probably strict enough.
Note that the shadow code changes include fixing the write protection
of r/o MMIO ranges: shadow_l1e_remove_flags() and its siblings, other
than l1e_remove_flags() and alike, return the new PTE (and hence
ignoring their return values makes them no-ops).
This is CVE-2016-2270 / XSA-154.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: c61a6f74f80eb36ed83a82f713db3143159b9009
master date: 2016-02-17 16:16:53 +0100
Ian Campbell [Wed, 20 Jan 2016 13:06:22 +0000 (14:06 +0100)]
docs: correct descriptions of gnttab_max_{, maptrack}_frames
gnttab_max_frames incorrectly referred to numbers of grant tab
operations and gnttab_max_maptrack_frames was confusingly worded.
Add the default for gnttab_max_frames while here (it's currently the
same on all arches since no arch uses the available arch override) and
adjust the default for gnttab_max_maptrack_frames to match the normal
form.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: ef17887d848dae0ca46231b47bf30d3c1d4aa87d
master date: 2016-01-19 16:24:44 +0000
Andrew Cooper [Wed, 20 Jan 2016 13:05:48 +0000 (14:05 +0100)]
x86/vmx: Fix injection of #DB traps following XSA-156
Most #DB exceptions are traps rather than faults, meaning that the instruction
pointer in the exception frame points after the instruction rather than at it.
However, VMX intercepts all have fault semantics, even when intercepting a
trap. Re-injecting an intercepted trap as a fault causes an infinite loop in
the guest, by re-executing the same trapping instruction repeatedly. This
breaks debugging inside the guest.
Introduce a helper which copies VM_EXIT_INTR_INFO to VM_ENTRY_INTR_INFO, and
use it to mirror the intercepted interrupt back to the guest.
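A minimal sketch of the helper's core (VMCS field names as used by Xen; error handling omitted):

  unsigned long intr_info;

  __vmread(VM_EXIT_INTR_INFO, &intr_info);
  __vmwrite(VM_ENTRY_INTR_INFO, intr_info);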
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 0747bc8b4d85f3fc0ee1e58418418fa0229e8ff8
master date: 2016-01-05 11:28:56 +0000
Jan Beulich [Wed, 20 Jan 2016 13:03:02 +0000 (14:03 +0100)]
x86/VMX: prevent INVVPID failure due to non-canonical guest address
While INVLPG (and on SVM INVLPGA) don't fault on non-canonical
addresses, INVVPID fails (in the "individual address" case) when passed
such an address.
Since such intercepted INVLPG are effectively no-ops anyway, don't fix
this in vmx_invlpg_intercept(), but instead have paging_invlpg() never
return true in such a case.
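A minimal sketch of the check (assuming an is_canonical_address() helper, as on x86):

  if ( !is_canonical_address(va) )
      return 0;   /* treat the INVLPG as a no-op; no INVVPID is issued */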
This is CVE-2016-1571 / XSA-168.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: bf05e88ed7342a91cceba050b6c622accb809842
master date: 2016-01-20 13:50:10 +0100
Andrew Cooper [Tue, 10 Nov 2015 10:46:44 +0000 (10:46 +0000)]
tools/ocaml/xb: Correct calculations of data/space in the ring
ml_interface_{read,write}() would miscalculate the quantity of
data/space in the ring if it crossed the ring boundary, and incorrectly
return a short read/write.
This causes a protocol stall, as either side of the ring ends up waiting
for what they believe to be the other side needing to take the next
action.
Correct the calculations to cope with crossing the ring boundary.
In addition, correct the error detection. It is a hard error if the
producer index gets more than a ring size ahead of the consumer, or if
the consumer ever overtakes the producer.
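The corrected accounting, in rough C terms (the indexes are free-running, so unsigned subtraction copes with crossing the ring end):

  uint32_t used  = prod - cons;               /* bytes queued, even across the boundary */
  uint32_t space = XENSTORE_RING_SIZE - used;

  if ( used > XENSTORE_RING_SIZE )
      return -EIO;   /* producer more than a ring ahead, or consumer overtook it */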
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Reviewed-by: David Scott <dave@recoil.org>
(cherry picked from commit 8a2c11f876e6cf9c74f2bcaed5a997adc57da888)
(cherry picked from commit 6150df9f3f99ecbcbd9917002186d1d895b5602e)
In Quota.merge, we merge two quota hashtables, orig_quota and mod_quota, putting
the results into dest_quota. These hashtables map domids to the number of
entries currently owned by that domain.
When mod_quota contains an entry for a domid that was not present in orig_quota
(or dest_quota), the call to get_entry caused Quota.merge to raise a Not_found
exception. This propagates back to the client as an ENOENT error, which is not
an appropriate return value from some operations, such as transaction_end.
This situation can arise when a transaction that introduces a domain (hence
calling Quota.add_entry) needs to be coalesced due to concurrent xenstore
activity.
This patch handles the merge in the case where mod_quota contains an entry not
present in orig_quota (or in dest_quota) by treating that hashtable as having
existing value 0.
Apropos of discussion in
"OVMF related osstest failures on multiple branches"
http://lists.xenproject.org/archives/html/xen-devel/2016-01/msg00442.html
We believe the older ovmf.git does not work when built with the gcc in
Debian jessie. We do not know where this bug lies but we are fixing
it by updating ovmf.
We have decided that we are not in a position to review the changes to
OVMF upstream, and ourselves decide what to cherry pick. Instead we
will update the revision wholesale and use the xen.git stable
branches' push gate.
Dario Faggioli [Fri, 20 Jun 2014 14:09:00 +0000 (16:09 +0200)]
blktap: Fix two 'maybe uninitialized' variables
[ Cross-ported to blktap1 from 345e44a85d71a
"blktap2: Fix two 'maybe uninitialized' variables" -iwj;
Remainder of commit message is from blktap2's version. ]
for which gcc 4.9.0 complains about, like this:
block-qcow.c: In function `get_cluster_offset':
block-qcow.c:431:3: error: `tmp_ptr' may be used uninitialized in this function
[-Werror=maybe-uninitialized]
memcpy(tmp_ptr, l1_ptr, 4096);
^
block-qcow.c:606:7: error: `tmp_ptr2' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
if (write(s->fd, tmp_ptr2, 4096) != 4096) {
^
cc1: all warnings being treated as errors
/home/dario/Sources/xen/xen/xen.git/tools/blktap2/drivers/../../../tools/Rules.mk:89:
recipe for target 'block-qcow.o' failed
make[5]: *** [block-qcow.o] Error 1
The proper behavior is to return upon allocation failure.
About what to return, 0 seems the best option, looking
at both the function and the call sites.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Backport-requested-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Jan Beulich [Thu, 17 Dec 2015 13:29:28 +0000 (14:29 +0100)]
x86/HVM: avoid reading ioreq state more than once
Otherwise, especially when the compiler chooses to translate the
switch() to a jump table, unpredictable behavior (and in the jump table
case arbitrary code execution) can result.
This is XSA-166.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: b452430a4cdfc801fa4bc391aed7522365e1deb6
master date: 2015-12-17 14:22:46 +0100
Jan Beulich [Tue, 15 Dec 2015 14:39:52 +0000 (15:39 +0100)]
VT-d: drop unneeded Ivybridge quirk workaround
We've been told by Intel that server chipsets don't need the workaround
anymore starting with Ivybridge (Xeon E5/E7 v2); the second half of the
workaround was missing anyway.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: a10307b3912e65bbdd9184ba5fe849d252b75f92
master date: 2015-12-03 15:33:10 +0100
David Vrabel [Tue, 15 Dec 2015 14:39:20 +0000 (15:39 +0100)]
evtchn: don't reuse ports that are still "busy"
When using the FIFO ABI a guest may close an event channel that is
still LINKED. If this port is reused, subsequent events may be lost
because they may become pending on the wrong queue.
This could be fixed by requiring guests to only close event channels
that are not linked. This is difficult since: a) irq cleanup in the
guest may be done in a context that cannot wait for the event to be
unlinked; b) the guest may attempt to rebind a PIRQ whose previous
close is still pending; and c) existing guests already have the
problematic behaviour.
Instead, simply check a port is not "busy" (i.e., it's not linked)
before reusing it.
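A minimal sketch of the allocation-time check (helper names illustrative):

  for ( port = 0; port_is_valid(d, port); port++ )
  {
      if ( evtchn_from_port(d, port)->state != ECS_FREE )
          continue;
      if ( evtchn_port_is_busy(d, port) )   /* still LINKED: don't reuse it yet */
          continue;
      return port;
  }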
Guests should still drain any queues for VCPUs that are being
offlined, or the port will become unusable until the VCPU is onlined
and starts processing events again.
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 78e24c269b0a4a8b864ece725e6d4209ed95dfa7
master date: 2015-12-02 15:21:46 +0100
David Vrabel [Tue, 15 Dec 2015 14:38:38 +0000 (15:38 +0100)]
x86/ept: remove unnecessary sync after resolving misconfigured entries
When using EPT, type changes are done with the following steps:
1. Set entry as invalid (misconfigured) by settings a reserved memory
type.
2. Flush all EPT and combined translations (ept_sync_domain()).
3. Fixup misconfigured entries as required (on EPT_MISCONFIG vmexits or
when explicitly setting an entry).
Since resolve_misconfig() only updates entries that were misconfigured,
there is no need to invalidate any translations since the hardware
does not cache misconfigured translations (vol 3, section 28.3.2).
Remove the unnecessary (and very expensive) ept_sync_domain() calls.
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: cea357ba4b3335ca5279ee9c00838f85575d5842
master date: 2015-12-02 15:19:53 +0100
Daniel Kiper [Tue, 15 Dec 2015 14:37:55 +0000 (15:37 +0100)]
x86/boot: check for not allowed sections before linking
Currently the check for not allowed sections is performed just after
compilation. However, if compilation succeeds and the check fails then a
second build will create xen.gz/xen.efi without any visible error.
This happens because the %.o: %.c recipe created the object file during the
first run and make does not execute this recipe during the second run. So, look
for not allowed sections before linking. This way the check will be
executed every time.
Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: d380b3559734739ae009cd3c0e9aabb5602863e2
master date: 2015-11-25 17:24:36 +0100
Dario Faggioli [Tue, 15 Dec 2015 14:36:26 +0000 (15:36 +0100)]
sched: fix locking for insert_vcpu() in credit1 and RTDS
The insert_vcpu() hook is handled with inconsistent locking.
In fact, schedule_cpu_switch() calls the hook with runqueue
lock held, while sched_move_domain() relies on the hook
implementations to take the lock themselves (and, since that
is not done in Credit1 and RTDS, such operation is not safe
in those cases).
This is fixed as follows:
- take the lock in the hook implementations, in specific
schedulers' code;
- avoid calling insert_vcpu(), for the idle vCPU, in
schedule_cpu_switch(). In fact, idle vCPUs are set to run
immediately, and the various schedulers won't insert them
in their runqueues anyway, even when explicitly asked to.
While there, still in schedule_cpu_switch(), locking with
_irq() is enough (there's no need to do *_irqsave()).
Jan Beulich [Tue, 15 Dec 2015 14:36:01 +0000 (15:36 +0100)]
VMX: fix/adjust trap injection
In the course of investigating the 4.1.6 backport issue of the XSA-156
patch I realized that #DB injection has always been broken, but with it
now getting always intercepted the problem has got worse: Documentation
clearly states that neither DR7.GD nor DebugCtl.LBR get cleared before
the intercept, so this is something we need to do before reflecting the
intercepted exception.
While adjusting this (and also with 4.1.6's strange use of
X86_EVENTTYPE_SW_EXCEPTION for #DB in mind) I further realized that
the special casing of individual vectors shouldn't be done for
software interrupts (resulting from INT $nn).
And then some code movement: Setting of CR2 for #PF can be done in the
same switch() statement (no need for a separate if()), and reading of
intr_info is better done close to the consumption of the variable
(allowing the compiler to generate better code / use fewer registers
for variables).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 81a28f14009f4d8577a81b28dd06f6828112054b
master date: 2015-11-24 12:30:31 +0100
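A condensed sketch of the injection adjustments described above; the DR7.GD
and DebugCtl.LBR bit positions are architectural, but the surrounding
structure and names are simplified assumptions:

    #include <stdint.h>

    #define DR7_GD          (1u << 13)
    #define DEBUGCTL_LBR    (1ull << 0)

    #define TRAP_debug       1
    #define TRAP_page_fault 14

    struct guest_regs {
        uint32_t dr7;
        uint64_t debugctl;
        uint64_t cr2;
    };

    static void inject_exception(struct guest_regs *g, int vector,
                                 uint64_t fault_addr)
    {
        switch ( vector )
        {
        case TRAP_debug:
            /* Hardware clears these before delivering #DB; mimic that
             * before reflecting the intercepted exception. */
            g->dr7 &= ~DR7_GD;
            g->debugctl &= ~DEBUGCTL_LBR;
            break;
        case TRAP_page_fault:
            g->cr2 = fault_addr;   /* CR2 update in the same switch */
            break;
        }
        /* ... queue the event for injection ... */
    }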
Jan Beulich [Wed, 9 Dec 2015 12:55:58 +0000 (13:55 +0100)]
memory: fix XSA-158 fix
For one the uses of domu_max_order and ptdom_max_order were swapped.
And then gcc warns about an unused result of a __must_check function
in the control part of a conditional expression when both other
expressions can be determined by the compiler to produce the same value
(see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68039), which happens
when HAS_PASSTHROUGH is undefined (i.e. for ARM on 4.4 and older).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: ff841cead287d7913901ba5c4e7628a6958b5bea
master date: 2015-12-09 13:53:13 +0100
Ian Jackson [Wed, 18 Nov 2015 15:34:54 +0000 (15:34 +0000)]
libxl: Fix bootloader-related virtual memory leak on pv build failure
The bootloader may call libxl__file_reference_map(), which mmap's the
pv_kernel and pv_ramdisk into process memory. This was only unmapped,
however, on the success path of libxl__build_pv(). If there were a
failure anywhere between libxl_bootloader.c:parse_bootloader_result()
and the end of libxl__build_pv(), the calls to
libxl__file_reference_unmap() would be skipped, leaking the mapped
virtual memory.
Ideally this would be fixed by adding the unmap calls to the
destruction path for libxl__domain_build_state. Unfortunately the
lifetime of the libxl__domain_build_state is opaque, and it doesn't
have a proper destruction path. But, the only thing in it that isn't
from the gc are these bootloader references, and they are only ever
set for one libxl__domain_build_state, the one which is
libxl__domain_create_state.build_state.
So we can clean up in the exit path from libxl__domain_create_*, which
always comes through domcreate_complete.
Remove the now-redundant unmaps in libxl__build_pv's success path.
This is XSA-160.
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Tested-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
(cherry picked from commit 59543a7cc218e9d466810409088f3015f259078c)
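Roughly, the shape of the fix looks like the sketch below; the structure and
function names are simplified stand-ins for the libxl ones, not copies of
them:

    struct file_ref { void *map; long mapsize; };

    struct build_state {
        struct file_ref pv_kernel;
        struct file_ref pv_ramdisk;
    };

    static void file_reference_unmap(struct file_ref *f)
    {
        if ( f->map )
        {
            /* munmap(f->map, f->mapsize) in the real code */
            f->map = 0;
        }
    }

    /* Runs on every exit from domain creation, success or failure, so the
     * bootloader mappings can no longer leak on an error path. */
    static void domcreate_complete(struct build_state *bs, int rc)
    {
        file_reference_unmap(&bs->pv_kernel);
        file_reference_unmap(&bs->pv_ramdisk);
        /* ... report rc to the caller ... */
        (void)rc;
    }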
Jan Beulich [Tue, 8 Dec 2015 13:09:20 +0000 (14:09 +0100)]
memory: fix XENMEM_exchange error handling
assign_pages() can fail due to the domain getting killed in parallel,
which should not result in a hypervisor crash.
Reported-by: Julien Grall <julien.grall@citrix.com>
Also delete a redundant put_gfn() - all relevant paths leading to the
"fail" label already do this (and there are also paths where it was
plain wrong). All of the put_gfn()-s got introduced by 51032ca058
("Modify naming of queries into the p2m"), including the otherwise
unneeded initializer for k (with even a kind of misleading comment -
the compiler warning could actually have served as a hint that the use
is wrong).
This is CVE-2015-8339 + CVE-2015-8340 / XSA-159.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: eedecb3cf0b2ce1ffc2eb08f3c73f88d42c382c9
master date: 2015-12-08 14:01:43 +0100
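The error-handling change can be sketched as follows, with placeholder types
and names (not the hypervisor's real signatures):

    #include <stdbool.h>

    struct domain { bool is_dying; };
    struct page_info { int unused; };

    static int assign_pages(struct domain *d, struct page_info *pg)
    {
        (void)pg;
        if ( d->is_dying )      /* domain killed in parallel */
            return -1;
        /* ... account the page to the domain ... */
        return 0;
    }

    static int exchange_chunk(struct domain *d, struct page_info *pg)
    {
        /* Before: a failure here was treated as impossible, so a dying
         * domain could crash the hypervisor.  After: the error is handled
         * and the page is given back instead. */
        if ( assign_pages(d, pg) != 0 )
        {
            /* free_domheap_pages(pg, 0) in the real code */
            return -1;
        }
        return 0;
    }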
Ian Campbell [Thu, 10 Sep 2015 13:31:34 +0000 (14:31 +0100)]
Config: Switch to unified qemu trees.
Upstream qemu is now in qemu-xen.git and the trad fork is in
qemu-xen-traditional.git.
QEMU_UPSTREAM_REVISION is currently a tag and
QEMU_TRADITIONAL_REVISION is a specific revision, so no changes are
required to those.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Conflicts:
	Config.mk
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
(cherry picked from commit 78833c04250416f1870c458309d3ac0e5cf915fd)
Jan Beulich [Tue, 10 Nov 2015 11:18:37 +0000 (12:18 +0100)]
x86/HVM: always intercept #AC and #DB
Both are benign exceptions, and both can be triggered by exception
delivery itself, so intercepting them is required to prevent a guest
from locking up a CPU (once it gets into such a loop, no other VM exits
would ever occur).
The specific scenarios:
1) #AC may be raised during exception delivery if the handler is set to
be a ring-3 one by a 32-bit guest, and the stack is misaligned.
This is CVE-2015-5307 / XSA-156.
Reported-by: Benjamin Serebrin <serebrin@google.com>
2) #DB may be raised during exception delivery when a breakpoint got
placed on a data structure involved in delivering the exception. This
can result in an endless loop when a 64-bit guest uses a non-zero IST
for the vector 1 IDT entry, but even without use of IST the time it
takes until a contributory fault would get raised (results depending
on the handler) may be quite long.
This is CVE-2015-8104 / XSA-156.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: bd2239d9fa975a1ee5bcd27c218ae042cd0a57bc
master date: 2015-11-10 12:03:08 +0100
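A minimal sketch of the bitmap change; the vector numbers are architectural,
while the update function is an assumed simplification of the VMX code:

    #include <stdint.h>

    #define TRAP_debug            1   /* #DB */
    #define TRAP_alignment_check 17   /* #AC */

    static uint32_t exception_bitmap;

    /* Always intercept #AC and #DB so a self-recursive delivery loop in
     * the guest still produces VM exits the hypervisor can act on. */
    static void update_exception_bitmap(uint32_t wanted)
    {
        exception_bitmap = wanted
                         | (1u << TRAP_debug)
                         | (1u << TRAP_alignment_check);
        /* __vmwrite(EXCEPTION_BITMAP, exception_bitmap) in the real code */
    }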
Ian Jackson [Wed, 21 Oct 2015 15:18:30 +0000 (16:18 +0100)]
libxl: adjust PoD target by memory fudge, too
PoD guests need to balloon at least as far as required by PoD, or risk
crashing. Currently they don't necessarily know what the right value
is, because our memory accounting is (at the very least) confusing.
Apply the memory limit fudge factor to the in-hypervisor PoD memory
target, too. This will increase the size of the guest's PoD cache by
the fudge factor LIBXL_MAXMEM_CONSTANT (currently 1Mby). This ensures
that even with a slightly-off balloon driver, the guest will be
stable even under memory pressure.
There are two call sites of xc_domain_set_pod_target that need fixing:
The one in libxl_set_memory_target is straightforward.
The one in xc_hvm_build_x86.c:setup_guest is more awkward. Simply
setting the PoD target differently does not work because the various
amounts of memory during domain construction no longer match up.
Instead, we adjust the guest memory target in xenstore (but only for
PoD guests).
This introduces a 1Mby discrepancy between the balloon target of a PoD
guest at boot, and the target set by an apparently-equivalent `xl
mem-set' (or similar) later. This approach is low-risk for a security
fix but we need to fix this up properly in xen.git#staging and
probably also in stable trees.
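For illustration, the adjustment amounts to adding the fudge constant to the
PoD target before programming it; the helper below is a simplified assumption
of where this happens, not the actual libxl call chain:

    #include <stdint.h>

    /* LIBXL_MAXMEM_CONSTANT is expressed in KiB in libxl. */
    #define LIBXL_MAXMEM_CONSTANT 1024   /* 1 MiB */

    static int set_pod_target_stub(uint32_t domid, uint64_t target_pages)
    {
        (void)domid; (void)target_pages;
        return 0;
    }

    static int set_memory_target(uint32_t domid, uint64_t target_memkb,
                                 int is_pod)
    {
        uint64_t pod_target_kb = target_memkb;

        if ( is_pod )
            pod_target_kb += LIBXL_MAXMEM_CONSTANT;   /* grow the PoD cache */

        /* KiB -> 4 KiB pages */
        return set_pod_target_stub(domid, pod_target_kb >> 2);
    }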
Jan Beulich [Thu, 29 Oct 2015 13:02:38 +0000 (14:02 +0100)]
x86: rate-limit logging in do_xen{oprof,pmu}_op()
Some of the sub-ops are accessible to all guests, and hence should be
rate-limited. In the xenoprof case, just like for XSA-146, include them
only in debug builds. Since the vPMU code is rather new, allow them to
be always present, but downgrade them to (rate limited) guest messages.
This is CVE-2015-7971 / XSA-152.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
master commit: 95e7415843b94c346e5ba8682665f508f220e04b
master date: 2015-10-29 13:37:19 +0100
Andrew Cooper [Thu, 29 Oct 2015 13:01:47 +0000 (14:01 +0100)]
x86/PoD: Eager sweep for zeroed pages
Based on the contents of a guest's physical address space,
p2m_pod_emergency_sweep() could degrade into a linear memcmp() from 0 to
max_gfn, which runs non-preemptibly.
As p2m_pod_emergency_sweep() runs behind the scenes in a number of contexts,
making it preemptible is not feasible.
Instead, a different approach is taken. Recently-populated pages are eagerly
checked for reclamation, which amortises the p2m_pod_emergency_sweep()
operation across each p2m_pod_demand_populate() operation.
Note that in the case that a 2M superpage can't be reclaimed as a superpage,
it is shattered if 4K pages of zeros can be reclaimed. This is unfortunate
but matches the previous behaviour, and is required to avoid regressions
(domain crash from PoD exhaustion) with VMs configured close to the limit.
This is CVE-2015-7970 / XSA-150.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
master commit: 101ce53266866144e724ed593173bc4098b300b9
master date: 2015-10-29 13:36:25 +0100
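Conceptually, the eager sweep amortises the scan as in the sketch below; the
window-based cursor and helper names are illustrative assumptions, not the
real PoD implementation:

    #include <stdbool.h>
    #include <stdint.h>

    #define RECLAIM_WINDOW 128   /* how far behind the cursor we look */

    static bool gfn_is_zeroed(uint64_t gfn) { (void)gfn; return false; }
    static void reclaim_zero_page(uint64_t gfn) { (void)gfn; }

    static uint64_t reclaim_cursor;   /* last GFN already checked */

    /* Called from each demand-populate: check only a bounded window of
     * recently populated GFNs instead of scanning 0..max_gfn in one
     * non-preemptible pass. */
    static void eager_sweep(uint64_t newly_populated_gfn)
    {
        uint64_t start = reclaim_cursor;
        uint64_t end = newly_populated_gfn;

        if ( end > start + RECLAIM_WINDOW )
            start = end - RECLAIM_WINDOW;

        for ( uint64_t gfn = start; gfn < end; gfn++ )
            if ( gfn_is_zeroed(gfn) )
                reclaim_zero_page(gfn);

        reclaim_cursor = end;
    }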
Jan Beulich [Thu, 29 Oct 2015 12:59:03 +0000 (13:59 +0100)]
x86: guard against undue super page PTE creation
When optional super page support got added (commit bd1cd81d64 "x86: PV
support for hugepages"), two adjustments were missed: mod_l2_entry()
needs to consider the PSE and RW bits when deciding whether to use the
fast path, and the PSE bit must not be removed from L2_DISALLOW_MASK
unconditionally.
This is CVE-2015-7835 / XSA-148.
Reported-by: "栾尚聪(好风)" <shangcong.lsc@alibaba-inc.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: fe360c90ea13f309ef78810f1a2b92f2ae3b30b8
master date: 2015-10-29 13:35:07 +0100
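A condensed sketch of the two missed adjustments; the flag values match the
x86 PTE layout, but the functions are simplified assumptions rather than
Xen's mod_l2_entry():

    #include <stdbool.h>
    #include <stdint.h>

    #define _PAGE_RW  (1u << 1)
    #define _PAGE_PSE (1u << 7)

    static bool opt_allow_superpage;   /* PV superpages off by default */

    /* The flags-only "fast path" is only safe if neither PSE nor RW
     * differs between the old and new L2 entry. */
    static bool can_use_fast_path(uint64_t old_l2e, uint64_t new_l2e)
    {
        return !((old_l2e ^ new_l2e) & (_PAGE_PSE | _PAGE_RW));
    }

    /* PSE stays in the disallow mask unless superpage support is on. */
    static uint64_t l2_disallow_mask(void)
    {
        uint64_t mask = 0;   /* other disallowed flag bits omitted */
        if ( !opt_allow_superpage )
            mask |= _PAGE_PSE;
        return mask;
    }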
Ian Campbell [Thu, 29 Oct 2015 12:58:38 +0000 (13:58 +0100)]
arm: handle races between relinquish_memory and free_domheap_pages
Primarily this means XENMEM_decrease_reservation from a toolstack
domain.
Unlike x86, we have no requirement right now to queue such pages onto
a separate list; if we hit this race then the other code has already
fully accepted responsibility for freeing this page, and therefore
there is nothing more for relinquish_memory to do.
This is CVE-2015-7814 / XSA-147.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: Julien Grall <julien.grall@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 1ef01396fdff88b1c3331a09ca5c69619b90f4ea
master date: 2015-10-29 13:34:17 +0100