do_tmem_destroy_pool() checks whether pools == NULL. But pools is a fixed
array, so the comparison can never be true.
Clang 3.5 will fail to compile xen/common/tmem.c with the following error:
tmem.c:1848:18: error: comparison of array 'client->pools' equal to a null
pointer is always false [-Werror,-Wtautological-pointer-compare]
if ( client->pools == NULL )
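As a standalone illustration (not the actual tmem code), a struct with a fixed
array member shows why such a comparison can never be true:

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal sketch: a struct with a fixed array member, analogous to
     * client->pools[]. */
    struct client {
        void *pools[16];        /* array member, not a pointer */
    };

    int main(void)
    {
        struct client c = { { NULL } };

        /* &c.pools[0] is always a valid address inside 'c', so this comparison
         * is always false; Clang 3.5 rejects it under -Werror with
         * -Wtautological-pointer-compare. */
        if (c.pools == NULL)
            printf("never reached\n");

        /* What can meaningfully be checked are the individual slots. */
        if (c.pools[0] == NULL)
            printf("slot 0 is unused\n");

        return 0;
    }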
Jan Beulich [Mon, 12 May 2014 15:43:00 +0000 (17:43 +0200)]
x86: fix guest CPUID handling
The way XEN_DOMCTL_set_cpuid got handled so far allowed for surprises
to the caller. With this set of operations
- set leaf A (using array index 0)
- set leaf B (using array index 1)
- clear leaf A (clearing array index 0)
- set leaf B (using array index 0)
- clear leaf B (clearing array index 0)
the entry for leaf B at array index 1 would still be in place, while
the caller would expect it to be cleared.
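A hypothetical miniature of such a "first free or matching slot" store
(illustrative only, not the hypervisor's actual data structure or algorithm)
reproduces the surprise:

    #include <stdbool.h>
    #include <stdio.h>

    struct ent { unsigned int leaf; bool valid; };
    static struct ent ents[4];

    static int find_slot(unsigned int leaf)
    {
        for (int i = 0; i < 4; i++)
            if (!ents[i].valid || ents[i].leaf == leaf)
                return i;       /* stops at the first free *or* matching slot */
        return -1;
    }

    static void set_leaf(unsigned int leaf)
    {
        ents[find_slot(leaf)] = (struct ent){ .leaf = leaf, .valid = true };
    }

    static void clear_leaf(unsigned int leaf)
    {
        ents[find_slot(leaf)].valid = false;
    }

    int main(void)
    {
        set_leaf(0xA);     /* lands in index 0 */
        set_leaf(0xB);     /* lands in index 1 */
        clear_leaf(0xA);   /* index 0 becomes free */
        set_leaf(0xB);     /* lands in index 0, duplicating index 1 */
        clear_leaf(0xB);   /* clears index 0 only */

        for (int i = 0; i < 4; i++)
            if (ents[i].valid)
                printf("stale entry for leaf %#x at index %d\n",
                       ents[i].leaf, i);
        return 0;
    }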
While looking at the use sites of d->arch.cpuid[] I also noticed that
the allocation of the array needlessly uses the zeroing form - the
relevant fields of the array elements get set in a loop immediately
following the allocation.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: 4c0ff6bd54b5a67f8f820f9ed0a89a79f1a26a1c
master date: 2014-05-02 12:09:03 +0200
Paul Durrant [Mon, 12 May 2014 15:42:33 +0000 (17:42 +0200)]
hvm_set_ioreq_page() releases wrong page in error path
The function calls prepare_ring_for_helper() to acquire a mapping for the
given gmfn, then checks (under lock) to see if the ioreq page is already
set up but, if it is, the function then releases the in-use ioreq page
mapping on the error path rather than the one it just acquired. This patch
fixes this bug.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 16e2a7596e9fc86881c73cef57602b2c88155528
master date: 2014-05-02 11:46:32 +0200
Jan Beulich [Mon, 12 May 2014 15:35:18 +0000 (17:35 +0200)]
VT-d: suppress UR signaling for desktop chipsets
Unsupported Requests can be signaled for malformed writes to the MSI
address region, e.g. due to buggy or malicious DMA set up to that
region. These should normally result in IOMMU faults, but don't on
the desktop chipsets dealt with here.
This is CVE-2013-3495 / XSA-59.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Don Dugger <donald.d.dugger@intel.com> Acked-by: Tim Deegan <tim@xen.org> Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
master commit: d6cb14b34ffc2a830022d059f1aa22bf19dcf55f
master date: 2014-04-25 12:12:38 +0200
Jan Beulich [Mon, 12 May 2014 15:33:41 +0000 (17:33 +0200)]
VT-d: suppress UR signaling for server chipsets
Unsupported Requests can be signaled for malformed writes to the MSI
address region, e.g. due to buggy or malicious DMA set up to that
region. These should normally result in IOMMU faults, but don't on
the server chipsets dealt with here.
IDs 0xe00, 0xe01, and 0xe04 ... 0xe0b (Ivytown) aren't needed here -
Intel confirmed the issue to be fixed in hardware there.
This is CVE-2013-3495 / XSA-59.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Don Dugger <donald.d.dugger@intel.com> Acked-by: Tim Deegan <tim@xen.org> Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
master commit: d061d200eb92bcb1d86f9b55c6de73e35ce63fdf
master date: 2014-04-25 12:11:55 +0200
Jan Beulich [Tue, 29 Apr 2014 13:31:28 +0000 (15:31 +0200)]
x86/HVM: restrict HVMOP_set_mem_type
Permitting arbitrary type changes here has the potential of creating
present P2M (and hence EPT/NPT/IOMMU) entries pointing to an invalid
MFN (INVALID_MFN truncated to the respective hardware structure field's
width). This would become a problem at the latest when something real sat
at the end of the physical address space; I suspect, though, that
other things might break with such bogus entries.
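For illustration, a small standalone sketch (assuming a 40-bit wide frame
number field, as with a 52-bit physical address space) shows how an all-ones
invalid marker truncates into a plausible-looking frame number:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hardware paging-structure entries only have room for a frame
         * number field of limited width, so storing an "invalid" all-ones
         * marker into such a field silently truncates it into something
         * that looks like a real, present frame. */
        uint64_t invalid_mfn = ~UINT64_C(0);
        uint64_t field = invalid_mfn & ((UINT64_C(1) << 40) - 1);

        printf("marker %#" PRIx64 " truncated to %#" PRIx64 "\n",
               invalid_mfn, field);
        return 0;
    }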
Along with that drop a bogus (and otherwise becoming stale) log
message.
Afaict the similar operation in p2m_set_mem_access() is safe.
This is XSA-92.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: 83bb5eb4d340acebf27b34108fb1dae062146a68
master date: 2014-04-29 15:11:31 +0200
Jan Beulich [Wed, 9 Apr 2014 09:52:21 +0000 (11:52 +0200)]
VMX: fix PAT value seen by guest
The XSA-60 fixes introduced a window during which the guest PAT gets
forced to all zeros. This shouldn't be visible to the guest. Therefore
we need to intercept PAT MSR accesses during that time period.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
master commit: fce79f8ce91dc45f3a4d699ee67c49e6cbeb1197
master date: 2014-04-01 16:49:18 +0200
CPUID[80000008].EAX[23:16] have been given the meaning of the guest
physical address restriction (in case it needs to be smaller than the
host's), hence we need to mirror that into vCPUID[80000008].EAX[7:0].
Enforce a lower limit at the same time, as well as a fixed value for
the virtual address bits, and zero for the guest physical address ones.
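A rough sketch of that adjustment (the concrete limits - 36 physical and 48
virtual address bits - are illustrative assumptions here; the real policy
lives in hvm_cpuid()):

    #include <inttypes.h>
    #include <stdio.h>

    /* EAX[7:0] = physical address width, EAX[15:8] = virtual address width,
     * EAX[23:16] = guest physical address restriction (when non-zero). */
    static uint32_t adjust_80000008_eax(uint32_t host_eax)
    {
        uint32_t paddr_bits = host_eax & 0xff;
        uint32_t g_paddr_bits = (host_eax >> 16) & 0xff;

        /* Mirror a non-zero guest physical restriction into EAX[7:0] ... */
        if (g_paddr_bits)
            paddr_bits = g_paddr_bits;

        /* ... enforce a lower limit ... */
        if (paddr_bits < 36)
            paddr_bits = 36;

        /* ... use a fixed virtual address width and zero the guest physical
         * address field. */
        return (48u << 8) | paddr_bits;
    }

    int main(void)
    {
        /* host reports 48/48 with a 40-bit guest physical restriction */
        printf("%#" PRIx32 "\n", adjust_80000008_eax(0x00283030));
        return 0;
    }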
In order for the vMTRR code to see these overrides we need to make it
call hvm_cpuid() instead of domain_cpuid(), which in turn requires
special casing (and relaxing) the controlling domain.
This additionally should hide an ordering problem in the tools: Both
xend and xl appear to be restoring a guest from its image before
setting up the CPUID policy in the hypervisor, resulting in
domain_cpuid() returning all zeros and hence the check in
mtrr_var_range_msr_set() failing if the guest previously had more than
the minimum 36 physical address bits.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org>
master commit: ef437690af8b75e6758dce77af75a22b63982883
master date: 2014-03-28 13:33:34 +0100
Jan Beulich [Wed, 9 Apr 2014 09:39:08 +0000 (11:39 +0200)]
x86: fix determination of bit count for struct domain allocations
We can't just add in the hole shift value, as the hole may be at or
above the 44-bit boundary. Instead we need to determine the total bit
count until reaching 32 significant (not squashed out) bits in PFN
representations.
Jan Beulich [Wed, 9 Apr 2014 09:38:20 +0000 (11:38 +0200)]
x86/Intel: work around Xeon 7400 series erratum AAI65
Linux commit 40e2d7f9b5dae048789c64672bf3027fbb663ffa ("x86 idle:
Repair large-server 50-watt idle-power regression") tells us that this
applies not just to the named Xeon 7400 series, but also NHM-EX and
WSM-EX; sadly Intel's documentation is so badly searchable that I
wasn't able to locate the respective errata (and hence can't quote
their numbers here).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
master commit: 96d1b237ae9b2f2718bb1c59820701f17d3d86e0
master date: 2014-03-17 16:47:22 +0100
Jan Beulich [Wed, 9 Apr 2014 09:37:15 +0000 (11:37 +0200)]
VT-d: fix RMRR handling
Removing mapped RMRR tracking structures in dma_pte_clear_one() is
wrong for two reasons: First, these regions may cover more than a
single page. And second, multiple devices (and hence multiple devices
assigned to any particular guest) may share a single RMRR (whether
assigning such devices to distinct guests is a safe thing to do is
another question).
Therefore move the removal of the tracking structures into the
counterpart function to the one doing the insertion -
intel_iommu_remove_device(), and add a reference count to the tracking
structure.
Further, for the handling of the mappings of the respective memory
regions to be correct, RMRRs must not overlap. Add a respective check
to acpi_parse_one_rmrr().
And finally, with all of this being VT-d specific, move the cleanup
of the list as well as the structure type definition to where they belong -
into VT-d specific rather than IOMMU generic code.
Note that this doesn't address yet another issue associated with RMRR
handling: The purpose of the RMRRs as well as the way the respective
IOMMU page table mappings get inserted both suggest that these regions
would need to be marked E820_RESERVED in all (HVM?) guests' memory
maps, yet nothing like this is being done in hvmloader. (For PV guests
this would also seem to be necessary, but may conflict with PV guests
possibly assuming there to be just a single E820 entry representing all
of its RAM.)
Samuel Thibault [Fri, 21 Mar 2014 01:56:56 +0000 (02:56 +0100)]
PV-GRUB: fix blk access at end of disk
GRUB usually loads a whole disk track at a time, even if that means going
beyond the end of the disk. We thus have to gracefully return an error,
instead of letting blkfront panic.
Joby Poriyath [Tue, 4 Feb 2014 18:10:35 +0000 (18:10 +0000)]
xen/pygrub: grub2/grub.cfg from RHEL 7 has new commands in menuentry
menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
instead of linux and initrd. Because of this, RHEL 7 (beta) guests failed to
boot after installation.
In addition, RHEL 7 menu entries have two different single-quote
delimited strings on the same line, and the greedy grouping used for
menuentry parsing captures both strings as well as the options in between.
Ian Jackson [Mon, 24 Feb 2014 14:19:15 +0000 (14:19 +0000)]
libxl: Fix carefd lock leak in save callout
If libxl_pipe fails we leave the carefd locked, which translates to
the atfork lock remaining held. This would probably cause the process
to deadlock shortly afterwards.
Of course libxl_pipe is very unlikely to fail unless things are
already going very badly. This bug has not been observed anywhere as
far as we are aware.
Ian Jackson [Mon, 24 Feb 2014 14:19:14 +0000 (14:19 +0000)]
libxl: Hold the atfork lock while closing carefd
This avoids the process being forked while a carefd is recorded in the
list but the actual fd has been closed. If that happened, a
subsequent libxl_postfork_child_noexec would attempt to close the fd
again. If we are lucky that results in a harmless warning; but if we
are unlucky the fd number has been reused and we close an unrelated
fd.
This race has not been observed anywhere as far as we are aware.
Ian Campbell [Fri, 14 Sep 2012 09:02:50 +0000 (10:02 +0100)]
xl: do not leak cpupool names.
Valgrind reports:
==3076== 7 bytes in 1 blocks are definitely lost in loss record 1 of 1
==3076== at 0x402458C: malloc (vg_replace_malloc.c:270)
==3076== by 0x406F86D: libxl_cpupoolid_to_name (libxl_utils.c:102)
==3076== by 0x8058742: parse_config_data (xl_cmdimpl.c:639)
==3076== by 0x805BD56: create_domain (xl_cmdimpl.c:1838)
==3076== by 0x805DAED: main_create (xl_cmdimpl.c:3903)
==3076== by 0x804D39D: main (xl.c:285)
And indeed there are several places where xl uses
libxl_cpupoolid_to_name as a boolean to test if the pool name is
valid and leaks the name if it is. Introduce an is_valid helper and
use that instead.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Acked-by: Juergen Gross<juergen.gross@ts.fujitsu.com> Committed-by: Ian Campbell <ian.campbell@citrix.com>
(cherry picked from commit 10a194b1c57de7ddc9d4fce07e01f2cd7d0ca26a)
Jan Beulich [Fri, 14 Mar 2014 16:52:27 +0000 (17:52 +0100)]
x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
It is inconsistent to depend on iommu_enabled alone: For a guest
without devices passed through to it, it is of no concern whether the
IOMMU is enabled.
There's one rather special case to take care of: VMX code marks the
LAPIC access page as MMIO. The added assertion needs to take this into
consideration, and the subsequent handling of the direct MMIO case was
inconsistent too: That page would have been WB in the absence of an
IOMMU, but UC in its presence, while in fact the cachability of
this page is entirely unrelated to an IOMMU being in use.
Jan Beulich [Fri, 14 Mar 2014 16:51:39 +0000 (17:51 +0100)]
x86/HVM: fix memory type merging in epte_get_entry_emt()
Using the minimum numeric value of the guest and host specified memory
types is too simplistic - it works correctly only for a subset of
types. In particular, the WT/WP combination needs conversion to UC
when the two types conflict.
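A simplified stand-in for such a combiner (not Xen's epte_get_entry_emt();
only the type encodings are the architectural MTRR values) shows where the
numeric minimum goes wrong:

    #include <stdio.h>

    #define MT_UC 0
    #define MT_WC 1
    #define MT_WT 4
    #define MT_WP 5
    #define MT_WB 6

    static unsigned int combine(unsigned int a, unsigned int b)
    {
        if (a == b)
            return a;
        if (a == MT_UC || b == MT_UC)
            return MT_UC;           /* UC combined with anything is UC */
        if (a == MT_WB)
            return b;               /* WB yields to the stricter type */
        if (b == MT_WB)
            return a;
        return MT_UC;               /* conflicting pairs such as WT/WP */
    }

    int main(void)
    {
        unsigned int g = MT_WT, h = MT_WP;
        printf("numeric minimum: %u, proper combination: %u\n",
               g < h ? g : h, combine(g, h));   /* 4 (WT) vs 0 (UC) */
        return 0;
    }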
Dongxiao Xu [Fri, 14 Mar 2014 16:51:07 +0000 (17:51 +0100)]
x86/hvm: refine the judgment on IDENT_PT for EMT
When trying to get the EPT EMT type, the judgment on
HVM_PARAM_IDENT_PT is not correct: it always returns WB type if
the parameter is not set. Remove the related code.
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
We can't fully drop the dependency yet, but we should certainly avoid
overriding cases already properly handled. The reason for this is that
the guest setting up its MTRRs happens _after_ the EPT tables got
already constructed, and no code is in place to propagate this to the
EPT code. Without this check we're forcing the guest to run with all of
its memory uncachable until something happens to re-write every single
EPT entry. But of course this has to be just a temporary solution.
In the same spirit we should defer the "very early" (when the guest is
still being constructed and has no vCPU yet) override to the last
possible point.
Jan Beulich [Fri, 14 Mar 2014 16:50:03 +0000 (17:50 +0100)]
IOMMU: generalize and correct softirq processing during Dom0 device setup
c/s 21039:95f5a4ce8f24 ("VT-d: reduce default verbosity") having put a
call to process_pending_softirqs() in VT-d's domain_context_mapping()
was wrong in two ways: For one we shouldn't be doing this when setting
up a device during DomU assignment. And then - I didn't check whether
that was the case already back then - we shouldn't call that function
with the pcidevs_lock (or in fact any spin lock) held.
Move the "preemption" into generic code, at once dealing with further
actual (too much output elsewhere - particularly on systems with very
many host bridge like devices - having been observed to still cause the
watchdog to trigger when enabled) and potential (other IOMMU code may
also end up being too verbose) issues.
Do the "preemption" once per device actually being set up when in
verbose mode, and once per bus otherwise.
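A minimal sketch of where the calls end up, with stub helpers standing in for
the real PCI/IOMMU setup routines:

    #include <stdbool.h>

    static bool verbose;

    static bool device_present(unsigned int bus, unsigned int devfn)
    { (void)bus; (void)devfn; return false; }
    static void setup_one_device(unsigned int bus, unsigned int devfn)
    { (void)bus; (void)devfn; }
    static void process_pending_softirqs(void) { }

    static void setup_dom0_devices(void)
    {
        for (unsigned int bus = 0; bus < 256; bus++) {
            for (unsigned int devfn = 0; devfn < 256; devfn++) {
                if (!device_present(bus, devfn))
                    continue;
                setup_one_device(bus, devfn);
                if (verbose)
                    process_pending_softirqs();  /* once per device when verbose */
            }
            if (!verbose)
                process_pending_softirqs();      /* otherwise once per bus */
        }
    }

    int main(void)
    {
        setup_dom0_devices();
        return 0;
    }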
Note that dropping pcidevs_lock around the process_pending_softirqs()
invocation is specifically not a problem here: We're in an __init
function and aren't racing with potential additions/removals of PCI
devices. Not acquiring the lock in setup_dom0_pci_devices() otoh is not
an option, as there are too many places that assert the lock being
held.
Andrew Cooper [Fri, 14 Mar 2014 16:45:52 +0000 (17:45 +0100)]
x86/mce: Reduce boot-time logspam
When booting with "no-mce", the user does not need to be told that "MCE
support [was] disabled by bootparam" for each cpu. Furthermore, a file:line
reference is unnecessary.
Frediano Ziglio [Fri, 14 Mar 2014 16:44:19 +0000 (17:44 +0100)]
x86/MCE: Fix race condition in mctelem_reserve
These lines (in mctelem_reserve)

    newhead = oldhead->mcte_next;
    if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After reading the newhead pointer, another flow (thread or
recursive invocation) can change the whole list yet leave the head with the
same value. oldhead is then still equal to *freelp, but the new head being
installed may point to any element, even one already in use.
This patch instead uses a bit array and atomic bit operations.
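A minimal sketch of the bitmap-reservation idea, with C11 atomics standing in
for Xen's test_and_set_bit()/clear_bit() (not the actual mctelem code):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NENT 64

    struct entry { int data; };

    static struct entry entries[NENT];
    static _Atomic uint64_t used;    /* one "in use" bit per entry */

    /* Claim a free entry by atomically setting its bit.  No list head is
     * exchanged, so there is no pointer for an ABA race to corrupt. */
    static struct entry *reserve(void)
    {
        for (size_t i = 0; i < NENT; i++) {
            uint64_t bit = UINT64_C(1) << i;

            if (!(atomic_fetch_or(&used, bit) & bit))
                return &entries[i];  /* we are the ones who set the bit */
        }
        return NULL;                 /* all entries currently reserved */
    }

    static void release(struct entry *e)
    {
        atomic_fetch_and(&used, ~(UINT64_C(1) << (size_t)(e - entries)));
    }

    int main(void)
    {
        struct entry *e = reserve();
        if (e)
            release(e);
        return 0;
    }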
Use __func__ to ensure the name remains correct in the future.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Ian Campbell <Ian.Campbell@citrix.com> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
(cherry picked from commit 1671cdeac7da663fb2963f3e587fa279dcd0238b)
Jan Beulich [Thu, 20 Feb 2014 07:41:22 +0000 (08:41 +0100)]
x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
Since 64-bit PV uses separate kernel and user mode page tables, kernel
addresses (as usually provided via VCPUOP_register_runstate_memory_area)
aren't necessarily accessible when the respective updating occurs. Add
logic for toggle_guest_mode() to take care of this (if necessary) the
next time the vCPU switches to kernel mode.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
master date: 2014-01-23 10:30:08 +0100
Andrew Cooper [Thu, 20 Feb 2014 07:40:24 +0000 (08:40 +0100)]
x86-64/percpu: Force INVALID_PERCPU_AREA into the non-canonical address region
This causes accidental uses of per_cpu() on a pcpu with an INVALID_PERCPU_AREA
to result in a #GP fault for attempting to access the middle of the
non-canonical virtual address region.
This is preferable to the current behaviour, where incorrect use of per_cpu()
will result in an effective NULL structure dereference, which has security
implications in the context of PV guests.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
master commit: 7cfb0053629c4dd1a6f01dc43cca7c0c25b8b7bf
master date: 2013-10-04 12:24:34 +0200
Jan Beulich [Thu, 13 Feb 2014 09:21:42 +0000 (10:21 +0100)]
flask: check permissions first thing in flask_security_set_bool()
Nothing else should be done if the caller isn't permitted to set
boolean values.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: ebe867052e0f782139147015c4e91b37aa5e68f1
master date: 2014-02-11 11:14:10 +0100
Jan Beulich [Thu, 13 Feb 2014 09:21:02 +0000 (10:21 +0100)]
flask: fix error propagation from flask_security_set_bool()
The function should return an error when flask_security_make_bools()
fails as well as when the input ID is out of range.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: 31f3620be0e3158c205a3669135f9c4bfa40b1c7
master date: 2014-02-11 11:13:22 +0100
Jan Beulich [Thu, 13 Feb 2014 09:19:20 +0000 (10:19 +0100)]
flask: fix memory leaks
Plus, in the case of security_preserve_bools(), prevent double freeing
in the case of security_get_bools() failing.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: 57c9f2caf05de41913b3e4eb48c0c3ad6c18dd3f
master date: 2014-02-11 11:11:48 +0100
Jan Beulich [Thu, 13 Feb 2014 09:18:13 +0000 (10:18 +0100)]
AMD IOMMU: fail if there is no southbridge IO-APIC
... but interrupt remapping is requested (with per-device remapping
tables). Without it, the timer interrupt usually does not work.
Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
Roedel <joerg.roedel@amd.com>.
Reported-by: Eric Houby <ehouby@yahoo.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Eric Houby <ehouby@yahoo.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
master commit: 06bbcaf48d09c18a41c482866941ddd5d2846b44
master date: 2014-02-10 10:05:24 +0100
x86/AMD: apply workaround for AMD F16h erratum 792
The workaround for the erratum will only be in BIOSes spun from Jan 2014
onwards, but initial production parts already shipped in 2013.
Since there is a coverage hole, we should carry this fix
in software in case the BIOS does not do the right thing or someone
is using an old BIOS.
Description:
The processor does not ensure the DRAM scrub read/write sequence is atomic wrt
accesses to the CC6 save state area. Therefore, if a concurrent scrub
read/write access is to the same address, the entry may appear as if it is
not written. This quirk applies to Fam16h models 00h-0Fh.
See the "Revision Guide" for AMD F16h models 00h-0Fh, document 51810 rev.
3.04, Nov 2013.
Equivalent Linux patch link:
http://marc.info/?l=linux-kernel&m=139066012217149&w=2
Tested the patch on Fam16h server platform and it works fine.
Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Corrected checking for boot CPU. Made warning message conditional.
Compacted warning message text. Moved comment to commit message.
Jan Beulich [Thu, 13 Feb 2014 09:16:13 +0000 (10:16 +0100)]
x86/domctl: don't ignore errors from vmce_restore_vcpu()
What started out as a simple cleanup patch (eliminating the redundant
check of domctl->cmd before copying back the output data) revealed a
bug in the handling of XEN_DOMCTL_set_ext_vcpucontext.
Fix this, retaining the cleanup.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: af172d655c3900822d1f710ac13ee38ee9d482d2
master date: 2014-02-04 09:22:12 +0100
Andrew Cooper [Wed, 22 Jan 2014 17:47:21 +0000 (17:47 +0000)]
libxc: Fix out-of-memory error handling in xc_cpupool_getinfo()
Avoid freeing info then returning it to the caller.
This is XSA-88.
Coverity-ID: 1056192 Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit d883c179a74111a6804baf8cb8224235242a88fc)
libvchan: Fix handling of invalid ring buffer indices
The remote (hostile) process can set ring buffer indices to any value
at any time. If that happens, it is possible to get "buffer space"
(either for writing data, or ready for reading) negative or greater
than buffer size. This will end up with buffer overflow in the second
memcpy inside of do_send/do_recv.
Fix this by introducing new available bytes accessor functions
raw_get_data_ready and raw_get_buffer_space which are robust against
mad ring states, and only return sanitised values.
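A minimal sketch of such an accessor (placeholder ring state instead of
libvchan's rd_*/wr_* macros; clamping to the ring size in the bad case is just
one plausible sanitisation):

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder ring state: in libvchan these indices live in shared
     * memory and can be changed by the (possibly hostile) peer at any time. */
    static uint32_t prod, cons;
    static const uint32_t ring_size = 1024;  /* a power of two, fixed at setup */

    /* Robust "bytes ready" accessor: do the subtraction in unsigned
     * arithmetic, then refuse to return anything that is not a plausible
     * ring state.  A mad peer can at worst make us see a full ring, never an
     * out-of-range count that could later overflow a memcpy(). */
    static int get_data_ready(void)
    {
        uint32_t ready = prod - cons;   /* "negative" wraps to a huge value */

        if (ready > ring_size)
            return ring_size;           /* clamp to an in-range value */
        return (int)ready;
    }

    int main(void)
    {
        prod = 10; cons = 4;
        printf("%d\n", get_data_ready());   /* 6 */
        cons = 2000;                        /* peer corrupts its index */
        printf("%d\n", get_data_ready());   /* clamped to 1024 */
        return 0;
    }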
Proof sketch of correctness:
Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
functions, and in do_send and do_recv.
The raw available bytes functions do unsigned arithmetic on the
returned values. If the result is "negative" or too big it will be
>ring_size (since we used unsigned arithmetic). Otherwise the result
is a positive in-range value representing a reasonable ring state, in
which case we can safely convert it to int (as the rest of the code
expects).
do_send and do_recv immediately mask the ring index value with the
ring size. The result is always going to be plausible. If the ring
state has become mad, the worst case is that our behaviour is
inconsistent with the peer's ring pointer. I.e. we read or write to
arguably-incorrect parts of the ring - but always parts of the ring.
And of course if a peer misoperates the ring they can achieve this
effect anyway.
So the security problem is fixed.
This is XSA-86.
(The patch is essentially Ian Jackson's work, although parts of the
commit message are by Marek.)
Signed-off-by: Marek Marczykowski-GĂ³recki <marmarek@invisiblethingslab.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
master date: 2014-02-06 16:44:41 +0100
Jan Beulich [Thu, 6 Feb 2014 16:36:40 +0000 (17:36 +0100)]
flask: fix reading strings from guest memory
Since the string size is being specified by the guest, we must range
check it properly before doing allocations based on it. While for the
two cases that are exposed only to trusted guests (via policy
restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
gets enforced based on the longest name across all boolean settings.
This is XSA-84.
Reported-by: Matthew Daley <mattd@bugfuzz.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
master date: 2014-02-06 16:33:50 +0100
Frediano Ziglio [Thu, 30 Jan 2014 08:01:01 +0000 (09:01 +0100)]
mce: fix race condition in mctelem_xchg_head
The function (mctelem_xchg_head()) used to exchange mce telemetry
list heads is racy. It may write to the head twice, with the second
write linking to an element in the wrong state.
Consider two threads: T1 inserting an element on the committed list, and T2
trying to consume it.
1. T1 starts inserting an element (A) and sets the prev pointer (mcte_prev).
2. T1 is interrupted after the cmpxchg succeeded.
3. T2 gets the list, changes element A and updates the commit list
head.
4. T1 resumes, reads the prev pointer again and compares it with the result
of the cmpxchg (which succeeded), but in the meantime prev has changed
in memory.
5. T1 thinks the cmpxchg failed and goes around the loop again,
linking the head to A again.
To solve the race, use a temporary variable for the prev pointer.
*linkp (which points to a field in the element) must be updated before
the cmpxchg(), since after a successful cmpxchg the element might be
immediately removed and reinitialized.
The wmb() prior to the cmpxchgptr() call is not necessary since that call is
already a full memory barrier. This wmb() is thus removed.
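A sketch of the corrected exchange pattern (C11 atomics instead of Xen's
cmpxchgptr(); not the actual mctelem code):

    #include <stdatomic.h>

    struct telem { struct telem *next; int payload; };

    /* Keep the previously-read head in a local, update the element's link
     * field before the cmpxchg, and compare against that local rather than
     * re-reading memory. */
    static void push(_Atomic(struct telem *) *headp, struct telem *e)
    {
        struct telem *old = atomic_load(headp);

        do {
            /* Link before publishing: after a successful exchange the
             * element may be consumed and reinitialised at once, so it must
             * not be touched afterwards. */
            e->next = old;
        } while (!atomic_compare_exchange_weak(headp, &old, e));
    }

    int main(void)
    {
        static _Atomic(struct telem *) head;
        static struct telem t;

        push(&head, &t);
        return 0;
    }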
Andrew Cooper [Thu, 30 Jan 2014 08:00:09 +0000 (09:00 +0100)]
dbg_rw_guest_mem: need to call put_gfn in error path
Using a 1G hvm domU (in grub) and gdbsx:
(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB. Attempting to continue with the default i8086 settings.
The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b
Will reproduce this bug.
With a debug=y build you will get:
Assertion '!preempt_count()' failed at preempt.c:37
For a debug=n build you will get a dom0 VCPU hung (at some point) in:
Andrew Cooper [Fri, 17 Jan 2014 15:42:37 +0000 (16:42 +0100)]
kexec: prevent deadlock on reentry to the crash path
In some cases, such as suffering a queued-invalidation timeout while
performing an iommu_crash_shutdown(), Xen can end up reentering the crash
path. Previously, this would result in a deadlock in one_cpu_only(), as the
test_and_set_bit() would fail.
The crash path is not reentrant, and even if it could be made to be so, it is
almost certain that we would fall over the same reentry condition again.
The new code can distinguish a reentry case from multiple cpus racing down the
crash path. In the case that a reentry is detected, return back out to the
nested panic() call, which will maybe_reboot() on our behalf. This requires a
bit of return plumbing back up to kexec_crash().
While fixing this deadlock, also fix up a minor niggle seen recently in a
XenServer crash report. The report was from a Bank 8 MCE which had managed
to crash on all cpus at once. The result was a lot of stack traces with cpus
in kexec_common_shutdown(), which was in fact the inlined version of
one_cpu_only(). The kexec crash path is not a hotpath, so we can easily
afford to prevent inlining for the sake of clarity in the stack traces.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
master date: 2013-11-27 15:13:48 +0100
Paul Durrant [Fri, 17 Jan 2014 15:41:38 +0000 (16:41 +0100)]
x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
Intercepting this MSR is pointless - The swapgs instruction does not cause a
vmexit, so the cached result of this is potentially stale after the next guest
instruction. It is correctly saved and restored on vcpu context switch.
Furthermore, 64bit Windows writes to this MSR on every thread context switch,
so interception causes a substantial performance hit.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Jun Nakajima <jun.nakajima@intel.com>
master commit: a82e98d473fd212316ea5aa078a7588324b020e5
master date: 2013-11-15 11:02:17 +0100
Andrew Cooper [Fri, 17 Jan 2014 15:39:08 +0000 (16:39 +0100)]
x86/ats: Fix parsing of 'ats' command line option
This is really a boolean_param() hidden inside a hand-coded attempt to
replicate boolean_param(), which misses the 'no-' prefix semantics
expected with Xen boolean parameters.
AMD/IOMMU: fix infinite loop due to ivrs_bdf_entries larger than 16-bit value
Certain AMD systems could have up to 0x10000 ivrs_bdf_entries.
However, the loop variable (bdf) is declared as u16, which causes an
infinite loop when parsing an IOMMU event log with an IO_PAGE_FAULT event.
This patch changes the variable to u32 instead.
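A standalone illustration of the wraparound (the safety cap only exists so
the example terminates):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t ivrs_bdf_entries = 0x10000;
        uint32_t iterations = 0;

        /* Buggy form: a 16-bit counter can never reach 0x10000, so the
         * condition is always true and the counter just wraps. */
        for (uint16_t bdf = 0; bdf < ivrs_bdf_entries; bdf++) {
            if (++iterations > 0x20000) {
                printf("16-bit counter never reaches 0x10000 - infinite loop\n");
                break;
            }
        }

        /* Fixed form: a 32-bit counter terminates as expected. */
        iterations = 0;
        for (uint32_t bdf = 0; bdf < ivrs_bdf_entries; bdf++)
            iterations++;
        printf("32-bit counter: %u iterations\n", iterations);

        return 0;
    }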
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 81b1c7de2339d2788352b162057e70130803f3cf
master date: 2014-01-07 15:09:42 +0100
Andrew Cooper [Mon, 13 Jan 2014 15:00:28 +0000 (16:00 +0100)]
VTD/DMAR: free() correct pointer on error from acpi_parse_one_atsr()
Free the allocated structure rather than the ACPI table ATS entry.
On further analysis, there is another memory leak. acpi_parse_dev_scope()
could allocate scope->devices, and return with -ENOMEM. All callers of
acpi_parse_dev_scope() would then free the underlying structure, losing the
pointer.
These errors can only actually be reached through acpi_parse_dev_scope()
(which passes type = DMAR_TYPE), but I am quite surprised Coverity didn't spot
it.
Coverity-ID: 1146949 Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 62d33ca1048f4e08eaeb026c7b79239b4605b636
master date: 2014-01-07 14:59:31 +0100
If {copy_to,clear}_guest_offset() fails, we would leak the domain mappings for
l4 thru l1.
Fixing this requires having conditional unmaps on the faulting path, which in
turn requires explicitly initialising the pointers to NULL because of the
early ENOMEM exit.
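A sketch of the resulting pattern, with hypothetical map_level()/unmap_level()
helpers standing in for map_domain_page()/unmap_domain_page():

    #include <errno.h>
    #include <stdlib.h>

    static void *map_level(int level) { (void)level; return malloc(64); }
    static void unmap_level(void *p) { free(p); }

    static int dump_tables(void)
    {
        void *l4 = NULL, *l3 = NULL, *l2 = NULL, *l1 = NULL;
        int rc = -ENOMEM;

        if ( !(l4 = map_level(4)) ) goto out;
        if ( !(l3 = map_level(3)) ) goto out;
        if ( !(l2 = map_level(2)) ) goto out;
        if ( !(l1 = map_level(1)) ) goto out;

        rc = 0;   /* ... copy the tables out to the guest; a failing copy
                   * also ends up at the label below ... */

     out:
        /* Conditional unmaps cover both the early -ENOMEM exit and a
         * failure after all four levels were mapped. */
        if ( l1 ) unmap_level(l1);
        if ( l2 ) unmap_level(l2);
        if ( l3 ) unmap_level(l3);
        if ( l4 ) unmap_level(l4);
        return rc;
    }

    int main(void)
    {
        return dump_tables() ? 1 : 0;
    }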
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <JBeulich@suse.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 0725f326358cbb2ba7f9626976e346b963d74c37
master date: 2013-12-17 16:38:07 +0100
Jan Beulich [Mon, 13 Jan 2014 14:57:07 +0000 (15:57 +0100)]
x86: fix linear page table construction in alloc_l2_table()
Slot 0 got updated when slot 3 was meant. The mistake was hidden by
create_pae_xen_mappings() correcting things immediately afterwards
(i.e. before the new entries could get used the first time).
Reported-by: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg> Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Keir Fraser <keir@xen.org> Acked-by: Tim Deegan <tim@xen.org>
Jan Beulich [Fri, 10 Jan 2014 10:44:10 +0000 (11:44 +0100)]
x86/p2m: restrict auditing to debug builds
... since iterating through all of a guest's pages may take unduly
long.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com> Acked-by: Tim Deegan <tim@xen.org>
master commit: 4476d05cf5e8d3880f88ce16649766df67e0791e
master date: 2013-12-13 15:06:11 +0100
Jan Beulich [Fri, 10 Jan 2014 10:42:35 +0000 (11:42 +0100)]
x86/PV: don't commit debug register values early in arch_set_info_guest()
They're being taken care of later (via set_debugreg()), and temporarily
copying them into struct vcpu means that bad values may end up getting
loaded during context switch if the vCPU is already running and the
function errors out between the premature and real commit step, leading
to the same issue that XSA-12 dealt with.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
master commit: 398c39b6c18d0b55acfc88f5ee75b3a793e6eeec
master date: 2013-12-11 10:33:19 +0100
Jan Beulich [Fri, 10 Jan 2014 10:41:53 +0000 (11:41 +0100)]
x86/cpuidle: publish new states only after fully initializing them
Since state information coming from Dom0 can arrive at any time, on
any CPU, we ought to make sure that a new state is fully initialized
before the target CPU might be using it.
Once touching that code, also do minor cleanup: A missing (but benign)
"break" and some white space adjustments.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
master commit: 4ca6f9f0377a30755a299cc60a6d44ab6c3b34d0
master date: 2013-12-11 10:30:02 +0100
In the original function, mmap() could be called with a length of -1 if the
second lseek failed and the caller had not provided max_size.
While fixing up this error, improve the logging of other error paths. I know
from personal experience that debugging failures of this function is rather
difficult given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in
the logs.
Andrew Cooper [Mon, 25 Nov 2013 11:05:47 +0000 (11:05 +0000)]
tools/xc_restore: Initialise console and store mfns
If the console or store mfn chunks are not present in the migration stream,
stack junk gets reported for the mfns.
XenServer had a very hard to track down VM corruption issue caused by exactly
this. Xenconsoled would connect to a junk mfn and increment the ring
pointer if the junk happened to look like a valid gfn.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> CC: Keir Fraser <keir@xen.org> CC: Jan Beulich <JBeulich@suse.com>
(cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
Jan Beulich [Tue, 10 Dec 2013 15:21:57 +0000 (16:21 +0100)]
IOMMU: clear "don't flush" override on error paths
xenmem_add_to_physmap() and iommu_populate_page_table() each have
an error path that fails to clear that flag, thus suppressing further
flushes on the respective pCPU.
In iommu_populate_page_table() also slightly re-arrange code to avoid
the false impression of the flag in question being guarded by a
domain's page_alloc_lock.
This is CVE-2013-6400 / XSA-80.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: 552b7fcb9a70f1d4dd0e0cd5fb4d3d9da410104a
master date: 2013-12-10 16:10:37 +0100
Andrew Cooper [Mon, 9 Dec 2013 13:47:43 +0000 (14:47 +0100)]
x86/boot: fix BIOS memory corruption on certain IBM systems
IBM System x3530 M4 BIOSes (including the latest available at the time of this
patch) will corrupt a byte at physical address 0x105ff1 to the value of 0x86
if %esp has the value 0x00080000 when issuing an `int $0x15 (ax=0xec00)` to
inform the system about our intended operating mode.
Xen gets unhappy when the bootloader has placed its .text section over
this specific region of RAM.
After dropping into 16bit mode, clear all 32 bits of %esp, and for the BIOS
call already documented to be affected by BIOS bugs clear all GPRs.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
master commit: 1ed76797439e384de18fcd6810bd4743d4f38b1e
master date: 2013-12-06 11:28:00 +0100
Daniel Kiper [Mon, 9 Dec 2013 13:47:07 +0000 (14:47 +0100)]
x86: fix early boot command line parsing
There is no reliable way to encode the NUL character as a character, so encode
it as a number. Read: http://sourceware.org/binutils/docs/as/Characters.html.
Octal and hex encoding do not work on at least one system (GNU assembler
version 2.22 (x86_64-linux-gnu) using BFD version (GNU Binutils for Debian) 2.22).
Without this fix e.g. no-real-mode option at the end of xen.gz command line
is not detected.
Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org>
master commit: dc37e0bfffc673f4bdce1d69ad86098bfb0ab531
master date: 2013-12-04 13:26:37 +0100
Apart from the Coverity-detected lock order reversal (a domain's
page_alloc_lock taken with the heap lock already held), calling
put_page() with heap_lock is a bad idea too (as a possible descendant
from put_page() is free_heap_pages(), which wants to take this very
lock).
From all I can tell the region over which heap_lock was held was far
too large: All we need to protect are the call to mark_page_offline()
and reserve_heap_page() (and I'd even put under question the need for
the former). Hence by slightly re-arranging the if/else-if chain we
can drop the lock much earlier, at once no longer covering the two
put_page() invocations.
Once at it, do a little bit of other cleanup: Put the "pod_replace"
code path inline rather than at its own label, and drop the effectively
unused variable "ret".
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Keir Fraser <keir@xen.org>
master commit: d4837a56da4a59259dd0cf9f3bdc073159d81d7a
master date: 2013-12-03 12:40:57 +0100
The ptr calculation shall take the offset into the page into account
when ptr is valid.
Reported regression on NetBSD's port-xen with last known working libxen
being rev 2.9. This corrupts the kernel symbol table when the table is
not loaded on a page boundary.
Issue was tracked down by FastIce and Jeff Rizzo. See also
http://mail-index.netbsd.org/port-xen/2013/10/16/msg008088.html
Signed-off-by: Jean-Yves Migeon <jym@NetBSD.org> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Ian Campbell <ian.campbell@citrix.com>
master commit: cb08944a482a5e80a3ff1113f0735761cc4c6cb8
master date: 2013-11-29 11:07:01 +0000
Feng Wu [Mon, 9 Dec 2013 13:45:00 +0000 (14:45 +0100)]
x86: properly handle MSI-X unmask operation from guests
For a pass-through device with MSI-X capability, when the guest tries
to unmask an MSI-X interrupt for the device, Xen
doesn't clear the mask bit in hardware in the following
scenario, which causes network disconnection:
1. Guest masks the MSI-X interrupt
2. Guest updates the address and data for it
3. Guest unmasks the MSI-X interrupt (this is the problematic step)
In step #3 above, Xen doesn't handle things well. When the guest tries
to unmask the MSI-X interrupt, it traps to Xen; Xen just returns to QEMU
if it notices that the address or data has been modified by the guest before,
and QEMU then updates Xen with the latest address/data value via
hypercall. However, in this whole process the MSI-X unmask
operation is missing, which means Xen doesn't clear the mask bit in
hardware for the MSI-X interrupt, so it remains disabled - and that is why
the network connection is lost.
This patch fixes this issue.
Signed-off-by: Feng Wu <feng.wu@intel.com>
Only latch the address if the guest really is unmasking the entry.
Clean up the entire change.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 74fd0036deb585a139b63b26db025805ecedc37a
master date: 2013-11-27 15:15:43 +0100
Liu Jinsong [Mon, 9 Dec 2013 13:43:34 +0000 (14:43 +0100)]
VMX: fix cr0.cd handling
This patch solves XSA-60 security hole:
1. For guests w/o VT-d, and for guests with VT-d but snooped, Xen needs to
do nothing, since the hardware snoop mechanism ensures cache coherency.
2. For guests with VT-d but non-snooped, cache coherency cannot be
guaranteed by h/w snoop, therefore UC type needs to be emulated for the guest:
2.1). if it works w/ Intel EPT, set guest IA32_PAT fields as UC so that
guest memory types are all UC.
2.2). if it works w/ shadow, drop all shadows so that any new ones would
be created on demand w/ UC.
This patch also fixes a bug in shadow cr0.cd setting. The current shadow code
has a small window between cache flush and TLB invalidation, resulting in
possible cache pollution. This patch pauses vCPUs so that no vCPU context gets
involved in that window.
This is CVE-2013-2212 / XSA-60.
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Tim Deegan <tim@xen.org> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jun Nakajima <jun.nakajima@intel.com> Acked-by: Keir Fraser <keir@xen.org>
master commit: 62652c00efa55fb45374bcc92f7d96fc411aebb2
master date: 2013-11-06 10:12:36 +0100
Liu Jinsong [Mon, 9 Dec 2013 13:41:44 +0000 (14:41 +0100)]
VMX: remove the problematic set_uc_mode logic
The XSA-60 security hole comes from the problematic vmx_set_uc_mode.
This patch removes the vmx_set_uc_mode logic, which will be replaced by
a PAT-based approach in a later patch.
This is CVE-2013-2212 / XSA-60.
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Jun Nakajima <jun.nakajima@intel.com>
master commit: 1c84d046735102e02d2df454ab07f14ac51f235d
master date: 2013-11-06 10:12:00 +0100
Liu Jinsong [Mon, 9 Dec 2013 13:40:51 +0000 (14:40 +0100)]
VMX: disable EPT when !cpu_has_vmx_pat
Recently Oracle developers found a Xen security issue with DoS impact,
designated XSA-60; please refer to http://xenbits.xen.org/xsa/advisory-60.html
Basically it involves how the guest cr0.cd setting is handled, which in
some environments consumes much time, resulting in DoS-like behavior.
This is a preparatory patch for fixing XSA-60. A later patch will fix XSA-60
via PAT in the Intel EPT case, which depends on cpu_has_vmx_pat.
This is CVE-2013-2212 / XSA-60.
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Tim Deegan <tim@xen.org> Acked-by: Jun Nakajima <jun.nakajima@intel.com>
master commit: c13b0d65ddedd74508edef5cd66defffe30468fc
master date: 2013-11-06 10:11:18 +0100
Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Tim Deegan <tim@xen.org>
Use _SEGMENT_* instead of plain numbers and adjust a comment.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 6ed4bfbabd487b41021caa7ed03cee1f00ecbabf
master date: 2013-11-26 09:54:21 +0100
Liu Jinsong [Mon, 2 Dec 2013 14:56:09 +0000 (15:56 +0100)]
x86/xsave: fix nonlazy state handling
Nonlazy xstates should be xsaved each time vcpu_save_fpu() runs.
Operations on nonlazy xstates do not trigger a #NM exception, so
the state must be restored whenever a vCPU is scheduled in and saved
whenever it is scheduled out.
Currently this bug affects the AMD LWP feature, and later the Intel MPX
feature. With the bugfix both LWP and MPX will work fine.
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
Furthermore, during restore we also need to set nonlazy_xstate_used
according to the incoming accumulated XCR0.
Also adjust the changes to i387.c such that there won't be a pointless
clts()/stts() pair.
David Vrabel [Mon, 2 Dec 2013 14:55:16 +0000 (15:55 +0100)]
x86/crash: disable the watchdog NMIs on the crashing cpu
nmi_shootdown_cpus() is called during a crash to park all the other
CPUs. This changes the NMI trap handlers which means there's no point
in having the watchdog still running.
This also disables the watchdog before executing any crash kexec image
and prevents the image from receiving unexpected NMIs.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
PVOps Linux as a kexec image shoots itself in the foot otherwise.
On a Core2 system, Linux declares a firmware bug and tries to invert some bits
in the performance counter register. It ends up setting the number of retired
instructions to generate another NMI to fewer instructions than the NMI
interrupt path itself, and ceases to make any useful progress.
The call to disable_lapic_nmi_watchdog() must be this late into the kexec path
to be sure that this cpu is the one which will execute the kexec image.
Otherwise there are race conditions where the NMIs might be disabled on the
wrong cpu, resulting in the kexec image still receiving NMIs.
x86/hvm: reset TSC to 0 after domain resume from S3
Host S3 implicitly resets the host TSC to 0, but the TSC offset for HVM
domains is not recalculated when they resume, causing it to go into
negative values. In Linux guests using the tsc clocksource, this results in
a hang after the wrap back to positive values, since the tsc clocksource
implementation expects the TSC to have been reset.