xen/sched: add is_running indicator to struct sched_unit
Add an is_running indicator to struct sched_unit which will be set
whenever the unit is being scheduled. Switch scheduler code to use
unit->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_unit being updated whenever the is_running indicator is changed.
Use that new field in the schedulers instead of the similar vcpu field.
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- is_unit_online() similar to is_vcpu_online() (returns true when any
of its vcpus is online)
- unit_runnable() like vcpu_runnable() (returns true if any of its
vcpus is runnable)
- sched_set_res() to set the current scheduling resource of a unit
- sched_unit_master() to get the current processor of a unit (returns
the master_cpu of the scheduling resource of a unit)
- sched_{set|clear}_pause_flags[_atomic]() to modify pause_flags of the
associated vcpu(s) (modifies the pause_flags of all vcpus of the
unit)
- sched_idle_unit() to get the sched_unit pointer of the idle vcpu of a
specific physical cpu
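For orientation only, here is a much simplified sketch of how such
unit-level helpers can be layered on top of the existing vcpu state.
The mock struct layouts below are assumptions for illustration, not the
actual Xen headers:

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified mock types -- illustration only, not the real Xen layouts. */
    struct vcpu { bool is_idle; bool is_online; };
    struct sched_resource { unsigned int master_cpu; };

    struct sched_unit {
        struct vcpu *vcpu;             /* representative vcpu (one per unit for now) */
        struct sched_resource *res;    /* resource the unit currently runs on */
        bool is_running;               /* set whenever the unit is being scheduled */
        uint64_t state_entry_time;     /* updated whenever is_running changes */
    };

    static inline bool is_idle_unit(const struct sched_unit *unit)
    {
        return unit->vcpu->is_idle;
    }

    static inline bool is_unit_online(const struct sched_unit *unit)
    {
        return unit->vcpu->is_online;  /* real helper: true if any vcpu is online */
    }

    static inline unsigned int sched_unit_master(const struct sched_unit *unit)
    {
        return unit->res->master_cpu;
    }

    static inline void sched_set_res(struct sched_unit *unit,
                                     struct sched_resource *res)
    {
        unit->res = res;
    }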
xen/sched: move some per-vcpu items to struct sched_unit
Affinities are scheduler specific attributes, they should be per
scheduling unit. So move all affinity related fields in struct vcpu
to struct sched_unit. While at it switch affinity related functions in
sched-if.h to use a pointer to sched_unit instead of vcpu as parameter.
The affinity_broken flag must be kept per vcpu as it is related to
guest actions on specific vcpus. When support of multiple vcpus per
sched_unit is being added, a unit is regarded as being subject to
"broken affinity" when any of its vcpus has the affinity_broken flag
set.
xen/sched: move per cpu scheduler private data into struct sched_resource
This prepares support of larger scheduling granularities, e.g. core
scheduling.
While at it move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
into sched.h removing the need for including sched-if.h in cpuidle.h.
For that purpose remove urgent_count from the scheduler private data
and make it a plain percpu variable.
xen/sched: switch schedule_data.curr to point at sched_unit
In preparation of core scheduling let the percpu pointer
schedule_data.curr point to a struct sched_unit instead of the related
vcpu. At the same time rename the per-vcpu scheduler specific structs
to per-unit ones.
xen/sched: let pick_cpu return a scheduler resource
Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to
reflect that change.
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each scheduling unit
that is running is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each core or socket only.
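As a rough illustration of the abstraction (the accessor name and the
per-cpu mapping below are assumptions for this sketch, not necessarily
what the patch uses), each physical cpu currently maps to exactly one
resource:

    #include <stddef.h>

    /* Illustration only: one sched_resource per physical cpu for now. */
    struct sched_resource {
        unsigned int master_cpu;   /* cpu doing the scheduling for this resource */
        /* ... scheduler private data, currently running unit, etc. ... */
    };

    #define NR_CPUS 64             /* arbitrary value for the sketch */
    static struct sched_resource *sched_res[NR_CPUS];

    static inline struct sched_resource *get_sched_res(unsigned int cpu)
    {
        return cpu < NR_CPUS ? sched_res[cpu] : NULL;
    }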
xen/sched: build a linked list of struct sched_unit
In order to make it easy to iterate over sched_unit elements of a
domain, build a single linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator for_each_sched_unit_vcpu() which
will iterate over all vcpus of a sched_unit (right now only one). This
will be needed later for larger scheduling granularity (e.g. cores).
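A minimal sketch of the two iterators follows; the list layout is
simplified here and only meant to illustrate the shape of the macros
(the macro and field names are assumptions), not the actual
implementation:

    #include <stddef.h>

    /* Simplified mock layout -- illustration only. */
    struct vcpu       { struct vcpu *next_in_list; };
    struct sched_unit { struct sched_unit *next_in_list;   /* per-domain list */
                        struct vcpu *vcpu_list; };          /* vcpus of this unit */
    struct domain     { struct sched_unit *sched_unit_list; };

    /* Iterate over all scheduling units of a domain. */
    #define for_each_sched_unit(d, u) \
        for ( (u) = (d)->sched_unit_list; (u) != NULL; (u) = (u)->next_in_list )

    /* Iterate over all vcpus of a unit (exactly one today; in this mock each
     * unit owns a NULL-terminated vcpu list of its own). */
    #define for_each_sched_unit_vcpu(u, v) \
        for ( (v) = (u)->vcpu_list; (v) != NULL; (v) = (v)->next_in_list )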
xen/sched: use new sched_unit instead of vcpu in scheduler interfaces
In order to prepare core- and socket-scheduling use a new struct
sched_unit instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the change of the parameter.
In the schedulers, rename local functions that were switched to sched_unit, too.
Rename alloc_vdata and free_vdata functions to alloc_udata and
free_udata.
For now this new struct will contain a domain, a vcpu pointer and a
unit_id only and is allocated at vcpu creation time.
Simon Gaiser [Fri, 27 Sep 2019 13:04:08 +0000 (15:04 +0200)]
x86: allow stubdom access to irq created for msi
Stubdomains need to be given sufficient privilege over the guest which they
provide emulation for in order for PCI passthrough to work correctly.
When an HVM domain tries to enable MSI, QEMU in the stubdomain calls
PHYSDEVOP_map_pirq, but later it needs to call XEN_DOMCTL_bind_pt_irq as
part of xc_domain_update_msi_irq. Give the stubdomain enough permissions
over the mapped interrupt in order to bind it successfully to its
target domain.
This is not needed for PCI INTx, because IRQ in that case is known
beforehand and the stubdomain is given permissions over this IRQ by
libxl__device_pci_add (there's a do_pci_add against the stubdomain).
create_irq() already grants IRQ access to the hardware_domain, with the
assumption that the device model lives there.
Modify create_irq() to take an additional parameter indicating whether to
grant permissions to the domain creating the IRQ, which may be dom0 or a
stubdomain. Do this instead of always granting access to the
hardware_domain. Save the ID of the domain given permission, to revoke it in
destroy_irq() - this is easier and cleaner than replaying the logic of the
create_irq() parameter. Use a domid instead of an actual reference to the
domain, because the domain might get destroyed before the IRQ is destroyed
(a stubdomain is destroyed before its target domain). That is not an issue,
because IRQ permissions live within the domain structure, so destroying a
domain also implicitly revokes the permission. Potential domid reuse is
detected by checking whether that domain still has permission over the IRQ
being destroyed.
Then, adjust all callers to provide the parameter. In case of Xen
internal allocations, set it to false, but for domain accessible
interrupts set it to true.
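Sketched below, purely to illustrate the described flow: the helper
names, the info structure and the way the creator domid is stored are
all assumptions made for this sketch, not the code of the actual patch.

    #include <stdbool.h>

    typedef unsigned short domid_t;

    struct irq_info_sketch {
        domid_t creator_domid;      /* domain granted access at creation time */
    };

    extern struct irq_info_sketch irq_info[];
    extern domid_t current_domain_id(void);
    extern int grant_irq_access(domid_t d, int irq);
    extern int revoke_irq_access(domid_t d, int irq);
    extern bool domain_has_irq_access(domid_t d, int irq);

    int create_irq_sketch(int irq, bool grant_access)
    {
        if ( grant_access )
        {
            /* Grant the creating domain (dom0 or a stubdomain) access ... */
            grant_irq_access(current_domain_id(), irq);
            /* ... and remember who was granted, so it can be revoked later. */
            irq_info[irq].creator_domid = current_domain_id();
        }
        return irq;
    }

    void destroy_irq_sketch(int irq)
    {
        domid_t d = irq_info[irq].creator_domid;

        /* Guard against domid reuse: only revoke if that domain (still)
         * has permission over this IRQ. */
        if ( domain_has_irq_access(d, irq) )
            revoke_irq_access(d, irq);
    }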
Inspired by https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch by Eric Chanudet <chanudete@ainfosec.com>.
Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
xen/arm: Restrict "p2m_ipa_bits" according to the IOMMU requirements
There is a strict requirement for the IOMMU which wants to share
the P2M table with the CPU. The IOMMU's Stage-2 input size must be equal
to the P2M IPA size. It is not a problem when the IOMMU can support
all values the CPU supports. In that case, the IOMMU driver would just
use any "p2m_ipa_bits" value as is. But, there are cases when not.
In order to make P2M sharing possible on platforms whose
IOMMUs have a limitation in the maximum Stage-2 input size, introduce
the following logic.
First initialize the IOMMU subsystem and gather requirements regarding
the maximum IPA bits supported by each IOMMU device to figure out
the minimum value among them. In the P2M code, take the IOMMU
requirements into account and choose a suitable "pa_range" according
to the restricted "p2m_ipa_bits".
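Schematically, the logic amounts to tracking the minimum across IOMMUs
and clamping the CPU's choice. The function names below are
illustrative only, not the ones added by the patch:

    /* Minimum Stage-2 input size supported by all registered IOMMUs. */
    static unsigned int iommu_max_ipa_bits = 64;

    /* Called by each IOMMU driver during initialisation. */
    void iommu_restrict_ipa_bits(unsigned int max_ipa_bits)
    {
        if ( max_ipa_bits < iommu_max_ipa_bits )
            iommu_max_ipa_bits = max_ipa_bits;
    }

    /* Later, in the P2M code: pick an IPA size not exceeding the
     * IOMMU restriction, then derive pa_range from it. */
    unsigned int choose_p2m_ipa_bits(unsigned int cpu_ipa_bits)
    {
        return cpu_ipa_bits < iommu_max_ipa_bits ? cpu_ipa_bits
                                                 : iommu_max_ipa_bits;
    }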
microcode_update_lock is to prevent logical threads of the same core from
updating microcode at the same time. But due to using a global lock, it
also prevents parallel microcode updating on different cores.
Remove this lock in order to update microcode in parallel. It is safe
because we have already ensured serialization of sibling threads at the
caller side.
1. For late microcode update, do_microcode_update() ensures that only one
sibling thread of a core can update microcode.
2. For microcode update during system startup or CPU-hotplug,
microcode_mutex() guarantees update serialization of logical threads.
3. get/put_cpu_bitmaps() prevents the concurrency of CPU-hotplug and
late microcode update.
Note that the printk calls in apply_microcode() and svm_host_osvw_init()
(AMD only) are still processed sequentially.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
This patch ports microcode improvement patches from the Linux kernel.
Before you read any further: the early loading method is still the
preferred one and you should always do that. The following patch is
improving the late loading mechanism for long running jobs and cloud use
cases.
Gather all cores and serialize the microcode update on them by doing it
one-by-one to make the late update process as reliable as possible and
avoid potential issues caused by the microcode update.
x86/microcode: reduce memory allocation and copy when creating a patch
To create a microcode patch from a vendor-specific update,
allocate_microcode_patch() copied everything from the update.
This is not efficient. Essentially, we just need to go through
the ucodes in the blob, find the one with the newest revision and
install it into the microcode_patch. In the process, buffers
like mc_amd, equiv_cpu_table (on the AMD side), and mc (on the Intel
side) can be reused. The microcode_patch is now allocated only after
it is known that there is a matching ucode.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/microcode: unify ucode loading during system bootup and resuming
During system bootup and resuming, CPUs just load the cached ucode.
So one unified function microcode_update_one() is introduced. It
takes a boolean to indicate whether ->start_update should be called.
Since early_microcode_update_cpu() is only called on the BSP (APs call
the unified function), start_update is always true and so remove
this parameter.
There is a functional change: ->start_update is called on the BSP and
->end_update_percpu is called during system resuming. Neither was
invoked by the previous microcode_resume_cpu().
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/microcode: split out apply_microcode() from cpu_request_microcode()
During late microcode loading, apply_microcode() is invoked in
cpu_request_microcode(). To make late microcode update more reliable,
we want to put apply_microcode() into stop_machine context. So
we split it out of cpu_request_microcode(). In general, for both
early loading on BSP and late loading, cpu_request_microcode() is
called first to get the matching microcode update contained in
the blob and then apply_microcode() is invoked explicitly on each
cpu in common code.
Given that all CPUs are supposed to have the same signature, parsing
microcode only needs to be done once. So cpu_request_microcode() is
also moved out of microcode_update_cpu().
In some cases (e.g. a broken BIOS), the system may have multiple
revisions of the microcode update. So we would try to load a microcode
update as long as it covers the current cpu. And if a cpu loads this patch
successfully, the patch would be stored into the patch cache.
Note that calling ->apply_microcode() itself doesn't require any
lock being held. But the parameter passed to it may be protected
by some locks. E.g. microcode_update_cpu() acquires microcode_mutex
to avoid microcode_cache being updated by others.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Paul Durrant [Fri, 27 Sep 2019 12:07:42 +0000 (14:07 +0200)]
x86/iommu: fix PVH dom0 settings
PVH dom0 must operate with the iommu settings in 'strict' mode i.e. only the
domain's own pages will be mapped in the IOMMU. The check_hwdom_reqs() is
supposed to ensure this. Unfortunately the test for a PVH dom0 is made
using paging_mode_translate() and, when commit f89f5558 "remove late
(on-demand) construction of IOMMU page tables" moved the call of
check_hwdom_reqs() from iommu_hwdom_init() to iommu_domain_init(), that
test became ineffective (because iommu_domain_init() is called before
paging_enable()).
This patch replaces the test of paging_mode_translate() with a test of
hap_enabled(), and also verifies 'strict' mode is turned on in
arch_iommu_check_autotranslated_hwdom().
Reported-by: Roger Pau Monne <roger.pau@citrix.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Commit 6338c9ead9ff9ef6 ("debugtrace: add per-cpu buffer option") had
a rebase error when using per-cpu buffers: the global buffer address
would always be set to that of the last per-cpu buffer allocated.
The result would be that when dumping the buffers, the last cpu's buffer
is always shown as empty, as those entries are already printed in the
global buffer's dump.
Fix that.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is kind of brutal.
So instead of doing the reschedule dance just operate on the timer
directly. By protecting periodic timer modifications against concurrent
timer activation via a per-vcpu lock, it is no longer even required to
bother the target vcpu at all for updating its timer.
Even with the additional lock there is no more serialization involved
than in the current solution, as today de-scheduling the vcpu requires
taking the schedule lock, which has a much higher contention
probability than the new lock.
Rename the function to vcpu_set_periodic_timer() as this now reflects
the functionality.
The arinc653 scheduler's free_vdata() function is missing proper
locking: as it is modifying the scheduler's private vcpu list it needs
to take the scheduler lock during that operation.
sched: don't let XEN_RUNSTATE_UPDATE leak into vcpu_runstate_get()
vcpu_runstate_get() should never return a state entry time with
XEN_RUNSTATE_UPDATE set. To avoid this let update_runstate_area()
operate on a local runstate copy.
As it is required to first set the XEN_RUNSTATE_UPDATE indicator in
guest memory, then update all the runstate data, and then at last
clear the XEN_RUNSTATE_UPDATE again, it is much less effort to have
a local copy of the runstate data instead of keeping only a copy of
state_entry_time.
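A compact sketch of the described sequence (simplified, with no error
handling for the guest copy; the structure layout follows the public
interface only loosely, so treat names as illustrative):

    #include <string.h>
    #include <stdint.h>

    #define XEN_RUNSTATE_UPDATE (1ULL << 63)

    struct vcpu_runstate_info {
        int state;
        uint64_t state_entry_time;
        uint64_t time[4];
    };

    /* Work on a local copy so vcpu_runstate_get() callers never see
     * XEN_RUNSTATE_UPDATE in the vcpu's own runstate data. */
    static void update_runstate_area_sketch(struct vcpu_runstate_info *guest,
                                            const struct vcpu_runstate_info *run)
    {
        struct vcpu_runstate_info copy = *run;   /* local copy */

        /* 1. Tell the guest an update is in progress. */
        guest->state_entry_time = copy.state_entry_time | XEN_RUNSTATE_UPDATE;
        /* (a write barrier would go here in real code) */

        /* 2. Update all the runstate data. */
        copy.state_entry_time |= XEN_RUNSTATE_UPDATE;
        memcpy(guest, &copy, sizeof(copy));

        /* 3. Finally clear the indicator again. */
        guest->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
    }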
This problem was introduced with commit 2529c850ea48f036 ("add update
indicator to vcpu_runstate_info").
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com>
Julien Grall [Mon, 12 Aug 2019 15:30:37 +0000 (16:30 +0100)]
xen/arm32: head: Use a page mapping for the 1:1 mapping in create_page_tables()
At the moment the function create_page_tables() will use 1GB/2MB
mapping for the identity mapping. As we don't know what is present
before and after Xen in memory, we may end up mapping
device/reserved-memory with cacheable attributes. This may result in
mismatched attributes as other users may access the same region
differently.
To prevent any issues, we should only map the strict minimum in the
1:1 mapping. A check in xen.lds.S already guarantees anything
necessary for turning on the MMU fits in a page (at the moment 4K).
As only one page will be mapped for the 1:1 mapping, it is necessary
to pre-allocate a page for the 3rd level table.
Julien Grall [Mon, 12 Aug 2019 15:30:25 +0000 (16:30 +0100)]
xen/arm32: head: Introduce macros to create table and mapping entry
At the moment, any update to the boot pages is open-coded. This
makes it more difficult to understand the logic of a function as each
update roughly requires 6 instructions.
To ease the readability, two new macros are introduced:
- create_table_entry: Create a page-table entry in a given table.
This can work at any level.
- create_mapping_entry: Create a mapping entry in a given table.
None of the users need to map at any level other than the 3rd
(i.e. page granularity). So the macro only supports 3rd level
mappings.
Unlike arm64, there is no easy way to have a PC-relative address within
the range +/-4GB. In order to be able to use the macros in contexts
with the MMU on or off, the user needs to tell the state of the MMU.
Lastly, take the opportunity to replace the open-coded versions in
setup_fixmap() with the two new macros. The ones in create_page_tables()
will be replaced in a follow-up patch.
Julien Grall [Thu, 27 Jun 2019 14:08:28 +0000 (15:08 +0100)]
xen/arm64: head: Use a page mapping for the 1:1 mapping in create_page_tables()
At the moment the function create_page_tables() will use 1GB/2MB
mapping for the identity mapping. As we don't know what is present
before and after Xen in memory, we may end up to map
device/reserved-memory with cacheable memory. This may result to
mismatched attributes as other users may access the same region
differently.
To prevent any issues, we should only map the strict minimum in the
1:1 mapping. A check in xen.lds.S already guarantees anything
necessary for turning on the MMU fits in a page (at the moment 4K).
As only one page will be mapped for the 1:1 mapping, it is necessary
to pre-allocate a page for the 3rd level table.
Julien Grall [Mon, 17 Jun 2019 13:25:11 +0000 (14:25 +0100)]
xen/arm64: head: Introduce macros to create table and mapping entry
At the moment, any update to the boot pages is open-coded. This
makes it more difficult to understand the logic of a function as each
update roughly requires 6 instructions.
To ease the readability, two new macros are introduced:
- create_table_entry: Create a page-table entry in a given table.
This can work at any level.
- create_mapping_entry: Create a mapping entry in a given table.
None of the users need to map at any level other than the 3rd
(i.e. page granularity). So the macro only supports 3rd level
mappings.
Furthermore, the two macros are capable of working independently of the
state of the MMU.
Lastly, take the opportunity to replace the open-coded versions in
setup_fixmap() with the two new macros. The ones in create_page_tables()
will be replaced in a follow-up patch.
xen/arm32: head: Rework and document setup_fixmap()
At the moment, the fixmap table is only hooked when earlyprintk is used.
This is fine today because in C land, the fixmap is not used by anyone
until the boot CPU switches to the runtime page-tables.
In the future, the boot CPU will not switch between page-tables to
avoid TLB incoherency. Thus, the fixmap table will need to be always
hooked before any use. Let's start doing it now in setup_fixmap().
Lastly, document the behavior and the main registers usage within the
function.
xen/arm32: head: Remove 1:1 mapping as soon as it is not used
The 1:1 mapping may clash with other parts of the Xen virtual memory
layout. At the moment, Xen is handling the clash by only creating a
mapping to the runtime virtual address before enabling the MMU.
The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe as it
replaces mappings without using the Break-Before-Make sequence.
As the 1:1 mapping can be anywhere in the memory, it is easier to remove
all the entries added as soon as the 1:1 mapping is not used rather than
adding the Break-Before-Make sequence everywhere.
It is difficult to track where exactly the 1:1 mapping was created
without a full rework of create_page_tables(). Instead, introduce a new
function remove_identity_mapping() which will look up the top-level
entry for the 1:1 mapping and remove it.
The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables so there is no need to
remove the 1:1 mapping. Note that this still doesn't make the secondary
CPUs' path safe, but it is not making it worse.
Note that the TLB flush sequence is the same sequence as described in
asm-arm/arm32/flushtlb.h, with a twist. Per G5-5532 in ARM DDI 0487D.a,
a dsb nsh is sufficient for a local flush. Note that the section is from
the AArch32 Armv8 spec; I wasn't able to find the exact same section in
the Armv7 spec, but this is expected as local operations only apply to
the non-shareable domain. This was missed while reworking the header and
therefore a more conservative way was adopted.
Julien Grall [Fri, 7 Jun 2019 21:09:32 +0000 (22:09 +0100)]
xen/arm64: head: Rework and document setup_fixmap()
At the moment, the fixmap table is only hooked when earlyprintk is used.
This is fine today because in C land, the fixmap is not used by anyone
until the boot CPU switches to the runtime page-tables.
In the future, the boot CPU will not switch between page-tables to
avoid TLB incoherency. Thus, the fixmap table will need to be always
hooked before any use. Let's start doing it now in setup_fixmap().
Lastly, document the behavior and the main registers usage within the
function.
Julien Grall [Sun, 9 Jun 2019 17:04:40 +0000 (18:04 +0100)]
xen/arm64: head: Remove 1:1 mapping as soon as it is not used
The 1:1 mapping may clash with other parts of the Xen virtual memory
layout. At the moment, Xen is handling the clash by only creating a
mapping to the runtime virtual address before enabling the MMU.
The rest of the mappings (such as the fixmap) will be mapped after the
MMU is enabled. However, the code doing the mapping is not safe as it
replaces mappings without using the Break-Before-Make sequence.
As the 1:1 mapping can be anywhere in the memory, it is easier to remove
all the entries added as soon as the 1:1 mapping is not used rather than
adding the Break-Before-Make sequence everywhere.
It is difficult to track where exactly the 1:1 mapping was created
without a full rework of create_page_tables(). Instead, introduce a new
function remove_identity_mapping() which will look up the top-level
entry for the 1:1 mapping and remove it.
The new function is only called for the boot CPU. Secondary CPUs will
switch directly to the runtime page-tables so there is no need to
remove the 1:1 mapping. Note that this still doesn't make the secondary
CPUs' path safe, but it is not making it worse.
Note that the TLB flush sequence is the same sequence as described in
asm-arm/arm32/flushtlb.h, with a twist. Per D5-2530 in ARM DDI 0487D.a,
a dsb nsh is sufficient for a local flush. This part of the Arm Arm
was missed while reworking the header and therefore a more conservative
way was adopted.
The IPMMU-VMSA is a VMSA-compatible I/O Memory Management Unit (IOMMU)
which provides address translation and access protection functionality
to processing units and interconnect networks.
Please note, the current driver is supposed to work only with the newest
R-Car Gen3 SoC revisions, whose IPMMU hardware supports the stage 2
translation table format and is able to use the CPU's P2M table as is
if it is a 3-level page table (up to 40 bit IPA).
The major differences compared to the Linux driver are:
1. Stage 1/Stage 2 translation. The Linux driver supports Stage 1
translation only (with the Stage 1 translation table format). It manages
the page tables by itself. But the Xen driver supports Stage 2 translation
(with the Stage 2 translation table format) to be able to share the P2M
with the CPU. Stage 1 translation is always bypassed in the Xen driver.
So, the Xen driver is supposed to be used only with the newest R-Car Gen3
SoC revisions (H3 ES3.0, M3-W+, etc.), whose IPMMU H/W supports the
stage 2 translation table format.
2. AArch64 support. The Linux driver uses VMSAv8-32 mode, while the Xen
driver enables Armv8 VMSAv8-64 mode to cover input addresses of up to
40 bits.
3. Context bank (set of page tables) usage. In Xen, each context bank is
mapped to one Xen domain. So, all devices passed through to the
same Xen domain share the same context bank.
4. IPMMU device tracking. In Xen, all IOMMU devices are managed
by a single driver instance. So, the driver uses a global list to keep
track of registered IPMMU devices.
The main purpose of this patch is to add a way to register a DT device
(which is behind the IOMMU) using the generic IOMMU DT bindings [1]
before assigning that device to a domain.
So, this patch adds a new "iommu_add_dt_device" API for adding a DT device
to the IOMMU using the generic IOMMU DT bindings and the previously added
"iommu_fwspec" support. As devices can be assigned to the hardware domain
and other domains, this function is called from two places: handle_device()
and iommu_do_dt_domctl().
Besides that, this patch adds a new "dt_xlate" callback (borrowed from
Linux's "of_xlate") for providing the driver with the DT IOMMU specifier
which describes the IOMMU master interfaces of that device (device IDs, etc).
According to the generic IOMMU DT bindings the content of the required
properties for an IOMMU device/master node (#iommu-cells, iommus) depends
on many factors and is really a driver-dependent thing.
Please note, all IOMMU drivers which support generic IOMMU DT bindings
should use "dt_xlate" and "add_device" callbacks.
We need to have some abstract way to add a new device to the IOMMU
based on the generic IOMMU DT bindings [1] which can be used for
both DT (right now) and ACPI (in the future).
For that reason we can borrow the idea used in Linux these days
called "iommu_fwspec". Having this in place, it will be possible
to configure IOMMU master interfaces of the device (device IDs)
from a single common place and avoid keeping almost identical look-up
implementations in each IOMMU driver.
There is no need to port the whole implementation of "iommu_fwspec"
to Xen, we could, probably, end up with a much simpler solution,
some "stripped down" version which fits our requirements.
So, this patch adds the following:
1. A common structure "iommu_fwspec" to hold the per-device
firmware data
2. New member "iommu_fwspec" of struct device
3. Functions/helpers to deal with "dev->iommu_fwspec"
It should be noted that in comparison with the original "iommu_fwspec",
Xen's variant doesn't contain some fields which are not really
needed at the moment (ops, flag), and the "iommu_fwnode" field was replaced
by "iommu_dev" to avoid porting a lot of code (to support "fwnode_handle")
with little benefit.
Also, while here, introduce xmalloc(xzalloc)_flex_struct() to
allocate space for a structure with a flexible array of typed objects.
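To make the shape of the new data concrete, here is a stripped-down
sketch. Field names follow the description above, but the exact layout
and the allocation helper are assumptions, not the Xen code:

    #include <stdint.h>
    #include <stdlib.h>

    struct device;

    /* Per-device firmware data (simplified). */
    struct iommu_fwspec {
        struct device *iommu_dev;   /* the IOMMU this device is attached to */
        unsigned int num_ids;       /* number of master interface IDs below */
        uint32_t ids[];             /* flexible array of device IDs */
    };

    struct device {
        struct iommu_fwspec *iommu_fwspec;
        /* ... */
    };

    /* Allocate space for the struct plus n trailing ids -- the kind of
     * pattern xmalloc_flex_struct() is meant to cover. */
    static struct iommu_fwspec *fwspec_alloc(unsigned int n)
    {
        return calloc(1, sizeof(struct iommu_fwspec) + n * sizeof(uint32_t));
    }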
Suggested-by: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> CC: Andrew Cooper <andrew.cooper3@citrix.com> CC: George Dunlap <George.Dunlap@eu.citrix.com> CC: Ian Jackson <ian.jackson@eu.citrix.com> CC: Julien Grall <julien.grall@arm.com> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: Stefano Stabellini <sstabellini@kernel.org> CC: Tim Deegan <tim@xen.org> CC: Wei Liu <wl@xen.org>
This patch introduces a type-unsafe function which, besides
re-allocation, handles the following corner cases:
1. if the requested size is zero, it will behave like xfree
2. if the incoming pointer is not valid (NULL or ZERO_BLOCK_PTR),
it will behave like xmalloc
If both the pointer and the size are valid, the function will re-allocate
and copy only if the requested size and alignment don't fit in the already
allocated space.
A subsequent patch will add type-safe helper macros.
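For illustration, the described corner cases behave like the following
sketch built on the standard C allocator; the ZERO_BLOCK_PTR value shown
is only a placeholder for Xen's sentinel:

    #include <stdlib.h>

    #define ZERO_BLOCK_PTR ((void *)-1L)   /* sentinel for zero-size allocations */

    void *xrealloc_sketch(void *ptr, size_t size)
    {
        if ( size == 0 )                   /* 1. zero size behaves like xfree() */
        {
            if ( ptr && ptr != ZERO_BLOCK_PTR )
                free(ptr);
            return ZERO_BLOCK_PTR;
        }

        if ( !ptr || ptr == ZERO_BLOCK_PTR )   /* 2. invalid pointer: like xmalloc() */
            return malloc(size);

        /* Otherwise re-allocate (and copy) only when needed -- here we simply
         * delegate to realloc(), which already copies when it must. */
        return realloc(ptr, size);
    }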
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
[julien: cosmetic changes] Acked-by: Julien Grall <julien.grall@arm.com> CC: Andrew Cooper <andrew.cooper3@citrix.com> CC: George Dunlap <George.Dunlap@eu.citrix.com> CC: Ian Jackson <ian.jackson@eu.citrix.com> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: Stefano Stabellini <sstabellini@kernel.org> CC: Tim Deegan <tim@xen.org> CC: Wei Liu <wl@xen.org> CC: Paul Durrant <paul.durrant@citrix.com>
iommu/arm: Add ability to handle deferred probing request
This patch adds the minimal required support to the generic IOMMU
framework to be able to handle the case when an IOMMU driver requests
deferred probing for a device.
In order not to pull Linux's error code (-EPROBE_DEFER) into Xen,
we have chosen -EAGAIN to be used for indicating that device
probing is deferred.
This is needed for the upcoming IPMMU driver, which may request
deferred probing depending on which device is probed first
(there is a dependency between these devices: the Root device must be
registered before Cache devices. If that is not the case, the driver will
deny further Cache device probes until the Root device is registered).
As we can't guarantee a fixed pre-defined order for the device nodes
in DT, we need to be ready for the situation where devices are
probed in "any" order.
iommu/arm: Add iommu_helpers.c file to keep common for IOMMUs stuff
Introduce a separate file to keep various helpers which could be used
by more than one IOMMU driver in order not to duplicate code.
The first candidates to be moved to the new file are SMMU driver's
"map_page/unmap_page" callbacks. These callbacks neither contain any
SMMU specific info nor perform any SMMU specific actions and are going
to be the same across all IOMMU drivers whose H/W IP shares the P2M
with the CPU, as the SMMU does.
So, move callbacks to iommu_helpers.c for the upcoming IPMMU driver
to be able to re-use them.
Paul Durrant [Thu, 26 Sep 2019 10:03:08 +0000 (11:03 +0100)]
iommu: avoid triggering ASSERT_UNREACHABLE() on ARM...
...when the IOMMU is not enabled.
80ff3d338dc9 "iommu: tidy up iommu_use_hap_pt() and need_iommu_pt_sync()
macros" introduced CONFIG_IOMMU_FORCE_PT_SHARE, which causes the global
'iommu_hap_pt_share' option to be replaced with a #define-d value of true.
In this configuration, calling clear_iommu_hap_pt_share() will
trigger the aforementioned ASSERT.
CONFIG_IOMMU_FORCE_PT_SHARE is always selected for ARM builds and,
because clear_iommu_hap_pt_share() is called by the common iommu_setup()
function if the IOMMU is not enabled, it is no longer safe to disable the
IOMMU on ARM systems.
However, running with the IOMMU disabled is a valid configuration for ARM
systems, so this patch rectifies the problem by removing the call to
clear_iommu_hap_pt_share() from common code. As a side effect of this,
however, it becomes possible on x86 systems for iommu_enabled to be false
but iommu_hap_pt_share to be true. Thus the code in sysctl.c
needs to be changed to make sure that XEN_SYSCTL_PHYSCAP_iommu_hap_pt_share
is not erroneously advertised when the IOMMU has been disabled.
Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 11 Sep 2019 18:42:43 +0000 (19:42 +0100)]
x86/cpuid: Enable CPUID Faulting for PV control domains by default
The domain builder no longer uses local CPUID instructions for policy
decisions. This resolves a key issue for PVH dom0s. However, as PV dom0s
have never had faulting enforced, leave a command line option to restore the
old behaviour.
Advertise virtualised faulting support to control domains unless the opt-out
has been used.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Mon, 9 Sep 2019 17:38:35 +0000 (18:38 +0100)]
tools/libxc: Rework xc_cpuid_apply_policy() to use {get,set}_cpu_policy()
The purpose of this change is to stop using xc_cpuid_do_domctl(), and to stop
basing decisions on a local CPUID instruction. This is not a correct or
appropriate way to construct policy information for other domains.
The overwhelming majority of this logic is redundant with the policy logic in
Xen, but has a habit of becoming stale (e.g. c/s 97e4ebdcd76 resulting in the
CPUID.7[1].eax not being offered to guests even when Xen is happy with the
content).
There are a few subtle side effects which need to remain in place. A
successful call to xc_cpuid_apply_policy() must result in a call to
xc_set_domain_cpu_policy() because that is currently the only way the
ITSC/VMX/SVM bits become reflected in the guest's CPUID view. Future cleanup
will remove this side effect.
The topology tweaks are local to libxc. Extend struct cpuid_policy with
enough named fields to express the logic, but keep it identical to before.
Fixing topology representation is another future area of work.
No (expected) change in behaviour.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Tue, 10 Sep 2019 16:08:13 +0000 (17:08 +0100)]
tools/libxc: Rework xc_cpuid_set() to use {get,set}_cpu_policy()
The purpose of this change is to stop using xc_cpuid_do_domctl(), and to stop
basing decisions on a local CPUID instruction. This is not an appropriate way
to construct policy information for other domains.
Obtain the host and domain-max policies from Xen, and mix the results as
before. Provide rather more error logging than before.
No semantic changes to xc_cpuid_set(). There are conceptual problems with
how the function works, which will be addressed in future toolstack work.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Tue, 10 Sep 2019 15:59:20 +0000 (16:59 +0100)]
tools/libxc: Pre-cleanup for xc_cpuid_{set,apply_policy}()
This patch is broken out just to simplify the following two.
For xc_cpuid_set(), document how the 's' and 'k' options work because it is
quite subtle. Replace a memset() with a for loop of 4 explicit NULL
assignments. This mirrors the free()s in the fail path.
For xc_cpuid_apply_policy(), const-ify the featureset pointer. It isn't
written to, and was never intended to be mutable.
Drop three pieces of trailing whitespace.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
This hypercall allows the toolstack to present one combined CPUID and MSR
policy for a domain, which can be audited in one go by Xen; this is
necessary for correctness of the auditing.
Reuse the existing set_cpuid XSM access vector, as this is logically the same
operation.
As x86_cpu_policies_are_compatible() is still only a stub, retain the call to
recalculate_cpuid_policy() to discard unsafe toolstack settings.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Fri, 6 Sep 2019 14:55:59 +0000 (15:55 +0100)]
x86/cpuid: Split update_domain_cpuid_info() in half
update_domain_cpuid_info() currently serves two purposes. First to merge new
CPUID data from the toolstack, and second, to perform any necessary updating
of derived domain/vcpu settings.
The first part of this is going to be superseded by a new and substantially
more efficient hypercall.
Carve the second part out into a new domain_cpu_policy_changed() helper, and
call this from the remains of update_domain_cpuid_info().
This does drop the call_policy_changed, but with the new hypercall in place,
the common case will be a single call per domain. Dropping the optimisation
here allows for a cleaner set of following changes.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
This helper will eventually be the core "can a guest configured like this run
on the CPU?" logic. For now, it is just enough of a stub to allow us to
replace the hypercall interface while retaining the previous behaviour.
It will be expanded as various other bits of CPUID handling get cleaned up.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Thu, 12 Sep 2019 12:03:44 +0000 (13:03 +0100)]
libx86: Proactively initialise error pointers
This results in better behaviour for the caller.
Suggested-by: Jan Beulich <JBeulich@suse.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Fri, 13 Sep 2019 13:45:40 +0000 (14:45 +0100)]
x86/msr: Offer CPUID Faulting to PVH control domains
The control domain exclusion for CPUID Faulting predates dom0 PVH, but the
reason for the exclusion (to allow the domain builder to see host CPUID
values) isn't applicable.
The domain builder *is* broken in PVH control domains, and restricting the use
of CPUID Faulting doesn't make it any less broken. Tweak the logic to only
exclude PV control domains.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
There is a possible case when OP-TEE asks the guest to allocate a shared
buffer, but Xen for some reason can't translate the buffer's addresses. In
this situation we should do two things:
1. Tell the guest to free the allocated buffer, so there will be no memory
leak for the guest.
2. Tell OP-TEE that the buffer allocation failed.
To ask the guest to free the allocated buffer we should do the same
thing as OP-TEE does - issue an RPC request. This is done by filling the
request buffer (luckily we can reuse the same buffer that OP-TEE used
to issue the original request) and then returning to the guest with a
special return code.
Then we need to handle the next call from the guest in a special way: as
the RPC was issued by Xen, not by OP-TEE, it should be handled by Xen.
Basically, this is the mechanism to preempt the OP-TEE mediator.
The same mechanism can be used in the future to preempt the mediator
during translation of large (>512 pages) shared buffers.
Paul Durrant [Wed, 25 Sep 2019 14:14:55 +0000 (16:14 +0200)]
introduce a 'passthrough' configuration option to xl.cfg...
...and hence the ability to disable IOMMU mappings, and control EPT
sharing.
This patch introduces a new 'libxl_passthrough' enumeration into
libxl_domain_create_info. The value will be set by xl either when it parses
a new 'passthrough' option in xl.cfg, or implicitly if there is passthrough
hardware specified for the domain.
If the value of the passthrough configuration option is 'disabled' then
the XEN_DOMCTL_CDF_iommu flag will be clear in the xen_domctl_createdomain
flags, thus allowing the toolstack to control whether the domain gets
IOMMU mappings or not (where previously they were globally set).
If the value of the passthrough configuration option is 'sync_pt' then
a new 'iommu_opts' field in xen_domctl_createdomain will be set with the
value XEN_DOMCTL_IOMMU_no_sharept. This will override the global default
set in iommu_hap_pt_share, thus allowing the toolstack to control whether
EPT sharing is used for the domain.
If the value of passthrough is 'enabled' then xl will choose an appropriate
default according to the type of domain and hardware support.
NOTE: The 'iommu_memkb' overhead in libxl_domain_build_info will now only
be set if passthrough is 'sync_pt' (or xl has chosen this mode as
a default).
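For illustration only (this is not a quote from the xl.cfg
documentation), a guest configuration could then select the behaviour
like this:

    passthrough = "sync_pt"    # IOMMU mappings kept in sync with the P2M, no sharing
    # passthrough = "disabled" # no IOMMU mappings at all for this domain
    # passthrough = "enabled"  # let xl pick a suitable default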
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Christian Lindig <christian.lindig@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Ian Jackson [Wed, 25 Sep 2019 14:14:21 +0000 (16:14 +0200)]
tools/ocaml: abi check: Cope with consecutive relevant enums
If the end of one enum is the `type' line for the next enum, we would
not notice it.
Fix this by reordering the code, and getting rid of the else: now if
the "we are within an enum" branch decides that it's the end of the
enum, it unsets $ei and we then immediately process the line as a "not
within an enum" line - ie as the start of the next one.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
Paul Durrant [Wed, 25 Sep 2019 14:12:49 +0000 (16:12 +0200)]
iommu: tidy up iommu_use_hap_pt() and need_iommu_pt_sync() macros
These macros really ought to live in the common xen/iommu.h header rather
than being distributed amongst architecture specific iommu headers and
xen/sched.h. This patch moves them there.
NOTE: Disabling 'sharept' in the command line iommu options should really
be a hard error on ARM (as opposed to just being ignored), so define
'iommu_hap_pt_share' to be true for ARM (via ARM-selected
CONFIG_IOMMU_FORCE_PT_SHARE).
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com>
Paul Durrant [Wed, 25 Sep 2019 14:12:02 +0000 (16:12 +0200)]
remove late (on-demand) construction of IOMMU page tables
Now that there is a per-domain IOMMU-enable flag, which should be set if
any device is going to be passed through, stop deferring page table
construction until the assignment is done. Also don't tear down the tables
again when the last device is de-assigned; defer that task until domain
destruction.
This allows the has_iommu_pt() helper and iommu_status enumeration to be
removed. Calls to has_iommu_pt() are simply replaced by calls to
is_iommu_enabled(). Remaining open-coded tests of iommu_hap_pt_share can
also be replaced by calls to iommu_use_hap_pt().
The arch_iommu_populate_page_table() and iommu_construct() functions become
redundant, as does the 'strict mode' dom0 page_list mapping code in
iommu_hwdom_init(), and iommu_teardown() can be made static as its only
remaining caller, iommu_domain_destroy(), is within the same source
module.
All in all, about 220 lines of code are removed from the hypervisor (at
the expense of some additions in the toolstack).
NOTE: This patch will cause a small amount of extra resource to be used
to accommodate IOMMU page tables that may never be used, since the
per-domain IOMMU-enable flag is currently set to the value of the
global iommu_enable flag. A subsequent patch will add an option to
the toolstack to allow it to be turned off if there is no intention
to assign passthrough hardware to the domain.
To account for the extra resource, 'iommu_memkb' has been added to
domain_build_info. This patch sets it to a value calculated based
on the domain's maximum memory when the P2M sharing is either not
supported or globally disabled, or zero otherwise. However, when
the toolstack option mentioned above is added, it will also be zero
if the per-domain IOMMU-enable flag is turned off.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com> Acked-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 25 Sep 2019 14:03:48 +0000 (16:03 +0200)]
AMD/IOMMU: tidy struct ivrs_mappings
Move the device flags field up into an unused hole, thus shrinking
overall structure size by 8 bytes. Use bool and uint<N>_t as
appropriate. Drop pointless (redundant) initializations.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
There's no point setting up tables with more space than a PCI device can
use. For both MSI and MSI-X we can determine how many interrupts could
be set up at most. Tables allocated during ACPI table parsing, however,
will (for now at least) continue to be set up to have maximum size.
Note that until we would want to use sub-page allocations here there's
no point checking whether both MSI and MSI-X are supported by a device -
an order-0 allocation will fit the dual case in any event, no matter
that the MSI-X vector count may be smaller than the MSI one.
On my Rome system this reduces space needed from just over 1k pages to
about 125.
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 25 Sep 2019 14:02:21 +0000 (16:02 +0200)]
AMD/IOMMU: replace INTREMAP_ENTRIES
Prepare for the number of entries to not be the maximum possible, by
separating checks against maximum size from ones against actual size.
For caller side simplicity have alloc_intremap_entry() return the
maximum possible value upon allocation failure, rather than the first
just out-of-bounds one.
Have the involved functions take all the subsequently needed
arguments here already, to reduce code churn in the patch actually
making the allocation size dynamic.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Jan Beulich [Wed, 25 Sep 2019 14:01:31 +0000 (16:01 +0200)]
x86/PCI: read maximum MSI vector count early
Rather than doing this every time we set up interrupts for a device
anew (and then in several places) fill this invariant field right after
allocating struct pci_dev.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 25 Sep 2019 14:00:46 +0000 (16:00 +0200)]
AMD/IOMMU: make phantom functions share interrupt remapping tables
Rather than duplicating entries in amd_iommu_msi_msg_update_ire(), share
the tables. This mainly requires some care while freeing them, to avoid
freeing memory blocks twice.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
ACPI tables are free to list far more device coordinates than there are
actual devices. By delaying the table allocations for most cases, and
doing them only when an actual device is known to be present at a given
position, overall memory used for the tables goes down from over 500k
pages to just over 1k (on my system having such ACPI table contents).
Tables continue to get allocated right away for special entries
(IO-APIC, HPET) as well as for alias IDs. While in the former case
that's simply because there may not be any device at a given position,
in the latter case this is to avoid having to introduce ref-counting of
table usage.
The change involves invoking
iterate_ivrs_mappings(amd_iommu_setup_device_table) a second time,
because the function now wants to be able to find PCI devices, which
isn't possible yet when IOMMU setup happens very early during x2APIC
mode setup. In this context amd_iommu_init_interrupt() gets renamed as
well.
The logic adjusting a DTE's interrupt remapping attributes also gets
changed, such that the lack of an IRT would result in target aborted
rather than non-remapped interrupts (should any occur).
Note that for now phantom functions get separate IRTs allocated, as was
the case before.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Wed, 25 Sep 2019 13:53:35 +0000 (15:53 +0200)]
ACPI/cpuidle: bump maximum number of power states we support
Commit 4c6cd64519 ("mwait_idle: Skylake Client Support") added a table
with 8 entries, which - together with C0 - rendered the current limit
too low. It should have been accompanied by an increase of the constant;
do this now. Don't bump by too much though, as there are a number of on-
stack arrays which are dimensioned by this constant.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org> Release-acked-by: Juergen Gross <jgross@suse.com>
sched: fix freeing per-vcpu data in sched_move_domain()
In case of an allocation error of per-vcpu data in sched_move_domain()
the already allocated data is freed just using xfree(). This is wrong
as some schedulers need to do additional operations (e.g. the arinc653
scheduler needs to remove the vcpu-data from a list).
So instead of xfree() make use of the sched_free_vdata() hook.
Jan Beulich [Wed, 25 Sep 2019 13:51:52 +0000 (15:51 +0200)]
SVM: correct CPUID event processing
hvm_monitor_cpuid() expects the input registers, not two of the outputs
(it was this way right from its introduction by commit d05f1eb374
["hvm/svm: implement CPUID events"]).
However, once having made the necessary adjustment, the SVM and VMX
functions are so similar that they should be folded (thus avoiding
further similar asymmetries to get introduced). Use the best of both
worlds by e.g. using "curr" consistently. This then being the only
caller of hvm_check_cpuid_faulting(), fold in that function as well.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Wed, 25 Sep 2019 13:50:58 +0000 (15:50 +0200)]
libxc/x86: correct overflow avoidance check in AMD CPUID handling
Commit df29d03f1d ("libxc/x86: avoid certain overflows in CPUID APIC ID
adjustments" introduced a one bit too narrow mask when checking whether
multiplying by 1 (in particular in leaf 1) would result in overflow.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Commit f855dd9625 "sched: add minimalistic idle scheduler for free cpus"
introduced the use of ZERO_BLOCK_PTR in the scheduler code. However, the
define does not exist outside of xmalloc_tlsf.c for non-x86 architectures.
This will result in a compilation error on Arm:
schedule.c: In function ‘sched_idle_alloc_vdata’:
schedule.c:100:12: error: ‘ZERO_BLOCK_PTR’ undeclared (first use in this function)
return ZERO_BLOCK_PTR;
^~~~~~~~~~~~~~
schedule.c:100:12: note: each undeclared identifier is reported only once for each function it appears in
schedule.c:101:1: error: control reaches end of non-void function [-Werror=return-type]
}
^
cc1: all warnings being treated as errors
To avoid the compilation error, the default definition for
ZERO_BLOCK_PTR is now moved to xen/config.h, allowing all the code to use
the define.
Fixes: f855dd9625 ('sched: add minimalistic idle scheduler for free cpus') Signed-off-by: Julien Grall <julien.grall@arm.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
sched: add minimalistic idle scheduler for free cpus
Instead of having a full blown scheduler running for the free cpus
add a very minimalistic scheduler for that purpose only ever scheduling
the related idle vcpu. This has the big advantage of not needing any
per-cpu, per-domain or per-scheduling unit data for free cpus and in
turn simplifying moving cpus to and from cpupools a lot.
Right now, CPUs that are not in any pool still belong to Pool-0's
scheduler. This forces us to make, within the scheduler, extra effort
to avoid actually running vCPUs on those.
In the case of Credit1, this also causes issues with weight
(re)distribution, as the number of CPUs available to the scheduler is
wrong.
This is described in the changelog of commit e7191920261d ("xen:
credit2: never consider CPUs outside of our cpupool").
This new scheduler will just use a common lock for all free cpus.
As this new scheduler is not user selectable don't register it as an
official scheduler, but just include it in schedule.c.
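Conceptually the new scheduler boils down to something like the sketch
below; names and signatures are simplified assumptions, not the actual
scheduler interface:

    /* Conceptual sketch only -- not the actual Xen implementation. */
    struct vcpu;
    struct vcpu *idle_vcpu_for_cpu(unsigned int cpu);   /* assumed helper */

    /* "do_schedule" equivalent: always pick the cpu's idle vcpu; a single
     * lock shared by all free cpus is enough, as there is nothing to
     * contend on. */
    struct vcpu *sched_idle_schedule(unsigned int cpu)
    {
        return idle_vcpu_for_cpu(cpu);
    }

    /* "alloc_vdata" equivalent: nothing to allocate, but a non-NULL token
     * has to be returned (the real code returns ZERO_BLOCK_PTR, see the
     * build fix above). */
    void *sched_idle_alloc_vdata(void)
    {
        return (void *)1;
    }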
Today a cpu which is removed from the system is taken directly from
Pool0 to the offline state. This will conflict with the new idle
scheduler, so remove it from Pool0 first. Additionally accept removing
a free cpu instead of requiring it to be in Pool0.
For the resume failed case we need to call the scheduler code for that
situation after the cpupool handling, so move the scheduler code into
a function and call it from cpupool_cpu_remove_forced() and remove the
CPU_RESUME_FAILED case from cpu_schedule_callback().
Note that we are now calling schedule_cpu_switch() in stop_machine
context, so we need to switch from spinlock_irq to spinlock_irqsave.
Jan Beulich [Tue, 24 Sep 2019 08:50:33 +0000 (10:50 +0200)]
libxc/x86: avoid certain overflows in CPUID APIC ID adjustments
Recent AMD processors may report up to 128 logical processors in CPUID
leaf 1. Doubling this value produces 0 (which OSes sincerely dislike),
as the respective field is only 8 bits wide. Suppress doubling the value
(and its leaf 0x80000008 counterpart) in such a case.
Note that while there's a similar overflow in intel_xc_cpuid_policy(),
that one is being left alone for now.
Note further that while it was considered to suppress the multiplication
by 2 altogether if the host topology already provides at least one bit
of thread ID within APIC IDs, it was decided to avoid more change here
than really needed at this point.
Also zap leaf 4 (and at the same time leaf 2) EDX output for AMD, as it
should have been from the beginning.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Alexandru Isaila [Tue, 24 Sep 2019 08:49:36 +0000 (10:49 +0200)]
x86/emulate: send vm_event from emulate
A/D bit writes (on page walks) can be considered benign by an introspection
agent, so receiving vm_events for them is a pessimization. We try here to
optimize by filtering these events out.
Currently, we are fully emulating the instruction at RIP when the hardware sees
an EPT fault with npfec.kind != npfec_kind_with_gla. This is, however,
incorrect, because the instruction at RIP might legitimately cause an
EPT fault of its own while accessing a _different_ page from the original one,
where A/D were set.
The solution is to perform the whole emulation, while ignoring EPT restrictions
for the walk part, and taking them into account for the "actual" emulating of
the instruction at RIP. When we send out a vm_event, we don't want the emulation
to complete, since in that case we won't be able to veto whatever it is doing.
That would mean that we can't actually prevent any malicious activity, instead
we'd only be able to report on it.
When we see a "send-vm_event" case while emulating, we need to first send the
event out and then suspend the emulation (return X86EMUL_RETRY).
After the emulation stops we'll call hvm_vm_event_do_resume() again after the
introspection agent treats the event and resumes the guest. There, the
instruction at RIP will be fully emulated (with the EPT ignored) if the
introspection application allows it, and the guest will continue to run past
the instruction.
A common example is if the hardware exits because of an EPT fault caused by a
page walk, p2m_mem_access_check() decides if it is going to send a vm_event.
If the vm_event was sent and handled such that the instruction at RIP is
run, that instruction might also hit a protected page and provoke a vm_event.
Now if npfec.kind == npfec_kind_in_gpt and d->arch.monitor.inguest_pagefault_disabled
is true then we are in the page walk case and we can do this emulation optimization
and emulate the page walk while ignoring the EPT, but don't ignore the EPT for the
emulation of the actual instruction.
In the first case we would have 2 EPT events, in the second case we would have
1 EPT event if the instruction at the RIP triggers an EPT event.
We use hvmemul_map_linear_addr() to intercept write access and
__hvm_copy() to intercept exec, read and write access.
A new return type was added, HVMTRANS_need_retry, in order to have all
the places that consume HVMTRANS* return X86EMUL_RETRY.
hvm_emulate_send_vm_event() can return false if there was no violation,
if there was an error from monitor_traps() or p2m_get_mem_access().
-ESRCH from p2m_get_mem_access() is treated as restricted access.
NOTE: hvm_emulate_send_vm_event() assumes the caller will enable/disable
arch.vm_event->send_event
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com> Acked-by: Paul Durrant <paul@xen.org> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Petre Pircalabu <ppircalabu@bitdefender.com>
Jan Beulich [Tue, 24 Sep 2019 08:48:44 +0000 (10:48 +0200)]
x86/traps: widen condition for logging top-of-stack
Despite -fno-omit-frame-pointer the compiler may omit the frame pointer,
often for relatively simple leaf functions. (To give a specific example,
the case I've run into this with is _pci_hide_device() and gcc 8.
Interestingly the even more simple neighboring iommu_has_feature() does
get a frame pointer set up, around just a single instruction. But this
may be a result of the size-of-asm() effects discussed elsewhere.)
Log the top-of-stack value if it looks valid _or_ if RIP looks invalid.
Also annotate all stack trace entries with a marker, to indicate their
origin:
R: register state
F: frame pointer based
S: raw stack contents
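As a rough sketch of the widened condition (is_active_kernel_text() is the
existing Xen helper; the print statement is simplified and the 'S' annotation
matches the raw-stack marker above):

    /* Log the top-of-stack slot if it looks like a code address, or if
     * RIP itself does not, in which case the slot may be the only hint. */
    if ( is_active_kernel_text(tos) || !is_active_kernel_text(regs->rip) )
        printk("   [<%p>] S %pS\n", _p(tos), _p(tos));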
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Tue, 24 Sep 2019 08:47:53 +0000 (10:47 +0200)]
x86/traps: guard top-of-stack reads
Nothing guarantees that the original frame's stack pointer points at
readable memory. Avoid a (likely nested) crash by attaching exception
recovery to the read (making it a single read at the same time). Don't
even invoke _show_trace() in case of a non-readable top slot.
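A sketch of the guarded-read idea, assuming a hypothetical helper (the real
code differs in detail): the access is covered by an exception table fixup so
a fault falls through with a zero value instead of crashing, and only a
single read of the slot is performed.

    static unsigned long guarded_read_tos(const unsigned long *sp)
    {
        unsigned long val = 0;

        /* If reading *sp faults, execution resumes at label 2 with val == 0. */
        asm volatile ( "1: mov %[sp], %[val]\n"
                       "2:\n"
                       _ASM_EXTABLE(1b, 2b)
                       : [val] "+r" (val)
                       : [sp] "m" (*sp) );

        return val;
    }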
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Anthony PERARD [Mon, 23 Sep 2019 13:26:52 +0000 (14:26 +0100)]
libxl: Fix build when LIBXL_API_VERSION is set
The compatibility function mistakenly called itself.
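Illustratively (the function name, version number and NULL `ao_how' below are
simplified, not the actual libxl.h content), the class of bug was a
compatibility wrapper that recursed into itself instead of forwarding to the
updated API:

    #if defined(LIBXL_API_VERSION) && LIBXL_API_VERSION < 0x041300
    static inline int libxl_domain_pause_compat(libxl_ctx *ctx, uint32_t domid)
    {
        /* Fixed: the wrapper previously ended up calling itself rather
         * than the new three-argument libxl_domain_pause(). */
        return libxl_domain_pause(ctx, domid, NULL);
    }
    #endif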
Fixes: 95627b87c3159928458ee586e8c5c593bdd248d8 Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
We want to limit the number of shared buffers that a guest can register in
OP-TEE. Every such buffer consumes Xen resources and we don't want a guest to
be able to exhaust Xen, so we choose an arbitrary limit on the number of
shared buffers.
xen/arm: optee: check for preemption while freeing shared buffers
Call hypercall_preempt_check() in the loop inside
optee_relinquish_resources() to increase hypervisor responsiveness in case
preemption is required.
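A minimal sketch of the pattern, with made-up list and helper names (the real
loop lives in optee_relinquish_resources()):

    /* Free shared buffers one by one, bailing out if we ought to yield. */
    list_for_each_entry_safe( shm, tmp, &ctx->shm_list, list )
    {
        free_shm_buf(ctx, shm);            /* illustrative helper */

        if ( hypercall_preempt_check() )
            return -ERESTART;              /* caller re-invokes us to continue */
    }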
xen/arm: optee: impose limit on shared buffer size
We want to limit the number of calls to lookup_and_pin_guest_ram_addr() per
request. There are two ways to do this: either preempt translate_noncontig()
or limit the size of a single shared buffer. It is quite hard to preempt
translate_noncontig(), because it is deeply nested, so we chose the second
option. We will allow 129 pages per shared buffer. This corresponds to the GP
(GlobalPlatform) standard, which requires that the size limit for a shared
buffer be at least 512kB. One extra page (the 129th) is needed to cope with
the fact that the user's buffer is not necessarily aligned to a page boundary.
Also, with this limitation OP-TEE still passes its own "xtest" test suite, so
this is okay for now.
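Expressed as a constant (sketch; the name and 4kB page size are assumptions
for illustration):

    /* 512kB / 4kB = 128 data pages, plus one page because the buffer may
     * start at an arbitrary offset within its first page. */
    #define MAX_SHM_BUFFER_PG   129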
Anthony PERARD [Fri, 20 Sep 2019 16:19:02 +0000 (17:19 +0100)]
tools/ocaml: Build fix following libxl API changes
The following libxl functions became asynchronous and gained an additional
`ao_how' parameter:
libxl_domain_pause()
libxl_domain_unpause()
libxl_send_trigger()
Adapt the ocaml binding.
Build tested only.
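The shape of the change on the C side is roughly the following (simplified,
not verbatim libxl.h):

    /* Each call now takes a libxl_asyncop_how describing how completion is
     * to be reported; passing NULL keeps the call synchronous. */
    int libxl_domain_pause(libxl_ctx *ctx, uint32_t domid,
                           const libxl_asyncop_how *ao_how);
    int libxl_domain_unpause(libxl_ctx *ctx, uint32_t domid,
                             const libxl_asyncop_how *ao_how);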
Fixes: edaa631ddcee665cdfae1cf6bc7492c791e01ef4 Fixes: 95627b87c3159928458ee586e8c5c593bdd248d8 Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
xen/arm: livepatch: Prevent CPUs to fetch stale instructions after livepatching
During livepatch, a single CPU takes care of applying the patch while all
the others wait for the action to complete. Each of them will then execute
arch_livepatch_post_action() once to flush the pipeline.
Per B2.2.5 "Concurrent modification and execution of instructions" in
DDI 0487E.a, flushing the instruction cache is not enough to ensure new
instructions are seen. All the PEs should also do an isb() to
synchronize the fetched instruction stream.
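A rough sketch of the fix (simplified; the instruction-cache maintenance
itself is done elsewhere while the patch is applied):

    void arch_livepatch_post_action(void)
    {
        /*
         * The patching CPU has already performed the cache maintenance;
         * every waiting PE must still synchronize its own fetched
         * instruction stream before executing the patched code.
         */
        isb();
    }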
xen/arm32: setup: Give a xenheap page to the boot allocator
After commit 6e3e771203 "xen/arm: setup: Relocate the Device-Tree later on
in the boot", the boot allocator will not receive any xenheap page (i.e.
mapped page) on Arm32.
However, the boot allocator implicitly relies on the first page it is given
already being mapped, and the change therefore breaks boot on Arm32.
The easiest fix for now is to give a xenheap page to the boot allocator. We
may want to rethink the interface in the future.
[stefano: fix grammar in commit message]
Fixes: 6e3e771203 ('xen/arm: setup: Relocate the Device-Tree later on in the boot') Signed-off-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Anthony PERARD [Tue, 13 Aug 2019 14:48:27 +0000 (15:48 +0100)]
libxlu: Handle += in config files
Handle += of both strings and lists.
If += is used for config options expected to be numbers, then a
warning is printed and the config option ignored (because xl ignores
config options with errors).
This is intended for development purposes, where a config option can be
modified on the `xl create' command line.
One could have a cmdline= in the cfg file and specify cmdline+= on the
`xl create' command line, appending to the value from the cfg file without
having to repeat the whole cmdline on the `xl' command line.
Or add an extra vif or disk by simply passing "vif += [ '', ];" on the `xl'
command line.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Anthony PERARD [Thu, 19 Sep 2019 16:52:24 +0000 (17:52 +0100)]
libxl_pci: Extract common part of *qemu_trad_watch_state_cb
Functions pci_add_qemu_trad_watch_state_cb and
pci_remove_qemu_trad_watch_state_cb are similar, so the common part is
extracted into a separate function.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Anthony PERARD [Thu, 30 May 2019 17:08:45 +0000 (18:08 +0100)]
libxl: Use ev_qmp in libxl_set_vcpuonline
Remove libxl__qmp_cpu_add since it's not used anymore.
The `cpumap' arg of libxl__set_vcpuonline_xenstore is constified.
The QMP command "query-cpus" is going to be called from different places, so
the code that parses the answer is moved into a separate function,
qmp_parse_query_cpus.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Anthony PERARD [Tue, 30 Jul 2019 14:56:30 +0000 (15:56 +0100)]
libxl_pci: Only check if qemu-dm is running in qemu-trad case
QEMU upstream (or qemu-xen) may not have set the "running" state in
xenstore. "running" with QEMU doesn't mean that the binary is running, it
means that the emulation has started. When adding a pci-passthrough device to
QEMU, we do so via QMP, so we get a direct answer as to whether QEMU is
running or not; there is no need to check ahead of time.
Moving the check so it is done only in the qemu-trad case makes the upcoming
changes simpler.
Anthony PERARD [Thu, 9 May 2019 17:08:09 +0000 (18:08 +0100)]
libxl_pci: Coding style of do_pci_add
do_pci_add is going to be asynchronous, so we start by having a single
path out of the function. All `return`s instead set rc and goto out.
While here, some use of `rc' was used to store the return value of
libxc calls, change them to store into `r'. Also, add the value of `r'
in the error message of those calls.
There were an `out' label that was use it seems to skip setting up the
IRQ, the label has been renamed to `out_no_irq'.
No functional changes.
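As an illustration of the convention described above (sketch only, not the
actual do_pci_add body; the wrapper function is hypothetical):

    static int do_pci_add_sketch(libxl__gc *gc, uint32_t domid, int irq)
    {
        libxl_ctx *ctx = libxl__gc_owner(gc);
        int rc;    /* libxl error code, the only value we return */
        int r;     /* raw return value from libxc / system calls */
        int pirq;

        r = xc_physdev_map_pirq(ctx->xch, domid, irq, &pirq);
        if (r < 0) {
            LOGD(ERROR, domid, "xc_physdev_map_pirq failed (r=%d)", r);
            rc = ERROR_FAIL;
            goto out;
        }

        rc = 0;
    out:
        return rc;
    }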
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>