With group scheduling enabled, if a vcpu of, say, domain A, is already
running on a CPU, the other CPUs of the group can only run vcpus of
that same domain. And in fact, we scan the runqueue and look for one.
But then what can happen is that vcpus of domain A take turns
switching between idle/blocked and running, and manage to keep the
vcpus of every other domain out of a group of CPUs for a long time, or
even indefinitely (impacting fairness, or causing starvation).
To avoid this, let's limit how deep we go along the runqueue in search
of a vcpu of domain A. That is, if we don't find one whose credits are
within a certain threshold of those of the vcpu at the top of the
runqueue, give up and keep the CPU idle.
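The idea can be sketched as follows (a simplified, self-contained model of the scan described above; the names and the threshold value are assumptions, not the actual Credit2 code):

```c
#include <stddef.h>

/* Assumed threshold, standing in for CSCHED2_MIN_TIMER. */
#define CREDIT_THRESHOLD 500

struct svc {
    int domid;
    int credit;   /* the runqueue is kept sorted by decreasing credit */
};

/*
 * Scan the runqueue for a vcpu of domain 'domid'. Give up (return -1,
 * i.e. keep the CPU idle) as soon as the credit gap from the vcpu at
 * the top of the runqueue exceeds the threshold.
 */
static int pick_from_runq(const struct svc *runq, size_t n, int domid)
{
    for ( size_t i = 0; i < n; i++ )
    {
        /* Scanning deeper would starve vcpus with more credits. */
        if ( runq[0].credit - runq[i].credit > CREDIT_THRESHOLD )
            return -1;
        if ( runq[i].domid == domid )
            return (int)i;
    }
    return -1;
}
```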
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
---
TODO:
- for now, CSCHED2_MIN_TIMER is what's used as threshold, but this can
use some tuning (e.g., it probably wants to be adaptive, depending on
how wide the coscheduling group of CPUs is, etc.)
Dario Faggioli [Fri, 24 Aug 2018 14:30:33 +0000 (16:30 +0200)]
xen: sched: Credit2 group-scheduling: tickling
When choosing which CPU should be poked to go pick up a vcpu from the
runqueue, take group-scheduling into account, if it is enabled.
Basically, we avoid tickling CPUs that, even if they are idle, are part
of coscheduling groups where vcpus of other domains (wrt the one waking
up) are already running. Instead, we actively try to tickle the idle
CPUs within the coscheduling groups where vcpus of the same domain are
currently running.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
---
TODO:
- deal with sched_smt_power_savings==true;
- optimize the search of appropriate CPUs to be tickled, most likely
using a per-domain data structure. That will spare us having to do
a loop.
Dario Faggioli [Fri, 24 Aug 2018 07:34:25 +0000 (09:34 +0200)]
xen: sched: Credit2 group-scheduling: selecting next vcpu to run
When choosing which vcpu to run next, on a CPU which is in a group
where other vcpus are already running, only consider vcpus of the same
domain as those that are already running.
This is as easy as, in runq_candidate(), while traversing the runqueue,
skipping the vcpus that do not satisfy the group-scheduling constraints.
And now that such constraints are actually enforced, also add an ASSERT()
that checks that we really respect them.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
---
TODO:
- Consider better the interactions between group-scheduling and
soft-affinity (in runq_candidate() @3481);
Dario Faggioli [Thu, 23 Aug 2018 15:04:41 +0000 (17:04 +0200)]
xen: sched: Credit2 group-scheduling: data structures
Group scheduling is, for us, when a certain group of CPUs can only
execute the vcpus of one domain, at any given time. What CPUs form the
groups can be defined pretty much arbitrarily, but they're usually built
after the system topology. E.g., core-scheduling is a pretty popular
form of group scheduling, where the CPUs that are SMT sibling threads
within one core are in the same group.
So, basically, core-scheduling means that, if we have one core with two
threads, we will never run dAv0 (i.e., vcpu 0 of domain A) and dBv2, on
these two threads. In fact, we either run dAv0 and dAv3 on them, or, if
there's only one of dA's vcpus that can run, then one of the threads stays
idle.
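In code, the constraint can be sketched like this (a minimal model with assumed names; the actual data structures are introduced later in the series):

```c
#include <stdbool.h>

#define GROUP_SIZE 2       /* e.g., two SMT siblings in one core */
#define DOMID_IDLE (-1)    /* marker: nothing running on that CPU */

struct cosched_grp {
    int running[GROUP_SIZE];   /* domid running on each CPU of the group */
};

/* May a vcpu of domain 'domid' start running on an idle CPU of 'g'? */
static bool group_admits(const struct cosched_grp *g, int domid)
{
    for ( int i = 0; i < GROUP_SIZE; i++ )
        if ( g->running[i] != DOMID_IDLE && g->running[i] != domid )
            return false;   /* another domain already owns the group */
    return true;
}
```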
Making Credit2 support core-scheduling is the main aim of this patch
series, but the implementation is general, and allows the user to choose
a different granularity/arrangement of the groups (such as, per-NUMA
node groups).
As per this commit only, just the boot command line parameter (to
enable, disable and configure the feature), the data structures and
the domain tracking logic are implemented.
This means that, until we implement the group scheduling logic, in
later commits, the result of such "what domain is running in this group"
logic (which can be seen via `xl debug-keys r') is not to be considered
correct.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
---
TODO:
- document credit2_group_sched in docs/misc/xen-command-line.markdown;
Dario Faggioli [Thu, 11 Oct 2018 10:20:55 +0000 (12:20 +0200)]
xen: sched: Credit2: generalize topology related bootparam handling
Right now, runqueue organization is the only bit of the scheduler that
uses such topology related information. But that may not be true
forever, i.e., there may be other boot parameters which take the same
"core", "socket", etc, strings as argument.
In fact, this is the case of the credit2_group_sched parameter,
introduced in later patches.
Therefore, let's:
- make the #define-s more general, i.e., RUNQUEUE -> TOPOLOGY;
- do the parsing outside of the specific function handling the
credit2_runqueue param.
While there, we also move "node" before "socket", so that we have them
ordered from the smallest to the largest, and we can do math with their
values. (Yes, I know, the relationship between node and socket is not
always that clear, but I've found boxes, like EPYC, with more than one
node in one socket, and I've never found one where two sockets are in
the same node, so...)
No functional change intended.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
Dario Faggioli [Fri, 5 Oct 2018 14:01:17 +0000 (16:01 +0200)]
xen: sched: Credit2: avoid looping too much (over runqueues) during load balancing
For doing load balancing between runqueues, we check the load of each
runqueue, select the one whose load is most "distant" from our own, and
then take the proper runq lock and attempt vcpu migrations.
If we fail to take such lock, we try again, and the idea was to give up
and bail if, during the checking phase, we can't take the lock of any
runqueue (see the comment near the 'goto retry;', in the middle of
balance_load()).
However, the variable that controls the "give up and bail" part is not
reset upon retries. Therefore, provided we did manage to check the load
of at least one runqueue during the first pass, if we can't get any
runq lock, we don't bail, but try again taking the lock of that same
runqueue (possibly more than once).
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
xen: sched: Credit2: during scheduling, update the idle mask before using it
Load balancing, when happening, at the end of a "scheduler epoch", can
trigger vcpu migration, which in its turn may call runq_tickle(). If the
cpu where this happens was idle, but we're now going to schedule a vcpu
on it, let's update the runq's idle cpus mask accordingly _before_ doing
load balancing.
Not doing that, in fact, may cause runq_tickle() to think that the cpu
is still idle, and tickle it to go pick up a vcpu from the runqueue,
which might be wrong, or at least suboptimal.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
--- Cc: George Dunlap <george.dunlap@citrix.com>
Roger Pau Monne [Wed, 10 Oct 2018 14:39:35 +0000 (16:39 +0200)]
tools/pvh: set coherent MTRR state for all vCPUs
Instead of just doing it for the BSP. This requires storing the
maximum number of possible vCPUs in xc_dom_image.
This has been a latent bug so far because PVH doesn't yet support
pci-passthrough, so the effective memory cache attribute is forced to
WB by the hypervisor. Note also that even without this in place vCPU#0
is preferred in certain scenarios in order to calculate the memory
cache attributes.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Wei Liu [Tue, 9 Oct 2018 14:57:08 +0000 (15:57 +0100)]
x86/vtd: fix IOMMU share PT destruction path
Commit 2916951c1 ("mm / iommu: include need_iommu() test in
iommu_use_hap_pt()") included need_iommu() in iommu_use_hap_pt, and
91d4eca7add ("mm / iommu: split need_iommu() into has_iommu_pt() and
need_iommu_pt_sync()") made things finer grained by splitting
need_iommu into three states.
The destruction path can't use iommu_use_hap_pt because, at the point
the platform op is called, the IOMMU either has already been switched
to, or has always been in, the disabled state, and the shared PT test
would always be false.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Kevin Tian <kevin.tian@intel.com>
George Dunlap [Wed, 10 Oct 2018 11:36:25 +0000 (12:36 +0100)]
libxl: Restore scheduling parameters after migrate in best-effort fashion
Commit 3b4adba ("tools/libxl: include scheduler parameters in the
output of xl list -l") added scheduling parameters to the set of
information collected by libxl_retrieve_domain_configuration(), in
order to report that information in `xl list -l`.
Unfortunately, libxl_retrieve_domain_configuration() is also called by
the migration / save code, and the results passed to the restore /
receive code. This meant scheduler parameters were inadvertently
added to the migration stream, without proper consideration for how to
handle corner cases. The result was that if migrating from a host
running one scheduler to a host running a different scheduler, the
migration would fail with an error like the following:
Luckily there's a fairly straightforward way to set parameters in a
"best-effort" fashion. libxl provides a single struct containing the
parameters of all schedulers, as well as a parameter specifying which
scheduler they are for; parameters not used by a given scheduler are
ignored. If you specify a specific scheduler,
libxl_domain_sched_params_set() will fail if the domain is running a
different one. However, if you pass LIBXL_SCHEDULER_UNKNOWN, it will
use the value of the current scheduler for that domain.
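The semantics can be modelled like this (a self-contained sketch, not the real libxl API; the names here are made up for illustration):

```c
enum sched { SCHED_UNKNOWN, SCHED_CREDIT, SCHED_CREDIT2 };

struct sched_params {
    enum sched sched;   /* which scheduler these parameters are for */
    int weight;         /* fields for other schedulers would be ignored */
};

/* Returns 0 on success, -1 if an explicitly requested scheduler
 * doesn't match the one the domain is actually running. */
static int sched_params_set(enum sched current, struct sched_params *p)
{
    if ( p->sched == SCHED_UNKNOWN )
        p->sched = current;   /* best effort: adopt the current scheduler */
    return p->sched == current ? 0 : -1;
}
```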
In domcreate_stream_done(), before calling libxl__build_post(), set
the scheduler to LIBXL_SCHEDULER_UNKNOWN. This will propagate
scheduler parameters from the previous instantiation on a best-effort
basis.
Signed-off-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Ian Jackson <ian.jackson@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Jan Beulich [Tue, 9 Oct 2018 14:27:59 +0000 (16:27 +0200)]
x86: put_page_from_l2e() should honor _PAGE_RW
56fff3e5e9 ("x86: nuke PV superpage option and code") has introduced a
(luckily latent only) bug here, in that it didn't make reference
dropping dependent on whether the page was mapped writable. The only
current source of large page mappings for PV domains is the Dom0
builder, which only produces writeable ones.
Take the opportunity and also convert to bool both put_data_page()'s
respective parameter and the argument put_page_from_l3e() passes.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Roger Pau Monné [Tue, 9 Oct 2018 14:27:13 +0000 (16:27 +0200)]
x86/vtd: fix iommu_share_p2m_table
Commit 2916951c1 "mm / iommu: include need_iommu() test in
iommu_use_hap_pt()" changed the check in iommu_share_p2m_table to use
need_iommu(d) (as part of iommu_use_hap_pt) instead of iommu_enabled,
which broke the check because at the point in domain construction
where iommu_share_p2m_table is called need_iommu(d) will always return
false.
Fix this by reverting to the previous logic.
While there turn the hap_enabled check into an ASSERT, since the only
caller of iommu_share_p2m_table already performs the hap_enabled check
before calling the function.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Daniel De Graaf [Tue, 9 Oct 2018 14:26:54 +0000 (16:26 +0200)]
flask: sort io{port,mem}con entries
These entries are not always sorted by checkpolicy, so sort them during
policy load (as is already done for later ocontext additions).
Reported-by: Nicolas Poirot <nicolas.poirot@bertin.fr> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov> Tested-by: Nicolas Poirot <nicolas.poirot@bertin.fr> Reviewed-by: Nicolas Poirot <nicolas.poirot@bertin.fr>
Jan Beulich [Tue, 9 Oct 2018 14:25:35 +0000 (16:25 +0200)]
x86/HVM: move vendor independent CPU save/restore logic to shared code
A few pieces of the handling here are (no longer?) vendor specific, and
hence there's no point in replicating the code. Zero the full structure
before calling the save hook, eliminating the need for the hook
functions to zero individual fields.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Yang Qian [Mon, 8 Oct 2018 03:10:14 +0000 (11:10 +0800)]
tools/ocaml: Release the global lock before invoking block syscalls
Functions related to event channels are parallelizable, so release the
global lock before invoking C functions which will ultimately make
blocking syscalls.
Signed-off-by: Yang Qian <yang.qian@citrix.com> Acked-by: Christian Lindig <christian.lindig@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Paul Durrant [Fri, 5 Oct 2018 14:47:10 +0000 (16:47 +0200)]
mm / iommu: split need_iommu() into has_iommu_pt() and need_iommu_pt_sync()
The name 'need_iommu()' is a little confusing as it suggests a domain needs
to use the IOMMU but something might not be set up yet, when in fact it
represents a tri-state value (not a boolean as might be expected) where
-1 means 'IOMMU mappings being set up' and 1 means 'IOMMU mappings have
been fully set up'.
Two different meanings are also inferred from the macro in various
places in the code:
- Some callers want to test whether a domain has IOMMU mappings at all
- Some callers want to test whether they need to synchronize the domain's
P2M and IOMMU mappings
This patch replaces the 'need_iommu' tri-state value with a defined
enumeration and adds a boolean flag 'need_sync' to separate these meanings,
and places both of these in struct domain_iommu, rather than directly in
struct domain.
This patch also creates two new boolean macros:
- 'has_iommu_pt()' evaluates to true if a domain has IOMMU mappings, even
if they are still under construction.
- 'need_iommu_pt_sync()' evaluates to true if a domain requires explicit
synchronization of the P2M and IOMMU mappings.
All callers of need_iommu() are then modified to use the macro appropriate
to what they are trying to test, except for the instance in
xen/drivers/passthrough/pci.c:assign_device() which has simply been
removed since it appears to be unnecessary.
NOTE: There are some callers of need_iommu() that strictly operate on
the hardware domain. In some of these cases a more global flag is
used instead.
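The shape of the change can be sketched like this (simplified; the enumerator names are assumptions based on the description above, not necessarily the ones in the patch):

```c
#include <stdbool.h>

/* The old tri-state need_iommu (-1/0/1) becomes an explicit enum... */
enum iommu_status {
    IOMMU_STATUS_disabled,
    IOMMU_STATUS_initializing,   /* mappings being set up (was -1) */
    IOMMU_STATUS_initialized,    /* mappings fully set up (was 1) */
};

/* ...living in struct domain_iommu together with the new boolean. */
struct domain_iommu {
    enum iommu_status status;
    bool need_sync;   /* must P2M and IOMMU mappings be kept in sync? */
};

/* Does the domain have IOMMU mappings, even if still being built? */
static bool has_iommu_pt(const struct domain_iommu *di)
{
    return di->status != IOMMU_STATUS_disabled;
}

/* Does the domain require explicit P2M/IOMMU synchronization? */
static bool need_iommu_pt_sync(const struct domain_iommu *di)
{
    return di->need_sync;
}
```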
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com>
Paul Durrant [Fri, 5 Oct 2018 14:36:56 +0000 (16:36 +0200)]
mm / iommu: include need_iommu() test in iommu_use_hap_pt()
The name 'iommu_use_hap_pt' suggests that the P2M table is in use as
the domain's IOMMU pagetable which, prior to this patch, was not
strictly true, since the macro did not test whether the domain actually
has IOMMU mappings.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: George Dunlap <george.dunlap@citrix.com>
Paul Durrant [Fri, 5 Oct 2018 14:35:23 +0000 (16:35 +0200)]
vtd: add lookup_page method to iommu_ops
This patch adds a new method to the VT-d IOMMU implementation to find the
MFN currently mapped by the specified DFN along with a wrapper function
in generic IOMMU code to call the implementation if it exists.
NOTE: This patch only adds a Xen-internal interface. This will be used by
a subsequent patch.
Another subsequent patch will add similar functionality for AMD
IOMMUs.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Jan Beulich [Fri, 5 Oct 2018 14:25:43 +0000 (16:25 +0200)]
pass-through: provide two !HVM stubs
Older gcc (4.3 in my case), despite eliminating pci_clean_dpci_irqs()
when !HVM, does not manage to also eliminate pci_clean_dpci_irq(). Cope
with this.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 5 Oct 2018 14:24:05 +0000 (16:24 +0200)]
x86: use VMLOAD for PV context switch
Having noticed that VMLOAD alone is about as fast as a single one of
the involved WRMSRs, I thought it might be a reasonable idea to also
use it
for PV. Measurements, however, have shown that an actual improvement can
be achieved only with an early prefetch of the VMCB (thanks to Andrew
for suggesting to try this), which I have to admit I can't really
explain. This way on my Fam15 box context switch takes over 100 clocks
less on average (the measured values are heavily varying in all cases,
though).
This is intentionally not using a new hvm_funcs hook: For one, this is
all about PV, and something similar can hardly be done for VMX.
Furthermore the indirect to direct call patching that is meant to be
applied to most hvm_funcs hooks would be ugly to make work with
functions having more than 6 parameters.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Brian Woods <brian.woods@amd.com> Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
if ( p2m_is_paging(p2mt) )
{
    if ( page )
        put_page(page);

    p2m_mem_paging_populate(d, gfn);
    return <-EAGAIN or equivalent>;
}

if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
{
    if ( page )
        put_page(page);

    return <-EAGAIN or equivalent>;
}

if ( !page )
    return <-EINVAL or equivalent>;
There are some small differences between the exact way the occurrences
are coded but the desired semantic is the same.
This patch introduces a new common implementation of this code in
check_get_page_from_gfn() and then converts the various open-coded patterns
into calls to this new function.
NOTE: A forward declaration of p2m_type_t enum has been introduced in
p2m-common.h so that it is possible to declare
check_get_page_from_gfn() there rather than having to add
duplicate declarations in the per-architecture p2m headers.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Roger Pau Monne <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com>
Paul Durrant [Fri, 5 Oct 2018 14:21:05 +0000 (16:21 +0200)]
iommu: push use of type-safe DFN and MFN into iommu_ops
This patch modifies the methods in struct iommu_ops to use type-safe DFN
and MFN. This follows on from the prior patch that modified the functions
exported in xen/iommu.h.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Roger Pau Monne <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Acked-by: Julien Grall <julien.grall@arm.com>
Paul Durrant [Fri, 5 Oct 2018 14:16:13 +0000 (16:16 +0200)]
iommu: make use of type-safe DFN and MFN in exported functions
This patch modifies the declaration of the entry points to the IOMMU
sub-system to use dfn_t and mfn_t in place of unsigned long. A subsequent
patch will similarly modify the methods in the iommu_ops structure.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Roger Pau Monne <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Andrew Cooper [Mon, 24 Sep 2018 10:39:46 +0000 (11:39 +0100)]
AMD/IOMMU: Drop get_field_from_byte()
It is MASK_EXTR() in disguise, but less flexible.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Brian Woods <brian.woods@amd.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Andrew Cooper [Mon, 24 Sep 2018 10:16:21 +0000 (11:16 +0100)]
AMD/IOMMU: Don't opencode memcpy() in queue_iommu_command()
In practice, this allows the compiler to replace the loop with a pair of movs.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Brian Woods <brian.woods@amd.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Wei Liu [Thu, 4 Oct 2018 16:37:56 +0000 (17:37 +0100)]
x86: fix !CONFIG_HVM build for clang 3.8
It was discovered that hvm_funcs made it into monitor.o even when HVM
is disabled. This version of clang doesn't seem to completely
eliminate the code after is_hvm_domain() in
arch_monitor_get_capabilities().
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Wed, 3 Oct 2018 13:11:20 +0000 (14:11 +0100)]
tools/ocaml: Delete the Xenctrl.with_intf wrapper
This wrapper hides an opening and closing of the xenctrl handle, which amongst
other things opens and closes multiple device files.
A process should create one handle at the start of day and reuse that; indeed
there is no guarantee that the process will retain sufficient permissions to
re-open /dev/xen/privcmd at a later point.
With the final user of Xenctrl.with_intf removed, drop the wrapper entirely.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
Andrew Cooper [Wed, 3 Oct 2018 09:32:54 +0000 (10:32 +0100)]
oxenstored: Don't re-open a xenctrl handle for every domain introduction
Currently, an xc handle is opened in main() which is used for cleanup
activities, and a new xc handle is temporarily opened every time a domain is
introduced. This is inefficient, and amongst other things, requires full root
privileges for the lifetime of oxenstored.
All code using the Xenctrl handle is in domains.ml, so initialise xc as a
global (now happens just before main() is called) and drop it as a parameter
from Domains.create and Domains.cleanup.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
Jan Beulich [Thu, 4 Oct 2018 12:55:01 +0000 (14:55 +0200)]
tools/xen-hvmctx: drop bogus casts from dump_lapic_regs()
The casts weren't even to the right type - all LAPIC registers are
32-bit (pairs/groups of registers may be combined to form larger logical
ones, but this is not visible in the given data representation).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Jan Beulich [Thu, 4 Oct 2018 12:54:48 +0000 (14:54 +0200)]
tools/xen-hvmctx: drop bogus casts from dump_cpu()
Also avoid printing the MSR flags (they're always zero as of commit 2f1add6e1c "x86/vmx: Don't leak host syscall MSR state into HVM
guests"), and print FPU registers only when the respective flag
indicates the space holds valid data.
Adjust format specifiers a little at the same time, in particular to
avoid printing at least some leading zeros in positions that can't
ever be non-zero. This helps readability, in my opinion.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Paul Durrant [Thu, 4 Oct 2018 12:50:41 +0000 (14:50 +0200)]
iommu: introduce the concept of DFN...
...meaning 'device DMA frame number' i.e. a frame number mapped in the IOMMU
(rather than the MMU) and hence used for DMA address translation.
This patch is a largely cosmetic change that substitutes 'dfn' and
'daddr' for the terms 'gfn' and 'gaddr' in all the places where the
frame number or address relates to a device rather than the CPU.
The parts that are not purely cosmetic are:
- the introduction of a type-safe declaration of dfn_t and definition of
INVALID_DFN to make the substitution of gfn_x(INVALID_GFN) mechanical.
- the introduction of __dfn_to_daddr and __daddr_to_dfn (and type-safe
variants without the leading __) with some use of the former.
Subsequent patches will convert code to make use of type-safe DFNs.
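A sketch of the type-safe pattern (following Xen's usual wrapper-struct idiom; details of the real patch may differ):

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* Wrapping the frame number in a struct makes accidental mixing of
 * dfn_t with gfn_t/mfn_t (or plain integers) a compile-time error. */
typedef struct { uint64_t dfn; } dfn_t;

#define _dfn(x)     ((dfn_t){ (x) })
#define dfn_x(d)    ((d).dfn)
#define INVALID_DFN _dfn(~0ULL)

typedef uint64_t daddr_t;

/* Raw variant, plus the type-safe one built on top of it. */
#define __dfn_to_daddr(d) ((daddr_t)(d) << PAGE_SHIFT)
#define dfn_to_daddr(d)   __dfn_to_daddr(dfn_x(d))
```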
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Julien Grall <julien.grall@arm.com> Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Jan Beulich [Thu, 4 Oct 2018 12:49:56 +0000 (14:49 +0200)]
x86: fix "xpti=" and "pv-l1tf=" yet again
While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") indeed fixed "xpti=dom0", it broke "xpti=no-dom0", in that
this then became equivalent to "xpti=no". In particular, the presence
of "xpti=" alone on the command line means nothing as to which default
is to be overridden; "xpti=no-dom0", for example, ought to have no
effect for DomU-s, as this is distinct from both "xpti=no-dom0,domu"
and "xpti=no-dom0,no-domu".
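The intended semantics can be sketched like this (a simplified model handling single tokens only; using -1 to mean 'keep the built-in default' is an assumption based on the description):

```c
#include <string.h>

/* -1: leave the built-in default; 0/1: explicitly disabled/enabled. */
static int opt_xpti_dom0 = -1, opt_xpti_domu = -1;

static void parse_xpti(const char *s)
{
    if ( !strcmp(s, "") )
        ;                                    /* overrides no default */
    else if ( !strcmp(s, "no") )
        opt_xpti_dom0 = opt_xpti_domu = 0;
    else if ( !strcmp(s, "dom0") )
        opt_xpti_dom0 = 1;
    else if ( !strcmp(s, "no-dom0") )
        opt_xpti_dom0 = 0;                   /* DomU default untouched */
    else if ( !strcmp(s, "domu") )
        opt_xpti_domu = 1;
    else if ( !strcmp(s, "no-domu") )
        opt_xpti_domu = 0;
}
```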
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Juergen Gross [Thu, 4 Oct 2018 11:47:24 +0000 (12:47 +0100)]
xentrace: handle sparse cpu ids correctly in xen trace buffer handling
The per-cpu buffers for Xentrace are addressed by cpu-id, but the info
array for the buffers is sized only by the number of online cpus. This
might lead to crashes when using Xentrace with smt=0.
The t_info structure has to be sized based on nr_cpu_ids.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Julien Grall [Mon, 1 Oct 2018 18:57:21 +0000 (19:57 +0100)]
tools/libxl: Switch Arm guest type to PVH
Currently, the toolstack always considers Arm guests to be PV. However,
they are very similar to PVH, because HW virtualization extensions are
used and QEMU is not started. So switch the Arm guest type to PVH.
To keep compatibility with toolstacks creating Arm guests with the PV
type (e.g libvirt), libxl will now convert those guests to PVH.
Furthermore, the default type for Arm in xl will now be PVH, to allow a
smooth transition for users.
Signed-off-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
The PV fields kernel, ramdisk, cmdline are only there for compatibility
with old toolstacks. Instead of manually copying them over to their new
fields, use the deprecated_by attribute in the IDL.
Suggested-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com>
Julien Grall [Mon, 1 Oct 2018 16:42:27 +0000 (17:42 +0100)]
xen/arm: vgic-v3: Don't create empty re-distributor regions
At the moment, Xen is assuming the hardware domain will have the same
number of re-distributor regions as the host. However, as the
number of CPUs or the stride (e.g on GICv4) may be different, we end up
exposing regions which do not contain any re-distributors.
When booting, Linux will go through all the re-distributor regions to
check whether a property (e.g vLPIs) is available across all the
re-distributors. This will result in a data abort on empty regions,
because there are no underlying re-distributors.
So we need to limit the number of regions exposed to the hardware
domain. The code is reworked to only expose the minimum number of
regions required by the hardware domain. It is assumed the regions will
be populated starting from the first one.
Lastly, rename vgic_v3_rdist_count to reflect the value returned by the
helper.
Julien Grall [Mon, 1 Oct 2018 16:42:26 +0000 (17:42 +0100)]
xen/arm: vgic-v3: Delay the initialization of the domain information
A follow-up patch will need to know the number of vCPUs when
initializing the vGICv3 domain structure. However, this information is
not available at domain creation. It is only known once
XEN_DOMCTL_max_vcpus is called for that domain.
In order to have the max vCPUs around, delay the domain part of the
vGICv3 initialization until the first vCPU of the domain is
initialized.
Roger Pau Monné [Tue, 2 Oct 2018 15:02:33 +0000 (17:02 +0200)]
amd/iommu: remove hidden AMD inclusive mappings
And just rely on arch_iommu_hwdom_init to setup the correct inclusive
mappings as it's done for Intel.
AMD has code in amd_iommu_hwdom_init to setup inclusive mappings up to
max_pdx, remove this since it's now a duplication of
arch_iommu_hwdom_init. Note that AMD mapped every page with a valid
mfn up to max_pdx, arch_iommu_hwdom_init will only do so for memory
below 4GB, so this is a functional change for AMD.
Move the default setting of iommu_hwdom_{inclusive/reserved} to
arch_iommu_hwdom_init since the defaults are now the same for both
Intel and AMD.
Reported-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Suravee Suthikulpanit <suravee.suthikupanit@amd.com>
Julien Grall [Mon, 1 Oct 2018 12:46:37 +0000 (13:46 +0100)]
xen/arm: cpufeature: Add helper to check constant caps
Some capabilities are set right during boot and will never change
afterwards. At the moment, the function cpu_have_caps checks whether
the cap is enabled with a load from memory.
It is possible to avoid that load by using an ALTERNATIVE. With that,
the check is reduced to just 1 instruction.
xen/arm: add SMC wrapper that is compatible with SMCCC v1.0
The existing SMC wrapper call_smc() allows only 4 parameters and
returns only one value. This is enough for the existing use in PSCI
code, but the TEE mediator will need a call that is fully compatible
with ARM SMCCC v1.0.
This patch adds a wrapper for both arm32 and arm64. In the case of
arm32, the wrapper is just an alias to the ARM SMCCC v1.1 as the
convention is the same.
CC: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
[julien: Rework the wrapper to make it closer to SMCC 1.1 wrapper] Signed-off-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
George Dunlap [Mon, 1 Oct 2018 16:14:22 +0000 (17:14 +0100)]
Revert "Make credit2 the default scheduler"
The migration code reads scheduler parameters on the sending side and
tries to set them again on the receiving side, failing if this fails;
the result is that a simple upgrade from 4.11 -> 4.12 will now fail
all migrations. Solving this is not simple; revert the credit2
upgrade until we can sort something out.
Marc Zyngier [Tue, 25 Sep 2018 17:20:38 +0000 (18:20 +0100)]
xen/arm: smccc-1.1: Make return values unsigned long
An unfortunate consequence of having a strong typing for the input
values to the SMC call is that it also affects the type of the
return values, limiting r0 to 32 bits and r{1,2,3} to whatever
was passed as an input.
Let's turn everything into "unsigned long", which satisfies the
requirements of both architectures, and allows for the full
range of return values.
xen/arm: vgic-v3-its: Make vgic_v3_its_free_domain idempotent
vgic_v3_its_free_domain may be called before vgic_v3_its_init_domain if
the vGIC failed to initialize itself. This means the list would be
uninitialized, resulting in a crash.
Thankfully, we only allow ITS for the hardware domain, so the crash is
not a security issue. Fix it by checking whether the list is NULL.
Wei Liu [Wed, 26 Sep 2018 10:52:54 +0000 (11:52 +0100)]
x86: make sure module array is large enough in pvh-boot.c
The relocation code in __start_xen requires one extra element in the
module array. By the looks of it the temporary array is already large
enough. Panic if that's not the case.
While at it, turn an ASSERT into a panic() as well.
Signed-off-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 28 Sep 2018 15:13:38 +0000 (17:13 +0200)]
x86: hap_enabled() is HVM-only
There are at least two cases where the field so far got accessed for PV
guests as well: One is in iommu_construct(), via iommu_use_hap_pt(),
and the other is
arch_domain_create()
-> paging_domain_init()
-> p2m_init()
-> p2m_init_hostp2m()
-> p2m_init_one()
-> p2m_initialise()
It just so happens that the field currently lives in struct hvm_domain
at an offset larger than sizeof(struct pv_domain).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Jan Beulich [Fri, 28 Sep 2018 15:12:14 +0000 (17:12 +0200)]
x86: silence false log messages for plain "xpti" / "pv-l1tf"
While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
parsing") claimed to have got rid of the 'parameter "xpti" has invalid
value "", rc=-22!' log message for "xpti" alone on the command line,
this wasn't the case (the option took effect nevertheless).
Fix this there as well as for plain "pv-l1tf".
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Ian Jackson [Tue, 18 Sep 2018 10:25:20 +0000 (11:25 +0100)]
stubdom/grub.patches: Drop docs changes, for licensing reasons
The patch file 00cvs is an import of a new upstream version of
grub1 from upstream CVS.
Unfortunately, in the period covered by the update, upstream changed
the documentation licence from a simple permissive licence, to the GNU
"Free Documentation Licence" with Front and Back Cover Texts.
The Debian Project is of the view that use of the Front and Back Cover
Texts feature of the GFDL makes the resulting document not Free
Software, because of the mandatory redistribution of these immutable
texts. (Personally, I agree.)
This is awkward because Debian do not want to ship non-free content.
So the Debian maintainers need to launder the upstream source code, to
remove the troublesome files. This is an extra step when
incorporating new upstream versions. It's particularly annoying for
security response, which often involves rebasing onto a new upstream
release.
grub1 is obsolete and the last change to Xen's PV grub1 stubdom code
was in 2016. Furthermore, the grub1 documentation is not built and
installed by the Xen pv-grub stubdom Makefiles.
Therefore, remove all docs changes from stubdom/grub.patches. This
means that there are now no longer any GFDL-licenced grub docs in
xen.git.
There is no user impact, and Debian is helped. This change would
complicate any attempts to update to a new version of upstream grub1,
but it seems unlikely that such a thing will ever happen.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> CC: Doug Goldstein <cardoe@cardoe.com> CC: Juergen Gross <jgross@suse.com> CC: pkg-xen-devel@lists.alioth.debian.org Acked-by: George Dunlap <george.dunlap@citrix.com> Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Yang Qian [Thu, 27 Sep 2018 07:53:04 +0000 (15:53 +0800)]
tools/ocaml: Add OCaml binding of virq bind
1. Add a common bind virq function
2. Reduce the stub code of `bind_dom_exc_virq`
Signed-off-by: Yang Qian <yang.qian@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
Daniel Kiper [Thu, 27 Sep 2018 10:05:07 +0000 (12:05 +0200)]
x86/boot: Allocate one extra module slot for Xen image placement
Commit 9589927 (x86/mb2: avoid Xen image when looking for
module/crashkernel position) fixed relocation issues for
Multiboot2 protocol. Unfortunately, it did not allocate a module
slot for Xen image placement in the early boot path.
So, let's fix it right now.
Reported-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Move p2m_{get/set}_suppress_ve() to p2m.c, replace incorrect
ASSERT() in p2m-pt.c (since a guest can run in shadow mode even on
a system with virt exceptions, which would trigger the ASSERT()),
move the VMX-isms (cpu_has_vmx_virt_exceptions checks) to
p2m_ept_{get/set}_entry(), and fix locking code in
p2m_get_suppress_ve().
Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
fuzz, test x86_emulator: disable sse before including always_inline fns
Work around the compiler rejecting SSE-using always_inline functions
that are defined before SSE is disabled.
Compiling with _FORTIFY_SOURCE or higher levels of optimization enabled
will always_inline several library fns (memset, memcpy, ...)
(with gcc 8.2.0 and glibc 2.28).
In fuzz and x86_emulator test, the compiler is instructed not
to generate SSE instructions via: #pragma GCC target("no-sse")
because those registers are needed for use by the workload.
The combination above causes compilation failure as the inline functions
use those instructions. This is resolved by reordering the inclusion of
<stdio.h> and <string.h> to after the pragma disabling SSE generation.
It would be preferable to locate the no-sse pragma within x86-emulate.h at the
top of the file, prior to including any other headers; unfortunately doing so
before <stdlib.h> causes compilation failure due to declaration of 'atof' with:
"SSE register return with SSE disabled".
Fortunately there is no (known) current dependency on any always_inline
SSE-inclined function declared in <stdlib.h> or any of its dependencies, so the
pragma is therefore issued immediately after inclusion of <stdlib.h> with a
comment introduced to explain its location there.
Add compile-time checks for unwanted prior inclusion of <string.h> and
<stdio.h>, which are the two headers that provide the library functions that
are handled with wrappers and listed within "x86-emulate.h" as ones "we think
might access any of the FPU state".
* Use standard-defined "EOF" macro to detect prior <stdio.h> inclusion.
* Use "_STRING_H" (non-standardized guard macro) as best-effort
for detection of prior <string.h> inclusion. This is non-universally
viable but will provide error output on common GLIBC systems, so
provides some defensive coverage.
Add a conditional #include <stdio.h> to x86-emulate.h, because fwrite,
printf, etc. are referenced when WRAP has been defined.
Signed-off-by: Christopher Clark <christopher.clark6@baesystems.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 17 Sep 2018 14:49:14 +0000 (15:49 +0100)]
xen: Disallow variable length arrays
Variable length arrays result in excess stack utilisation, with a risk
of stack overflow if the length is too large. It also results in fairly
poor asm generation, because of requiring a divide as part of the space
calculation.
Xen no longer has any variable length arrays, so take the opportunity to
formally disallow them.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com> Acked-by: Jan Beulich <jbeulich@suse.com>
Andrew Cooper [Mon, 17 Sep 2018 15:32:32 +0000 (16:32 +0100)]
x86/hvm: Adjust hvmemul_rep_stos() to compile with -Wvla
When using -Wvla, the typecast of buf triggers a Variable Length Array
warning. This is less than ideal, as this typecast doesn't occupy any stack
space, but we don't have a finer grain option to use.
Alter the asm expression to avoid the typecast, which necessitates the
introduction of a memory clobber as the compiler can no longer identify
the total quantity of written memory.
Despite the memory clobber, there is no change to the generated asm.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Andrew Cooper [Mon, 17 Sep 2018 15:30:53 +0000 (16:30 +0100)]
x86/PoD: Avoid using variable length arrays in p2m_pod_zero_check()
Callers of p2m_pod_zero_check() pass a count of up to POD_SWEEP_STRIDE.
Move the definition of POD_SWEEP_STRIDE and give the arrays a fixed
bound.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Andrew Cooper [Mon, 17 Sep 2018 15:21:53 +0000 (16:21 +0100)]
x86/PoD: Simplify handling of the quick check
There is no need to duplicate the contents of the skip block.
While cleaning up this function, change 4 ints to be unsigned.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Wei Liu <wei.liu2@citrix.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com>