xen/arm: boot with device trees with "mmu-masters" and "iommus"
Some Device Trees may expose both legacy SMMU and generic IOMMU bindings
together. However, the SMMU driver in Xen only supports the legacy
SMMU bindings, leading to fatal initialization errors at boot time.
This patch fixes the boot problem by adding a check to
iommu_add_dt_device: if the Xen driver doesn't support the new generic
bindings and the device is behind an IOMMU, do not return an error. The
subsequent call to iommu_assign_dt_device should then succeed.
This check will become superfluous, hence removable, once the Xen SMMU
driver gets support for the generic IOMMU bindings.
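A minimal sketch of the added check (the hook names add_device/dt_xlate and the exact return convention are assumptions for illustration, not necessarily the final Xen code):

    int iommu_add_dt_device(struct dt_device_node *np)
    {
        const struct iommu_ops *ops = iommu_get_ops();

        /*
         * The driver only implements the legacy "mmu-masters" bindings:
         * don't treat that as an error here, the subsequent
         * iommu_assign_dt_device() will take care of the device.
         */
        if ( !ops || !ops->add_device || !ops->dt_xlate )
            return 0;

        /* ... normal handling of the generic "iommus" bindings ... */
        return 0;
    }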
libxl: don't try to manipulate json config for stubdomain
Stubdomains do not have their own config file - their configuration is
derived from the target domain's. Do not try to manipulate it when
attaching a PCI device.
This bug prevented starting an HVM domain with a stubdomain and a PCI
passthrough device attached.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: attach PCI device to qemu only after setting pciback/pcifront
When qemu is running in a stubdomain, handling the "pci-ins" command will
fail if pcifront is not yet initialized. Fix this by sending such a
command only after confirming that pciback/pcifront is running.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: do not attach xen-pciback to HVM domain, if stubdomain is in use
HVM domains use the IOMMU and device model assistance for communicating
with PCI devices; xen-pcifront/pciback isn't directly needed by an HVM
domain. But pciback also serves a second function - it resets the device
when it is deassigned from the guest, and for this reason pciback needs
to be used with HVM domains too.
When an HVM domain has its device model in a stubdomain, attaching
xen-pciback to the target domain itself may prevent attaching xen-pciback
to the (PV) stubdomain, effectively breaking PCI passthrough.
Fix this by attaching pciback to only one domain: if a PV stubdomain is
in use, let it be the stubdomain (this commit prevents attaching the
device to the target HVM domain in that case); otherwise, attach it to
the target domain.
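A minimal model of that decision (an illustration of the rule above, not the actual libxl code):

    #include <stdbool.h>

    /* Attach xen-pciback to exactly one domain: the PV stubdomain when one
     * is in use, otherwise the target domain itself. */
    static bool attach_pciback_to_target(bool target_is_hvm, bool has_stubdomain)
    {
        if (target_is_hvm && has_stubdomain)
            return false;   /* pciback gets attached to the stubdomain instead */

        return true;
    }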
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: fix cold plugged PCI device with stubdomain
When libxl__device_pci_add() is called, the stubdomain is already
running, even while the target domain is still being constructed.
Previously, do_pci_add() was called with 'starting' hardcoded to false,
but now do_pci_add() shares the 'starting' flag in pci_add_state between
the target domain and the stubdomain.
Fix this by resetting the (local) 'starting' to false in pci_add_dm_done()
(previously part of do_pci_add()) when handling the stubdomain, regardless
of the pas->starting value.
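A minimal model of the resulting flag handling (illustrative only; the real code lives in pci_add_dm_done()):

    #include <stdbool.h>

    /* The stubdomain is already running when the device is added to it,
     * so ignore pas->starting in that case. */
    static bool effective_starting(bool pas_starting, bool handling_stubdomain)
    {
        return handling_stubdomain ? false : pas_starting;
    }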
Fixes: 11db56f9a6 (libxl_pci: Use libxl__ao_device with libxl__device_pci_add) Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Fri, 4 Oct 2019 15:57:03 +0000 (17:57 +0200)]
x86emul: adjust MOVSXD source operand handling
XED commit 1b2fd94425 ("Update MOVSXD to modern behavior") points out
that as of SDM rev 064 MOVSXD is specified to read only 16 bits from
memory (or register) when used without REX.W and with operand size
override. Since the upper 16 bits of the value read won't be used
anyway in this case, make the emulation uniformly follow this more
compatible behavior when not emulating an AMD-like CPU, at the risk
of missing an exception when emulating on/for older hardware (the
boundary at SandyBridge noted in said commit looks questionable - I've
observed the "new" behavior also on Westmere, and a discussion there
led to Mark finding that even Merom has this behavior already).
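To make the rule concrete, here is a small model of how many source bytes the emulator reads for MOVSXD (an illustration of the description above, not the emulator's actual code):

    #include <stdbool.h>

    static unsigned int movsxd_src_bytes(bool rex_w, bool osize_override,
                                         bool amd_like)
    {
        if (rex_w)
            return 4;                    /* 64-bit op size: sign-extend 32 bits */
        if (osize_override)
            return amd_like ? 4 : 2;     /* 16-bit op size: Intel reads 16 bits only */
        return 4;                        /* default 32-bit op size */
    }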
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Anthony PERARD [Mon, 30 Sep 2019 16:39:40 +0000 (17:39 +0100)]
libxl_pci: Fix guest shutdown with PCI PT attached
Before the problematic commit, libxl used to ignore errors when
destroying (force == true) a passthrough device. If the DM failed to
detach the PCI device within the allowed time, the timeout error caused
part of pci_remove_* to be skipped, but also raised the error up to the
caller of libxl__device_pci_destroy_all, libxl__destroy_domid, and thus
the destruction of the domain failed.
When a *pci_destroy* function is called (so we have force=true), errors
should mostly be ignored. If the DM didn't confirm that the device was
removed, we print a warning and keep going if force=true.
The patch reorders the functions so that pci_remove_timeout() calls
pci_remove_detatched() like it's done when DM calls are successful.
We also clean the QMP states and associated timeouts earlier, as soon
as they are not needed anymore.
Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Tested-by: Sander Eikelenboom <linux@eikelenboom.it> Release-acked-by: Juergen Gross <jgross@suse.com>
Anthony PERARD [Mon, 30 Sep 2019 15:35:52 +0000 (16:35 +0100)]
libxl_pci: Don't ignore PCI PT error at guest creation
Fixes: 11db56f9a6291 Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Juergen Gross [Wed, 2 Oct 2019 07:27:44 +0000 (09:27 +0200)]
xen/sched: add scheduling granularity enum
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu"; this can be modified via the new (x86-only) boot parameter
"sched-gran".
According to the selected granularity, sched_granularity is set after
all cpus are online.
A test is added verifying that all sched resources hold the same number
of cpus; fall back to core or cpu scheduling if that is not the case.
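A sketch of the shape of this change (the enum values follow the description above; the parser name and details are assumptions):

    enum sched_gran {
        SCHED_GRAN_cpu,
        SCHED_GRAN_core,
        SCHED_GRAN_socket
    };

    static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;

    /* Handler for the x86-only "sched-gran" boot parameter. */
    static int parse_sched_gran(const char *str)
    {
        if ( !strcmp(str, "cpu") )
            opt_sched_granularity = SCHED_GRAN_cpu;
        else if ( !strcmp(str, "core") )
            opt_sched_granularity = SCHED_GRAN_core;
        else if ( !strcmp(str, "socket") )
            opt_sched_granularity = SCHED_GRAN_socket;
        else
            return -EINVAL;

        return 0;
    }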
Juergen Gross [Wed, 2 Oct 2019 07:27:43 +0000 (09:27 +0200)]
xen/sched: disable scheduling when entering ACPI deep sleep states
When entering deep sleep states all domains are paused, resulting in
all cpus running only idle vcpus. This enables us to stop scheduling
completely in order to avoid synchronization problems with core
scheduling when individual cpus are offlined.
Disabling the scheduler is done by replacing the softirq handler with a
dummy scheduling routine that only allows tasklets to run.
Juergen Gross [Wed, 2 Oct 2019 07:27:42 +0000 (09:27 +0200)]
xen/sched: support core scheduling for moving cpus to/from cpupools
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Juergen Gross [Wed, 2 Oct 2019 07:27:41 +0000 (09:27 +0200)]
xen/sched: support differing granularity in schedule_cpu_[add/rm]()
With core scheduling active schedule_cpu_[add/rm]() has to cope with
different scheduling granularity: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new pdata and vdata and loop over
those where appropriate.
Additionally the scheduling resource(s) must either be merged or
split.
Juergen Gross [Wed, 2 Oct 2019 07:27:39 +0000 (09:27 +0200)]
xen/sched: protect scheduling resource via rcu
In order to be able to move cpus to cpupools with core scheduling
active it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires to modify
the cpu <-> scheduling resource relation. In order to be able to free
unused resources protect struct sched_resource via RCU. This ensures
there are no users left when freeing such a resource.
On- and offlining cpus with core scheduling is rather complicated as
the cpus are taken on- or offline one by one, but scheduling wants them
rather to be handled per core.
As the future plan is to be able to select scheduling granularity per
cpupool prepare that by storing the granularity in struct
sched_resource (we need it there for free cpus which are not
associated to any cpupool). Free cpus will always use granularity 1.
Store the selected granularity option (cpu, core or socket) in the
cpupool, as we will need it to select the appropriate cpu mask when
populating the cpupool with cpus.
This will make on- and offlining of cpus much easier and avoids writing
code which would need to be thrown away later.
Move the granularity related variables to cpupool.c as they are now
used from there only.
Juergen Gross [Wed, 2 Oct 2019 14:43:30 +0000 (16:43 +0200)]
xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in order to avoid
a deadlock: the vcpu to be put asleep must be forced through a
context switch without doing so for the calling vcpu. For this
purpose add a vcpu flag handled in sched_slave() and in
sched_wait_rendezvous_in() allowing a vcpu of the currently running
unit to switch state at a higher priority than a normal schedule
event.
Use the same mechanism when waking up a vcpu of a currently active
unit.
While at it make vcpu_sleep_nosync_locked() static as it is used in
schedule.c only.
Juergen Gross [Wed, 2 Oct 2019 07:27:32 +0000 (09:27 +0200)]
xen/sched: add fall back to idle vcpu when scheduling unit
When scheduling a unit with multiple vcpus there is no guarantee all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to
the idle vcpu of the current cpu in that case. This requires storing the
correct schedule_unit pointer in the idle vcpu as long as it is used as
a fallback vcpu.
In order to modify the runstates of the correct vcpus when switching
schedule units merge sched_unit_runstate_change() into
sched_switch_units() and loop over the affected physical cpus instead
of the unit's vcpus. This in turn requires an access function to the
current variable of other cpus.
Today context_saved() is called in case previous and next vcpus differ
when doing a context switch. With an idle vcpu being capable of
substituting for an offline vcpu this is problematic when switching to
an idle scheduling unit. An idle previous vcpu leaves us in doubt which
schedule unit was active previously, so save the previous unit pointer
in the per-schedule resource area. If it is NULL the unit has not
changed and we don't have to set the previous unit to be not running.
When running an idle vcpu in a non-idle scheduling unit use a specific
guest idle loop not performing any non-softirq tasklets and
livepatching in order to avoid populating the cpu caches with memory
used by other domains (as far as possible). Softirqs are considered to
be safe.
In order to avoid livepatching when going to guest idle another
variant of reset_stack_and_jump() not calling check_for_livepatch_work
is needed.
Juergen Gross [Wed, 2 Oct 2019 07:27:31 +0000 (09:27 +0200)]
xen/sched: add a percpu resource index
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Juergen Gross [Wed, 2 Oct 2019 07:27:30 +0000 (09:27 +0200)]
xen/sched: support allocating multiple vcpus into one sched unit
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the domain's vcpu linked
list, which isn't the case.
Juergen Gross [Wed, 2 Oct 2019 07:27:29 +0000 (09:27 +0200)]
xen/sched: modify cpupool_domain_cpumask() to be an unit mask
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus rename cpupool_domain_cpumask() to
cpupool_domain_master_cpumask() and let it return a cpumask with only
one bit set per scheduling resource.
Juergen Gross [Wed, 2 Oct 2019 07:27:28 +0000 (09:27 +0200)]
xen/sched: add support for multiple vcpus per sched unit where missing
In several places there is support for multiple vcpus per sched unit
missing. Add that missing support (with the exception of initial
allocation) and missing helpers for that.
Juergen Gross [Wed, 2 Oct 2019 07:27:27 +0000 (09:27 +0200)]
xen/sched: introduce unit_runnable_state()
Today the vcpu runstate of a newly scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being scheduled. So the vcpu's
new runstate has to be selected at the same time as the runnability of
the related schedule unit is probed.
For this purpose introduce a new helper unit_runnable_state() which
will save the new runstate of all tested vcpus in a new field of the
vcpu struct.
Juergen Gross [Wed, 2 Oct 2019 07:27:26 +0000 (09:27 +0200)]
xen/sched: add code to sync scheduling of all vcpus of a sched unit
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require to schedule the idle unit it is required to set the
tasklet_work_scheduled parameter of do_schedule() to true if any cpu
covered by the current schedule() call has any pending tasklet work.
For joining other vcpus of the schedule unit we need to add a new
softirq SCHED_SLAVE_SOFTIRQ in order to have a way to initiate a
context switch without calling the generic schedule() function
selecting the vcpu to switch to, as we already know which vcpu we
want to run. This has the additional advantage of not losing any other
concurrent SCHEDULE_SOFTIRQ events.
Julien Grall [Tue, 26 Mar 2019 21:31:16 +0000 (21:31 +0000)]
xen/arm: traps: Mark check_stack_alignment_constraints as unused
Clang will throw an error if a function is unused unless you tell it to
ignore it. This can be done using __maybe_unused.
While modifying the declaration, update it to match prototype of similar
functions (see build_assertions). This helps to understand that the sole
purpose of the function is to hold BUILD_BUG_ON().
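A self-contained illustration of the pattern (Xen's actual BUILD_BUG_ON and the asserted conditions differ):

    #define __maybe_unused __attribute__((__unused__))
    /* Build-time assertion: compiles away, fails to build if cond is true. */
    #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

    /* Exists only to hold build-time checks; never called, hence the
     * attribute so clang does not reject the unused static function. */
    static void __maybe_unused build_assertions(void)
    {
        BUILD_BUG_ON(sizeof(unsigned long) < 4);
    }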
Julien Grall [Wed, 27 Mar 2019 18:23:11 +0000 (18:23 +0000)]
xen/arm: mm: Mark check_memory_layout_alignment_constraints as unused
Clang will throw an error if a function is unused unless you tell it to
ignore it. This can be done using __maybe_unused.
While modifying the declaration, update it to match prototype of similar
functions (see build_assertions). This helps to understand that the sole
purpose of the function is to hold BUILD_BUG_ON().
Julien Grall [Tue, 26 Mar 2019 21:26:57 +0000 (21:26 +0000)]
xen/arm: cpufeature: Match register size with value size in cpus_have_const_cap
Clang is pickier than GCC about the register size in asm statements. It
expects the register size to match the value size.
The asm statement expects a 32-bit (resp. 64-bit) value on Arm32
(resp. Arm64) whereas the value is a boolean (which Clang considers to
be 32-bit).
It would be possible to impose a 32-bit register for both architectures,
but this would require the code to use __OP32. However, it does not
really improve the generated assembly. Instead, switch the variable to
use register_t.
Julien Grall [Tue, 26 Mar 2019 20:53:09 +0000 (20:53 +0000)]
xen/arm: cpuerrata: Match register size with value size in check_workaround_*
Clang is pickier than GCC about the register size in asm statements. It
expects the register size to match the value size.
The asm statement expects a 32-bit (resp. 64-bit) value on Arm32
(resp. Arm64) whereas the value is a boolean (which Clang considers to
be 32-bit).
It would be possible to impose a 32-bit register for both architectures,
but this would require the code to use __OP32. However, it does not
really improve the generated assembly. Instead, switch the variable to
use register_t.
Julien Grall [Tue, 26 Mar 2019 20:30:05 +0000 (20:30 +0000)]
xen/arm64: bitops: Match the register size with the value size in flsl
Clang is pickier than GCC about the register size in asm statements. It
expects the register size to match the value size.
The instruction clz expects its two operands to be the same size
(i.e. 32-bit or 64-bit). As the flsl function is dealing with a 64-bit
value, we need to make the destination variable 64-bit as well.
While at it, add a newline before the return statement.
Note that the return type of flsl is not updated because the result will
always be smaller than 64 and therefore fit in 32-bit.
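A sketch of the resulting Arm64 helper (simplified relative to the actual Xen code; Arm64 only):

    #include <stdint.h>

    static inline int flsl(unsigned long x)
    {
        uint64_t ret;   /* 64-bit destination so clz sees matching operand sizes */

        asm("clz %0, %1" : "=r" (ret) : "r" (x));

        return 64 - ret;   /* clz of 0 is 64, so flsl(0) is 0 */
    }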
Julien Grall [Tue, 26 Mar 2019 21:52:20 +0000 (21:52 +0000)]
xen/arm: fix get_cpu_info() when built with clang
Clang understands the GCCism in use here, but still complains that sp is
uninitialised. In such cases, resort to the older versions of this code,
which directly read sp into the temporary variable.
Note that the GCCism is still kept as the default because other compilers
(e.g. clang) may also define __GNUC__, so AFAIK there is no proper way to
detect GCC.
This means that in the event Xen is ported to a new compiler, the code
will need to be updated. But that is likely not going to be the only
place where Xen will need to be adapted...
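A sketch of the two variants (struct cpu_info and STACK_SIZE stand in for the real Xen definitions; the clang branch reads sp explicitly):

    static inline struct cpu_info *get_cpu_info(void)
    {
    #if defined(__clang__)
        unsigned long sp;

        /* Older, explicit form: the temporary is visibly initialised. */
        asm ("mov %0, sp" : "=r" (sp));
    #else
        /* GCCism: bind the local variable directly to the sp register. */
        register unsigned long sp asm ("sp");
    #endif

        return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) +
                                   STACK_SIZE - sizeof(struct cpu_info));
    }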
Ian Jackson [Wed, 2 Oct 2019 15:55:47 +0000 (16:55 +0100)]
libxl: create: style: Add a pair of missing { }
From CODING_STYLE:
Every indented statement is braced, but blocks that contain just one
statement may have the braces omitted. To avoid confusion, either all
the blocks in an if...else chain have braces, or none of them do.
CC: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
This is because libxl__domain_create_info_setdefault() currently only sets
an appropriate value for 'passthrough' in the case that 'cap_hvm_directio'
is true, which is not the case unless an IOMMU is present and enabled in
the system. This is normally masked by xl choosing a default value, but
that will not happen if xl is not used (e.g. when using libvirt) or when
a stub domain is being created.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Paul Durrant [Tue, 1 Oct 2019 14:57:13 +0000 (15:57 +0100)]
libxl: replace 'enabled' with 'unknown' in libxl_passthrough enumeration
This is mostly a cosmetic patch to avoid the default enumeration value
being 'enabled'. The only non-cosmetic parts are in xl_parse.c where it now
becomes necessary to explicitly parse the 'enabled' value for xl.cfg
'passthrough' option, and error on the value 'unknown', because there is no
longer a direct mapping between valid xl.cfg values and the enumeration.
Suggested-by: Ian Jackson <ian.jackson@eu.citrix.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Roger Pau Monne [Tue, 1 Oct 2019 15:22:33 +0000 (17:22 +0200)]
libxl: wait for the ack when issuing power control requests
Currently only suspend power control requests wait for an ack from the
domain, while power off or reboot requests simply write the command to
xenstore and exit.
Introduce a 1 minute wait for the domain to acknowledge the request, or
else return an error. The suspend code is slightly modified to use the
new infrastructure added, but shouldn't have any functional change.
Fix the ocaml bindings and also provide a backwards compatible
interface for the reboot and poweroff libxl API functions.
Reported-by: Ross Lagerwall <ross.lagerwall@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
[ wei: change ret to rc to fix build ] Signed-off-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 2 Oct 2019 11:38:02 +0000 (13:38 +0200)]
tools/xen-cpuid: avoid producing bogus output
I was (mistakenly, as - looking at the code - it's clearly not intended
to work) passing the tool "Raw" and "Host" as command line arguments.
Avoid printing just "Raw " with not even a newline at the end in
such a case. Instead report what wasn't understood by the parsing logic.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Wed, 2 Oct 2019 11:37:43 +0000 (13:37 +0200)]
MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
Along the lines of other x86-specific pieces under tools/.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Sergey Dyasli [Wed, 2 Oct 2019 11:35:44 +0000 (13:35 +0200)]
microcode: rendezvous CPUs in NMI handler and load ucode
When one core is loading ucode, handling NMI on sibling threads or
on other cores in the system might be problematic. By rendezvousing
all CPUs in the NMI handler, NMI acceptance during ucode loading is
prevented.
Basically, some work previously done in stop_machine context is
moved to NMI handler. Primary threads call in and load ucode in
NMI handler. Secondary threads wait for the completion of ucode
loading on all CPU cores. An option is introduced to disable this
behavior.
The control thread doesn't rendezvous in the NMI handler by calling
self_nmi() (in case of unknown_nmi_error() being triggered). The side
effect is that the control thread might be handling an NMI while other
threads are loading ucode. If a ucode update changes something shared by
a whole socket, the control thread may be accessing things that are
being updated by the ucode loading on other cores. That is not safe.
Update ucode on the control thread first to mitigate this issue.
Igor Druzhinin [Tue, 1 Oct 2019 19:15:57 +0000 (20:15 +0100)]
x86/crash: force unlock console before printing on kexec crash
There is a small window where the shootdown NMI might arrive at a CPU
(e.g. in the serial interrupt handler) while the console lock is taken.
In order not to leave subsequent console prints waiting indefinitely for
the shot-down CPUs to free the lock - force-unlock the console.
The race has been frequently observed while crashing nested Xen in
an HVM domain.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
xen/arm: Implement workaround for Cortex A-57 and Cortex A72 AT speculate
Both Cortex-A57 (erratum 1319537) and Cortex-A72 (erratum 1319367) can
end up with corrupted TLBs if they speculate an AT instruction while the
S1/S2 system registers are in an inconsistent state.
The workaround is the same as for Cortex-A76, implemented by commit a18be06aca "xen/arm: Implement workaround for Cortex-A76 erratum 1165522",
so it is only necessary to plumb it into the cpuerrata framework.
Julien Grall [Wed, 21 Aug 2019 21:42:31 +0000 (22:42 +0100)]
xen/arm: domain_build: Don't continue if unable to allocate all dom0 banks
Xen will only print a warning if there is unallocated memory when using
the 1:1 mapping (only used by dom0). This also includes the case where no
memory has been allocated at all.
This can lead to all sorts of issues that are hard for users to diagnose
(the warning can be difficult to spot or may be disregarded).
If the user requests 1GB of memory, then most likely they want the exact
amount and not 512MB. So panic if not all of the memory has been
allocated.
After this change, the behavior is the same as for non-1:1 memory
allocation (used by domU).
At the same time, reflow the message to have the format on a single
line.
Julien Grall [Fri, 9 Aug 2019 12:59:15 +0000 (13:59 +0100)]
xen/arm: p2m: Free the p2m entry after flushing the IOMMU TLBs
When freeing a p2m entry, all the sub-tree behind it will also be freed.
This may include intermediate page-tables or any l3 entry requiring to
drop a reference (e.g for foreign pages). As soon as pages are freed,
they may be re-used by Xen or another domain. Therefore it is necessary
to flush *all* the TLBs beforehand.
While CPU TLBs will be flushed before freeing the pages, this is not
the case for IOMMU TLBs. This can be solved by moving the IOMMU TLBs
flush earlier in the code.
This wasn't considered as a security issue as device passthrough on Arm
is not security supported.
xen/arm: domain_build: Avoid implicit conversion from ULL to UL
Clang 8.0 will fail to build domain_build.c on Arm32 because of the
following error:
domain_build.c:448:21: error: implicit conversion from 'unsigned long long' to 'unsigned long' changes value from 1090921693184 to 0
[-Werror,-Wconstant-conversion]
bank_size = MIN(GUEST_RAM1_SIZE, kinfo->unassigned_mem);
Arm32 is able to support more than 4GB of physical memory, so it would
be theoretically possible to create a domain with more than 4GB of RAM.
Therefore, the size of a bank may not fit in 32-bit.
This can be resolved by switching the variable bank_size and the
parameter tot_size to "paddr_t".
GAS 2.25.0 throws multiple errors when building arm32/head.S:
arm32/head.S: Assembler messages:
arm32/head.S:452: Error: invalid constant (f7f) after fixup
arm32/head.S:453: Error: invalid constant (f7f) after fixup
arm32/head.S:495: Error: invalid constant (f7f) after fixup
arm32/head.S:510: Error: invalid constant (f7f) after fixup
arm32/head.S:514: Error: invalid constant (f7f) after fixup
arm32/head.S:516: Error: invalid constant (f7f) after fixup
arm32/head.S:633: Error: invalid constant (f7f) after fixup
This makes sense because the instruction mov is only able to deal with a
specific set of immediates (see "modified immediate constants in ARM
instructions"). For any 16-bit immediate, the instruction movw should be
used.
It looks like newer versions of GAS will switch to movw if the immediate
does not fit the immediate encoding for mov, but we should not rely on
this. So switch to movw.
Fixes: 23dfe48d10 ("xen/arm32: head: Introduce macros to create table and mapping entry") Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Jan Beulich [Mon, 30 Sep 2019 13:46:24 +0000 (15:46 +0200)]
x86: correct bogus error indicator of cpu_add()
Commit 54ce2db8b8 ("x86/numa: adjust datatypes for node and pxm")
changed this from the -1 (i.e. -EPERM, which was already bogus) that
comes back from setup_node() to NUMA_NO_NODE (0xff). Use a proper error
indicator instead.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Mon, 30 Sep 2019 13:45:16 +0000 (15:45 +0200)]
x86emul: move ARPL #UD check
The #UD for being outside of protected mode gets raised for ARPL only
after having read the memory operand - correct this by moving up the
respective construct.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Tue, 3 Sep 2019 13:58:08 +0000 (15:58 +0200)]
ns16550: make PCI device hiding uniform
The difference between pci_hide_device() and pci_ro_device() is that
the former only prevents a device from getting assigned to a guest,
while the latter additionally arranges for Dom0 write attempts to the
device's config space to be ignored/discarded. Whether we want one or
the other certainly doesn't depend on whether the device is in our set
of known devices. All that matters is whether we use a PCI device: Call
pci_ro_device() in any such case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com>
xen/sched: move struct task_slice into struct sched_unit
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the single
schedulers add the pointer of the currently running unit as a parameter
of do_schedule().
While at it switch the tasklet_work_scheduled parameter of
do_schedule() from bool_t to bool.
As struct task_slice is only ever modified with the local schedule
lock held it is safe to directly set the different units in struct
sched_unit instead of using an on-stack copy for returning the data.
xen/sched: Change vcpu_migrate_*() to operate on schedule unit
vcpu_migrate_start() and vcpu_migrate_finish() are used only to ensure
a vcpu is running on a suitable processor, so they can be switched to
operate on schedule units instead of vcpus.
While doing that rename them accordingly.
Call vcpu_sync_execstate() for each vcpu of the unit when changing
processors in order to make that an explicit action (otherwise this
would happen later when either the vcpu is scheduled on the new
processor or another non-idle vcpu is scheduled on the old processor).
vcpu_move_locked() is switched to schedule unit, too.
xen/sched: add runstate counters to struct sched_unit
Add counters to struct sched_unit summing up runstates of associated
vcpus. This allows doing quick checks whether a unit has any vcpu
running or whether only a single vcpu of a unit is running.
xen/sched: switch schedule() from vcpus to sched_units
Use sched_units instead of vcpus in schedule(). This includes the
introduction of sched_unit_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
xen/sched: use sched_resource cpu instead smp_processor_id in schedulers
Especially in the do_schedule() functions of the different schedulers
using smp_processor_id() for the local cpu number is correct only if
the sched_unit is a single vcpu. As soon as larger sched_units are
used most uses should be replaced by the master_cpu number of the local
sched_resource instead.
Add a helper to get that sched_resource master_cpu and modify the
schedulers to use it in a correct way.
Today there are two distinct scenarios for vcpu_create(): either for
creation of idle-domain vcpus (vcpuid == processor) or for creation of
"normal" domain vcpus (including dom0), where the caller selects the
initial processor on a round-robin scheme of the allowed processors
(allowed being based on cpupool and affinities).
Instead of passing the initial processor to vcpu_create() and passing
on to sched_init_vcpu() let sched_init_vcpu() do the processor
selection. For supporting dom0 vcpu creation use the node_affinity of
the domain as a base for selecting the processors. User domains will
have initially all nodes set, so this is no different behavior compared
to today. In theory this is not guaranteed as vcpus are created only
with XEN_DOMCTL_max_vcpus being called, but this call is going to be
removed in future and the toolstack doesn't call
XEN_DOMCTL_setnodeaffinity before calling XEN_DOMCTL_max_vcpus.
To be able to use const struct domain * make cpupool_domain_cpumask()
take a const domain pointer, too.
A further simplification is possible by having a single function for
creating the dom0 vcpus with vcpu_id > 0 and doing the required pinning
for all vcpus after that. This allows making sched_set_affinity()
private to schedule.c and switching it to sched_units easily. Note that
this functionality is x86 only.
xen: add sched_unit_pause_nosync() and sched_unit_unpause()
The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
today. Add sched_unit_pause_nosync() and sched_unit_unpause() to
perform the same operations on scheduler units instead.
xen/sched: add is_running indicator to struct sched_unit
Add an is_running indicator to struct sched_unit which will be set
whenever the unit is being scheduled. Switch scheduler code to use
unit->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_unit being updated whenever the is_running indicator is changed.
Use that new field in the schedulers instead of the similar vcpu field.
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- is_unit_online() similar to is_vcpu_online() (returns true when any
of its vcpus is online)
- unit_runnable() like vcpu_runnable() (returns true if any of its
vcpus is runnable)
- sched_set_res() to set the current scheduling resource of a unit
- sched_unit_master() to get the current processor of a unit (returns
the master_cpu of the scheduling resource of a unit)
- sched_{set|clear}_pause_flags[_atomic]() to modify pause_flags of the
associated vcpu(s) (modifies the pause_flags of all vcpus of the
unit)
- sched_idle_unit() to get the sched_unit pointer of the idle vcpu of a
specific physical cpu
xen/sched: move some per-vcpu items to struct sched_unit
Affinities are scheduler specific attributes, they should be per
scheduling unit. So move all affinity related fields in struct vcpu
to struct sched_unit. While at it switch affinity related functions in
sched-if.h to use a pointer to sched_unit instead of vcpu as parameter.
The affinity_broken flag must be kept per vcpu as it is related to
guest actions on specific vcpus. When support of multiple vcpus per
sched_unit is being added, a unit is regarded as being subject to
"broken affinity" when any of its vcpus has the affinity_broken flag
set.
xen/sched: move per cpu scheduler private data into struct sched_resource
This prepares support of larger scheduling granularities, e.g. core
scheduling.
While at it move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
into sched.h removing the need for including sched-if.h in cpuidle.h.
For that purpose remove urgent_count from the scheduler private data
and make it a plain percpu variable.
xen/sched: switch schedule_data.curr to point at sched_unit
In preparation of core scheduling let the percpu pointer
schedule_data.curr point to a struct sched_unit instead of the related
vcpu. At the same time rename the per-vcpu scheduler specific structs
to per-unit ones.
xen/sched: let pick_cpu return a scheduler resource
Instead of returning a physical cpu number let pick_cpu() return a
scheduler resource instead. Rename pick_cpu() to pick_resource() to
reflect that change.
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each scheduler unit
running is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each core or socket only.
xen/sched: build a linked list of struct sched_unit
In order to make it easy to iterate over sched_unit elements of a
domain, build a single linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator for_each_sched_unit_vcpu() which
will iterate over all vcpus of a sched_unit (right now only one). This
will be needed later for larger scheduling granularity (e.g. cores).
xen/sched: use new sched_unit instead of vcpu in scheduler interfaces
In order to prepare core- and socket-scheduling use a new struct
sched_unit instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the change of the parameter.
In the schedulers rename local functions switched to sched_unit, too.
Rename alloc_vdata and free_vdata functions to alloc_udata and
free_udata.
For now this new struct will contain a domain, a vcpu pointer and a
unit_id only and is allocated at vcpu creation time.
Simon Gaiser [Fri, 27 Sep 2019 13:04:08 +0000 (15:04 +0200)]
x86: allow stubdom access to irq created for msi
Stubdomains need to be given sufficient privilege over the guest which it
provides emulation for in order for PCI passthrough to work correctly.
When a HVM domain try to enable MSI, QEMU in stubdomain calls
PHYSDEVOP_map_pirq, but later it needs to call XEN_DOMCTL_bind_pt_irq as
part of xc_domain_update_msi_irq. Give the stubdomain enough permissions
over the mapped interrupt in order to bind it successfully to it's
target domain.
This is not needed for PCI INTx, because IRQ in that case is known
beforehand and the stubdomain is given permissions over this IRQ by
libxl__device_pci_add (there's a do_pci_add against the stubdomain).
create_irq() already grants IRQ access to hardware_domain, on the
assumption that the device model lives there.
Modify create_irq() to take an additional parameter indicating whether
to grant permissions to the domain creating the IRQ, which may be dom0
or a stubdomain. Do this instead of always granting access to
hardware_domain. Save the ID of the domain given permission, to revoke
it in destroy_irq() - easier and cleaner than replaying the logic of the
create_irq() parameter. Use a domid instead of an actual reference to
the domain, because the domain might get destroyed before the IRQ is
destroyed (a stubdomain is destroyed before its target domain). This is
not an issue, because IRQ permissions live within the domain structure,
so destroying a domain also implicitly revokes the permission. Potential
domid reuse is detected by checking whether that domain does have
permission over the IRQ being destroyed.
Then, adjust all callers to provide the parameter. In case of Xen
internal allocations, set it to false, but for domain-accessible
interrupts set it to true.
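A sketch of the changed interface and permission handling (creator_domid and the allocation helper are illustrative names; irq_permit_access()/irq_deny_access() are the existing Xen helpers):

    int create_irq(nodeid_t node, bool grant_access)
    {
        int irq = allocate_irq_desc(node);   /* placeholder for existing allocation */

        if ( irq > 0 && grant_access &&
             !irq_permit_access(current->domain, irq) )
            /* Remember who was granted access so destroy_irq() can revoke it. */
            irq_to_desc(irq)->creator_domid = current->domain->domain_id;

        return irq;
    }

    void destroy_irq(unsigned int irq)
    {
        struct domain *d = rcu_lock_domain_by_id(irq_to_desc(irq)->creator_domid);

        if ( d )
        {
            /* If the domid was reused, the new domain has no permission over
             * this IRQ and nothing is revoked. */
            if ( irq_access_permitted(d, irq) )
                irq_deny_access(d, irq);
            rcu_unlock_domain(d);
        }

        /* ... existing teardown ... */
    }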
Inspired by https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch by Eric Chanudet <chanudete@ainfosec.com>.
Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com>
xen/arm: Restrict "p2m_ipa_bits" according to the IOMMU requirements
There is a strict requirement for an IOMMU that wants to share the P2M
table with the CPU: the IOMMU's Stage-2 input size must be equal to the
P2M IPA size. It is not a problem when the IOMMU can support all values
the CPU supports. In that case, the IOMMU driver would just use any
"p2m_ipa_bits" value as is. But there are cases when it cannot.
In order to make P2M sharing possible on platforms whose IOMMUs have a
limitation in the maximum Stage-2 input size, introduce the following
logic.
First initialize the IOMMU subsystem and gather requirements regarding
the maximum IPA bits supported by each IOMMU device to figure out the
minimum value among them. In the P2M code, take the IOMMU requirements
into account and choose a suitable "pa_range" according to the
restricted "p2m_ipa_bits".
microcode_update_lock is there to prevent logical threads of the same
core from updating microcode at the same time. But due to using a global
lock, it also prevented parallel microcode updating on different cores.
Remove this lock in order to update microcode in parallel. It is safe
because we have already ensured serialization of sibling threads at the
caller side.
1. For late microcode update, do_microcode_update() ensures that only one
sibling thread of a core can update microcode.
2. For microcode update during system startup or CPU-hotplug,
microcode_mutex() guarantees update serialization of logical threads.
3. get/put_cpu_bitmaps() prevents the concurrency of CPU-hotplug and
late microcode update.
Note that the printk calls in apply_microcode() and svm_host_osvm_init()
(for AMD only) are still processed sequentially.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
This patch ports microcode improvement patches from the Linux kernel.
Before you read any further: the early loading method is still the
preferred one and you should always use it. The following patch improves
the late loading mechanism for long running jobs and cloud use cases.
Gather all cores and serialize the microcode update on them by doing it
one-by-one to make the late update process as reliable as possible and
avoid potential issues caused by the microcode update.
x86/microcode: reduce memory allocation and copy when creating a patch
To create a microcode patch from a vendor-specific update,
allocate_microcode_patch() copied everything from the update.
It is not efficient. Essentially, we just need to go through
ucodes in the blob, find the one with the newest revision and
install it into the microcode_patch. In the process, buffers
like mc_amd, equiv_cpu_table (on AMD side), and mc (on Intel
side) can be reused. microcode_patch is now allocated only after it is
certain that there is a matching ucode.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/microcode: unify ucode loading during system bootup and resuming
During system bootup and resuming, CPUs just load the cached ucode.
So one unified function microcode_update_one() is introduced. It
takes a boolean to indicate whether ->start_update should be called.
Since early_microcode_update_cpu() is only called on BSP (APs call
the unified function), start_update is always true and so remove
this parameter.
There is a functional change: ->start_update is called on the BSP and
->end_update_percpu is called during system resume. These were not
invoked by the previous microcode_resume_cpu().
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
x86/microcode: split out apply_microcode() from cpu_request_microcode()
During late microcode loading, apply_microcode() is invoked in
cpu_request_microcode(). To make late microcode update more reliable,
we want to put the apply_microcode() into stop_machine context. So
we split it out from cpu_request_microcode(). In general, for both
early loading on BSP and late loading, cpu_request_microcode() is
called first to get the matching microcode update contained by
the blob and then apply_microcode() is invoked explicitly on each
cpu in common code.
Given that all CPUs are supposed to have the same signature, parsing
microcode only needs to be done once. So cpu_request_microcode() is
also moved out of microcode_update_cpu().
In some cases (e.g. a broken bios), the system may have multiple
revisions of microcode update. So we would try to load a microcode
update as long as it covers current cpu. And if a cpu loads this patch
successfully, the patch would be stored into the patch cache.
Note that calling ->apply_microcode() itself doesn't require any
lock being held. But the parameter passed to it may be protected
by some locks. E.g. microcode_update_cpu() acquires microcode_mutex
to avoid microcode_cache being updated by others.
Signed-off-by: Chao Gao <chao.gao@intel.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Paul Durrant [Fri, 27 Sep 2019 12:07:42 +0000 (14:07 +0200)]
x86/iommu: fix PVH dom0 settings
PVH dom0 must operate with the iommu settings in 'strict' mode i.e. only the
domain's own pages will be mapped in the IOMMU. The check_hwdom_reqs() is
supposed to ensure this. Unfortunately the test for a PVH dom0 is made
using paging_mode_translate() and, when commit f89f5558 "remove late
(on-demand) construction of IOMMU page tables" moved the call of
check_hwdom_reqs() from iommu_hwdom_init() to iommu_domain_init(), that
test became ineffective (because iommu_domain_init() is called before
paging_enable()).
This patch replaces the test of paging_mode_translate() with a test of
hap_enabled(), and also verifies 'strict' mode is turned on in
arch_iommu_check_autotranslated_hwdom().
Reported-by: Roger Pau Monne <roger.pau@citrix.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Commit 6338c9ead9ff9ef6 ("debugtrace: add per-cpu buffer option") had
a rebase error when using per-cpu buffers: the global buffer address
would always be set to the one of the last per-cpu buffer allocated.
The result would be that when dumping the buffers the last cpu's buffer
is always shown as empty as those entries are printed in the global
buffer's dump already.
Fix that.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com>
vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is kind of brutal.
So instead of doing the reschedule dance just operate on the timer
directly. By protecting periodic timer modifications against concurrent
timer activation via a per-vcpu lock it is even no longer required to
bother the target vcpu at all for updating its timer.
Even with the additional lock there is not more serialization involved
compared to the current solution, as today's de-scheduling of the vcpu
requires taking the schedule lock, which has a much higher contention
probability than the new lock.
Rename the function to vcpu_set_periodic_timer() as this now reflects
the functionality.
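A sketch of the resulting function (the lock and field names follow the description and should be treated as illustrative):

    void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value)
    {
        /* Modify the timer under a per-vcpu lock instead of forcing the
         * vcpu through a reschedule. */
        spin_lock(&v->periodic_timer_lock);

        stop_timer(&v->periodic_timer);
        v->periodic_period = value;

        spin_unlock(&v->periodic_timer_lock);
    }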
The arinc653 scheduler's free_vdata() function is missing proper
locking: as it is modifying the scheduler's private vcpu list it needs
to take the scheduler lock during that operation.
sched: don't let XEN_RUNSTATE_UPDATE leak into vcpu_runstate_get()
vcpu_runstate_get() should never return a state entry time with
XEN_RUNSTATE_UPDATE set. To avoid this let update_runstate_area()
operate on a local runstate copy.
As it is required to first set the XEN_RUNSTATE_UPDATE indicator in
guest memory, then update all the runstate data, and then at last
clear the XEN_RUNSTATE_UPDATE again it is much less effort to have
a local copy of the runstate data instead of keeping only a copy of
state_entry_time.
This problem was introduced with commit 2529c850ea48f036 ("add update
indicator to vcpu_runstate_info").
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Acked-by: Julien Grall <julien.grall@arm.com>