Ian Jackson [Mon, 7 Oct 2019 16:59:15 +0000 (17:59 +0100)]
libxl/xl: Overhaul passthrough setting logic
LIBXL_PASSTHROUGH_UNKNOWN (aka "ENABLED" in an earlier uncommitted
version of this code) is doing double duty. We actually need all of
the following to be specifiable:
* default ("unspecified"): enable PT iff we have devices to
pass through specified in the initial config file.
* "enabled" (and fail if the platform doesn't support it).
* "disabled" (and reject future PT hotplug).
* "share_pt"/"sync_pt": enable PT and set a specific PT mode.
Defaulting and error checking should be done in libxl. So, we make
several changes here.
We introduce "enabled". (And we document "unspecified".)
We move all of the error checking and defaulting code from xl into
libxl. Now, libxl__domain_config_setdefault has all of the necessary
information to get this right. So we can do it all there. Choosing
the specific mode is arch-specific.
We can also arrange to have only one place each which calculates
(i) whether passthrough needs to be enabled because pt devices were
specified (ii) whether pt_share can be used (for each arch).
xl now only has to parse the enum in the same way as it parses all
other enums.
This change fixes a regression from earlier 4.13-pre: until recent
changes, passthrough was only enabled by default if passthrough
devices were specified. We restore this behaviour.
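As an illustration of the defaulting flow described above, here is a minimal C sketch. The enum values and the helper name are hypothetical stand-ins, not the actual libxl symbols; the real logic lives in libxl__domain_config_setdefault plus an arch-specific hook.

    #include <stdbool.h>

    /* Illustrative only: values mirror the xl.cfg options described above. */
    typedef enum {
        PT_UNSPECIFIED,   /* default: decide based on whether PT devices exist */
        PT_DISABLED,
        PT_ENABLED,       /* arch code picks a concrete mode below */
        PT_SHARE_PT,
        PT_SYNC_PT,
    } passthrough_mode;

    static int passthrough_setdefault(passthrough_mode *pt, bool have_pt_devices,
                                      bool platform_supports_pt, bool hap)
    {
        if (*pt == PT_UNSPECIFIED)
            *pt = have_pt_devices ? PT_ENABLED : PT_DISABLED;

        if (*pt == PT_DISABLED)
            return 0;                  /* future PT hotplug will be rejected */

        if (!platform_supports_pt)
            return -1;                 /* "enabled" but the platform can't do it */

        if (*pt == PT_ENABLED)         /* arch-specific choice of concrete mode */
            *pt = hap ? PT_SHARE_PT : PT_SYNC_PT;

        return 0;
    }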
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> CC: Stefano Stabellini <sstabellini@kernel.org> CC: Julien Grall <julien@xen.org> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com> CC: Andrew Cooper <Andrew.Cooper3@citrix.com> CC: Paul Durrant <pdurrant@gmail.com> CC: Jan Beulich <jbeulich@suse.com>
---
v3: Drop paragraph about masking another osstest regression,
as that's now fixed.
Drop redundant "ERROR:" in two log messages.
Add a comment about the way "enabled" gets changed to a specific value.
Split passthrough mode defaulting into arch specific functions.
On ARM, always choose (and insist on) share_pt.
Reject share_pt for non-HAP guests.
Reject passthrough for PVH guests.
Actually document "unspecified" option in xl.cfg(5)
Rename "unknown" to "unspecified"
Ian Jackson [Fri, 11 Oct 2019 16:16:44 +0000 (17:16 +0100)]
libxl: Move domain_create_info_setdefault earlier
We need this before we start to figure out the passthrough mode.
I have checked that nothing in libxl__domain_create_info_setdefault
nor the two implementations of ..._arch_... accesses anything else,
other than (i) the domain type (which this function is responsible for
setting and nothing before it looks at) (ii) c_info->ssidref (which is
defaulted by flask code near the top of
libxl__domain_config_setdefault and not accessed afterwards).
So no functional change.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v3: New patch in this version of the series.
Ian Jackson [Fri, 4 Oct 2019 14:36:59 +0000 (15:36 +0100)]
libxl: Remove/deprecate libxl_get_required_*_memory from the API
These are now redundant because shadow_memkb and iommu_memkb are now
defaulted automatically by libxl_domain_need_memory and
libxl_domain_create etc. Callers should not now call these; instead,
they should just let libxl take care of it.
libxl_get_required_iommu_memory was introduced in f89f555827a6
"remove late (on-demand) construction of IOMMU page tables"
We can freely remove it because it was never in any release.
libxl_get_required_shadow_memory has been in libxl approximately
forever. It should probably not have survived the creation of
libxl_domain_create, but it seems the API awkwardnesses we see in
recent commits prevented this. So we have to keep it. It remains
functional but we can deprecate it. Hopefully we can get rid of it
completely before we find the need to change the calculation to use
additional information which its arguments do not currently supply.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Ian Jackson [Fri, 4 Oct 2019 10:45:59 +0000 (11:45 +0100)]
libxl: Move shadow_memkb and iommu_memkb defaulting into libxl
Defaulting is supposed to be done by libxl. So these calculations
should be here in libxl. libxl__domain_config_setdefault has all the
necessary information including the values of max_memkb and max_vcpus.
The overall functional effect depends on the caller:
For xl, no change. The code moves from xl to libxl.
For callers who set one or both shadow_memkb and iommu_memkb (whether
from libxl_get_required_shadow_memory or otherwise) before calling
libxl_domain_need_memory (any version): the new code will leave their
setting(s) unchanged.
For callers who do not call libxl_domain_need_memory at all, and who
fail to set one of these memory values: now they are both properly
set, and the shadow and iommu memory are properly accounted for as
intended.
For callers which call libxl_domain_need_memory and request the
current API (4.13) or which track libxl, the default values are also
now right and everything works as intended.
For callers which call libxl_domain_need_memory, and request an old
pre-4.13 libxl API, and which leave one of these memkb settings unset,
we take special measures to preserve the old behaviour.
This means that they don't get the additional iommu memory and are at
risk of the domain running out of memory as a result of f89f555827a6
"remove late (on-demand) construction of IOMMU page tables". But this
is no worse than the state just after f89f555827a6, which already
broke such callers in that way. This is perhaps justifiable because
of the API stability warning next to libxl_domain_need_memory.
An alternative would be to drop the special-casing of these callers.
That would cause a discrepancy between libxl_domain_need_memory and
libxl_domain_create: the former would not include the iommu memory and
the latter would. That seems worse, but it's debatable.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Replace _Bool with bool
Fix logic sense in ok_to_default_memkb_in_create
Ian Jackson [Thu, 3 Oct 2019 15:58:32 +0000 (16:58 +0100)]
libxl: libxl_domain_need_memory: Make it take a domain_config
This should calculate the extra memory needed for shadow and iommu,
the defaults for which depend on values in c_info. So we need this to
have the complete domain config available.
And the defaults should actually be updated and stored. So make it
non-const.
We provide the usual kind of compatibility function for callers
expecting 4.12 and earlier. This function becomes responsible for the
clone-and-modify of the b_info.
No overall functional change for external libxl callers which use the
API version system to request a particular API version.
Other external libxl callers will need to update their calling code,
and will then find that the new version of this function fills in most
of the defaults in d_config. Because libxl__domain_config_setdefault
doesn't quite do all of the defaults, that's only partial. For
present purposes that doesn't matter because none of the missing
settings are used by the memory calculations. It does mean we need to
document in the API spec that the defaulting is only partial.
This lack of functional change is despite the fact that
numa_place_domain now no longer calls
libxl__domain_build_info_setdefault (via libxl_domain_need_memory).
That is OK because it's idempotent and numa_place_domain's one call
site is libxl__build_pre which is called from libxl__domain_build
which is called from domcreate_bootloader_done, well after the
defaults are set by initiate_domain_create.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Drop now-erroneous GC_FREE as well as the corresponding GC_INIT.
Ian Jackson [Thu, 3 Oct 2019 16:31:15 +0000 (17:31 +0100)]
libxl: libxl__domain_config_setdefault: New function
Break out this into a new function. We are going to want to call it
from a new call site.
Unfortunately not all of the defaults can be moved into the new
function without changing the order in which things are done. That
does not seem wise at this stage of the release. The effect is that
additional calls to libxl__domain_config_setdefault (which are going
to be introduced) do not quite set everything. But they will do what
is needed. After Xen 4.13 is done, we should move those settings into
the right order.
No functional change.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Add missing error check
Ian Jackson [Fri, 4 Oct 2019 14:30:22 +0000 (15:30 +0100)]
libxl: Offer API versions 0x040700 and 0x040800
According to git log -G:
0x040700 was introduced in 304400459ef0 (aka 4.7.0-rc1~481)
"tools/libxl: rename remus device to checkpoint device"
0x040800 was introduced in 57f8b13c7240 (aka 4.8.0-rc1~437)
"libxl: memory size in kb requires 64 bit variable"
It is surprising that no-one noticed this.
Anyway, in the meantime, we should fix it. Backporting this is
probably a good idea: it won't change the behaviour for existing
callers but it will avoid errors for some older correct uses.
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
v2: Adjusted commit message slightly.
make_cpus_node() is using a static buffer to generate the FDT node name.
While mpidr_aff is a 64-bit integer, we only ever use the bits [23:0] as
only AFF{0, 1, 2} are supported for now.
To avoid any potential issues in the future, check that mpidr_aff has
only bits [23:0] set.
Take the opportunity to reduce the size of the buffer. Indeed, only 8
characters are needed to print a 32-bit hexadecimal number, so
4 ("cpu@") + 8 + 1 (for '\0') = 13 characters are sufficient.
Fixes: c81a791d34 (xen/arm: Set 'reg' of cpu node for dom0 to match MPIDR's affinity) Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Paul Durrant [Thu, 10 Oct 2019 15:45:15 +0000 (17:45 +0200)]
x86/mm: don't needlessly veto migration
Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
domain, migration may be needlessly vetoed due to the check of
is_iommu_enabled() in paging_log_dirty_enable().
There is actually no need to prevent logdirty from being enabled unless
devices are assigned to a domain.
NOTE: While in the neighbourhood, the bool_t parameter type in
paging_log_dirty_enable() is replaced with a bool and the format
of the comment in assign_device() is fixed.
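A hedged sketch of the relaxed condition; the structure and field names are illustrative, not Xen's actual ones:

    #include <stdbool.h>

    struct dom_state {
        bool iommu_enabled;      /* IOMMU mappings are set up for this domain */
        bool has_assigned_devs;  /* devices actually passed through */
    };

    static int log_dirty_enable_check(const struct dom_state *d)
    {
        /* Old behaviour: refuse whenever iommu_enabled was true.
         * New behaviour: refuse only if devices are really assigned, since
         * only then can untracked DMA writes defeat dirty logging. */
        if (d->has_assigned_devs)
            return -1;

        return 0;
    }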
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: George Dunlap <george.dunlap@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Igor Druzhinin [Thu, 10 Oct 2019 14:50:50 +0000 (16:50 +0200)]
x86/efi: properly handle 0 in pixel reserved bitmask
In some graphics modes the firmware is allowed to return 0 in the pixel
reserved bitmask, which doesn't go against UEFI Spec 2.8 (12.9 Graphics
Output Protocol).
Without this change non-TrueColor modes won't work, which will cause
GOP init to fail - observed while trying to boot EFI Xen with Cirrus VGA.
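A sketch, under assumed struct and field names, of deriving bits-per-pixel from the GOP bitmask while treating a zero reserved mask as valid:

    #include <stdint.h>

    /* Illustrative stand-in for EFI_PIXEL_BITMASK. */
    struct pixel_bitmask {
        uint32_t red, green, blue, reserved;
    };

    static int bitmask_to_bpp(const struct pixel_bitmask *m)
    {
        uint32_t all = m->red | m->green | m->blue | m->reserved;
        int bpp = 0;

        if (!(m->red | m->green | m->blue))
            return -1;          /* colour channels really are required */

        while (all) {           /* reserved == 0 is fine: just fewer bits */
            bpp++;
            all >>= 1;
        }

        return bpp;
    }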
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Wed, 9 Oct 2019 18:21:14 +0000 (19:21 +0100)]
x86/hvm: Fix the use of "hap=0" following c/s c0902a9a143a
c/s c0902a9a143a refactored hvm_enable() a little, but dropped the logic which
cleared hap_supported in the case that the user had asked for it off.
This results in Xen booting up, claiming:
(XEN) HVM: Hardware Assisted Paging (HAP) detected but disabled
but with HAP advertised via sysctl, and XEN_DOMCTL_CDF_hap being accepted in
domain_create().
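A minimal sketch of the intended behaviour, with illustrative names rather than the actual hvm_enable() internals:

    #include <stdbool.h>
    #include <stdio.h>

    static void apply_hap_cmdline(bool *hap_supported, bool opt_hap_enabled)
    {
        if (*hap_supported && !opt_hap_enabled) {
            printf("HVM: Hardware Assisted Paging (HAP) detected but disabled\n");
            *hap_supported = false;   /* the step dropped by c/s c0902a9a143a */
        }
    }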
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org> Acked-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Wei Liu <wl@xen.org> Release-acked-by: Juergen Gross <jgross@suse.com>
Roger Pau Monné [Thu, 10 Oct 2019 08:59:27 +0000 (10:59 +0200)]
pci: clear {host/guest}_maskall field on assign
The current implementation of host_maskall makes it sticky across
assign and deassign calls, which means that once a guest forces Xen to
set host_maskall the maskall bit is not going to be cleared until a
call to PHYSDEVOP_prepare_msix is performed. Such call however
shouldn't be part of the normal flow when doing PCI passthrough, and
hence the flag needs to be cleared when assigning in order to prevent
host_maskall being carried over from previous assignments.
Note that the entry maskbit is reset when the MSI-X capability is
initialized, and the guest_maskall field is also cleared so that the
hardware value matches Xen's internal state (hardware maskall =
host_maskall | guest_maskall).
Also note that doing the reset of host_maskall there would allow the
guest to reset that field by enabling and disabling MSI-X, which is not
intended.
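A sketch of the reset-on-assign rule, using an illustrative structure rather than Xen's actual MSI-X state:

    #include <stdbool.h>

    struct msix_state {
        bool host_maskall;
        bool guest_maskall;
    };

    /* On (re)assignment clear both soft states and write the combined value
     * back, so a previous owner's host_maskall is not carried over.
     * Hardware maskall = host_maskall | guest_maskall. */
    static void msix_reset_on_assign(struct msix_state *msix,
                                     void (*write_hw_maskall)(bool))
    {
        msix->host_maskall = false;
        msix->guest_maskall = false;
        write_hw_maskall(msix->host_maskall || msix->guest_maskall);
    }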
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Tested-by: Chao Gao <chao.gao@intel.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Igor Druzhinin [Thu, 10 Oct 2019 08:58:45 +0000 (10:58 +0200)]
efi/boot: make sure graphics mode is set while booting through MB2
If a bootloader is using a native driver instead of EFI GOP, it might
reset the graphics mode to be different from what has been originally set
by firmware. When booting through MB2, Xen either needs to parse the video
settings passed by MB2 and use them instead of what GOP reports, or
reset the mode to synchronise it with firmware - prefer the latter.
Observed while booting Xen using MB2 with EFI GRUB2 compiled with
all possible video drivers where native drivers take priority over firmware.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Thu, 10 Oct 2019 07:51:46 +0000 (09:51 +0200)]
AMD/IOMMU: pre-fill all DTEs right after table allocation
Make sure we don't leave any DTEs through which unexpected requests
would be passed untranslated. Set V and IV right away (with
all other fields left as zero), relying on the V and/or IV bits
getting cleared only by amd_iommu_set_root_page_table() and
amd_iommu_set_intremap_table() under special pass-through circumstances.
Switch back to initial settings in amd_iommu_disable_domain_device().
Take the liberty to also make the latter function static, constifying
its first parameter at the same time.
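A sketch of the pre-fill loop; the DTE layout and the bit positions of V and IV are simplified assumptions, not the real AMD-Vi format:

    #include <stdint.h>
    #include <string.h>

    struct dte { uint64_t raw[4]; };        /* 256-bit entry, simplified */

    #define DTE_V  (1ULL << 0)              /* DMA remapping valid */
    #define DTE_IV (1ULL << 0)              /* intremap valid (in raw[2] here) */

    static void prefill_dtes(struct dte *table, unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            memset(&table[i], 0, sizeof(table[i]));
            table[i].raw[0] = DTE_V;        /* untranslated DMA gets blocked */
            table[i].raw[2] = DTE_IV;       /* untranslated IRQs get blocked */
        }
    }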
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Thu, 10 Oct 2019 07:51:12 +0000 (09:51 +0200)]
AMD/IOMMU: allow callers to request allocate_buffer() to skip its memset()
The command ring buffer doesn't need clearing up front in any event.
Subsequently we'll also want to avoid clearing the device tables.
While touching the function signatures, replace undue use of fixed-width
types at the same time, and extend this to deallocate_buffer() as well.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Thu, 10 Oct 2019 07:50:00 +0000 (09:50 +0200)]
AMD/IOMMU: allocate one device table per PCI segment
Having a single device table for all segments can't possibly be right.
(Even worse, the symbol wasn't static despite being used in just one
source file.) Attach the device tables to their respective IVRS mapping
ones.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
When reserved-memory regions are present in the host device tree, dom0
is started with multiple memory nodes. Each memory node should have a
unique name, but today they are all called "memory" leading to Linux
printing the following warning at boot:
OF: Duplicate name in base, renamed to "memory#1"
This patch fixes the problem by appending a "@<unit-address>" to the
name, as per the Device Tree specification, where <unit-address> matches
the base address of the first region.
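A sketch of the naming rule, with an invented helper name:

    #include <inttypes.h>
    #include <stdio.h>

    /* Each memory node becomes "memory@<unit-address>", with <unit-address>
     * taken from the base address of the node's first region, so the names
     * stay unique. */
    static int make_memory_node_name(char *buf, size_t len, uint64_t first_base)
    {
        int n = snprintf(buf, len, "memory@%"PRIx64, first_base);

        return (n < 0 || (size_t)n >= len) ? -1 : 0;
    }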
Fixes: 248faa637d2 (xen/arm: add reserved-memory regions to the dom0 memory node) Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> Acked-by: Julien Grall <julien.grall@arm.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Lars Kurth [Thu, 3 Oct 2019 15:47:05 +0000 (08:47 -0700)]
docs: update all URLs in man pages
Specifically
* xen.org to xenproject.org
* http to https
* Replaced pages where page has moved
(including on xen pages as well as external pages)
* Removed some URLs (e.g. downloads for Linux PV drivers)
Tested-by: Lars Kurth <lars.kurth@citrix.com> Signed-off-by: Lars Kurth <lars.kurth@citrix.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wl@xen.org> Release-acked-by: Juergen Gross <jgross@suse.com>
Juergen Gross [Mon, 7 Oct 2019 06:35:19 +0000 (08:35 +0200)]
xen/sched: let credit scheduler control its timer all alone
The credit scheduler is the only scheduler with tick_suspend and
tick_resume callbacks. Today those callbacks are invoked without being
guarded by the scheduler lock, which is critical when the cpu on which
those callbacks are active is at the same time being moved to or from a
cpupool. Crashes are possible due to that race.
The callbacks are used when the cpu is going to or coming from idle in
order to allow higher C-states.
The credit scheduler knows when it is going to schedule an idle
scheduling unit or another one after idle, so it can easily stop or
resume the timer itself removing the need to do so via the callback.
As this timer handling is done in the main scheduling function the
scheduler lock is still held, so the race with cpupool operations can
no longer occur. Note that calling the callbacks from schedule_cpu_rm()
and schedule_cpu_add() is no longer needed, as the transitions to and
from idle in the cpupool with credit active will automatically occur
and do the right thing.
With the last user of the callbacks gone those can be removed.
Julien Grall [Fri, 4 Oct 2019 16:32:49 +0000 (17:32 +0100)]
xen/xsm: flask: Prevent NULL deference in flask_assign_{, dt}device()
flask_assign_{, dt}device() may be used to check whether you can test if
a device is assigned. In this case, the domain will be NULL.
However, flask_iommu_resource_use_perm() will be called and may end up
dereferencing a NULL pointer. This can be prevented by moving the call
after we check the validity of the domain pointer.
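A sketch of the reordering; the helpers below are stubs standing in for the real flask code, not its actual API:

    struct domain { int id; };

    static int domain_sid_check(struct domain *d) { (void)d; return 0; }
    static int iommu_resource_use_perm(struct domain *d) { (void)d; return 0; }

    static int assign_device_check(struct domain *d)
    {
        if ( !d )
            return 0;   /* caller only tests whether assignment is possible */

        /* Only now is it safe to consult per-domain state. */
        if ( domain_sid_check(d) )
            return -1;

        return iommu_resource_use_perm(d);
    }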
Coverity-ID: 1486741 Fixes: 71e617a6b8 ('use is_iommu_enabled() where appropriate...') Signed-off-by: Julien Grall <julien.grall@arm.com> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov> Reviewed-by: Paul Durrant <paul@xen.org> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Tue, 1 Oct 2019 16:27:49 +0000 (17:27 +0100)]
x86/Kconfig: Invert the defaults for CONFIG_{PVH_GUEST,PV_SHIM}
This is a minor UI change, but users who have elected to enable
XEN_GUEST (which still defaults to no) will definitely need one of these
options, and will typically want both.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
There are legitimate circumstances where array hardening is not wanted or
needed. Allow it to be turned off.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Andrew Cooper [Thu, 3 Oct 2019 14:04:03 +0000 (15:04 +0100)]
x86/spec-ctrl: Annotate remaining model names
The names in retpoline_safe() are copied from should_use_eager_fpu(). The
names in mds_calculations() come partly from Linux's intel-family.h, and
partly from conversations with Intel.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Jan Beulich <jbeulich@suse.com> Release-acked-by: Juergen Gross <jgross@suse.com>
We don't have a clear way to know how many virtual SPIs we need for the
dom0-less domains. Introduce a new option under xen,domain to specify
the number of SPIs to allocate for a domain.
The property is optional. When absent, we'll use the physical number of
GIC lines for dom0-less domains, or GUEST_VPL011_SPI+1 if vpl011 is
requested, whichever is greater.
Remove the old setting of nr_spis based on the presence of the vpl011.
The implication of this change is that without nr_spis dom0less domains
get the same number of SPIs allocated as dom0, regardless of how many
physical devices they have assigned, and regardless of whether they have
a virtual pl011 (which also needs an emulated SPI). This is done because
the SPIs allocation needs to be done before parsing any passthrough
information, so we have to account for any potential physical SPI
assigned to the domain.
When nr_spis is present, the domain gets exactly nr_spis allocated SPIs.
If the number is too low, it might not be enough for the devices
assigned to it. If the number is less than GUEST_VPL011_SPI, the
virtual pl011 won't work.
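A sketch of the defaulting rule described above; GUEST_VPL011_SPI here is a placeholder value, the real constant comes from the guest ABI headers:

    #include <stdbool.h>
    #include <stdint.h>

    #define GUEST_VPL011_SPI 32u   /* placeholder value */

    static uint32_t default_nr_spis(bool has_nr_spis_prop, uint32_t prop_val,
                                    uint32_t phys_gic_lines, bool vpl011)
    {
        if ( has_nr_spis_prop )
            return prop_val;       /* used as-is, even if it is too small */

        if ( vpl011 && GUEST_VPL011_SPI + 1 > phys_gic_lines )
            return GUEST_VPL011_SPI + 1;

        return phys_gic_lines;
    }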
Detect "multiboot,device-tree" compatible nodes. Add them to the bootmod
array as BOOTMOD_GUEST_DTB. In kernel_probe, find the right
BOOTMOD_GUEST_DTB and store a pointer to it in dtb_bootmodule.
Scan the user provided dtb fragment at boot. For each device node, map
memory to guests, route interrupts, and set up the iommu.
The memory region to remap is specified by the "xen,reg" property.
The iommu is setup by passing the node of the device to assign on the
host device tree. The path is specified in the device tree fragment as
the "xen,path" string property.
The interrupts are remapped based on the information from the
corresponding node on the host device tree. Call
handle_device_interrupts to remap interrupts. Interrupts related device
tree properties are copied from the device tree fragment, same as all
the other properties.
Require both xen,reg and xen,path to be present, unless
xen,force-assign-without-iommu is also set. In that case, tolerate a
missing xen,path, also tolerate iommu setup failure for the passthrough
device.
Also add the new flag XEN_DOMCTL_CDF_iommu so that a dom0less domU
can use the IOMMU if a partial dtb is specified.
Read the dtb fragment corresponding to a passthrough device from memory
at the location referred to by the "multiboot,device-tree" compatible
node.
Add a new field named dtb_bootmodule to struct kernel_info to keep track
of the dtb fragment location.
Copy the fragment to the guest dtb (only /aliases and /passthrough).
Set kinfo->phandle_gic based on the phandle of the special "/gic"
node in the device tree fragment. "/gic" is a dummy node in the dtb
fragment that represents the gic interrupt controller. Other properties
in the dtb fragment might refer to it (for instance interrupt-parent of
a device node). We reuse the phandle of "/gic" from the dtb fragment as
the phandle of the full GIC node that will be created for the guest
device tree. That way, when we copy properties from the device tree
fragment to the domU device tree the links remain unbroken.
scan_passthrough_prop is introduced here and not used in this patch but
it will be used by later patches.
Some of the code below is taken from tools/libxl/libxl_arm.c. Note that
it is OK to take LGPL 2.1 code and include it in a GPLv2 code base.
The result is GPLv2 code.
Changes in v5:
- code style
- in-code comment
- remove depth parameter from scan_pfdt_node
- for instead of loop in domain_handle_dtb_bootmodule
- move "gic" check to domain_handle_dtb_bootmodule
- add check_partial_fdt
- use DT_ROOT_NODE_ADDR/SIZE_CELLS_DEFAULT
- add scan_passthrough_prop parameter, set it to false for "/aliases"
Changes in v4:
- use recursion in the implementation
- rename handle_properties to handle_prop_pfdt
- rename scan_pt_node to scan_pfdt_node
- pass kinfo to handle_properties
- use uint32_t instead of u32
- rename r to res
- add "passthrough" and "aliases" check
- add a name == NULL check
- code style
- move DTB fragment scanning earlier, before DomU GIC node creation
- set guest_phandle_gic based on "/gic"
- in-code comment
Changes in v3:
- switch to using device_tree_for_each_node for the copy
Changes in v2:
- add a note about the code coming from libxl in the commit message
- copy /aliases
- code style
Instead of always hard-coding the GIC phandle (GUEST_PHANDLE_GIC), store
it in a variable under kinfo. This way it can be dynamically chosen per
domain. Remove the fdt pointer argument to the make_*_domU_node
functions and pass a struct kernel_info * instead. The fdt pointer can
be accessed from kinfo->fdt. Remove the struct domain *d parameter to
the make_*_domU_node functions because it becomes unused.
Initialize phandle_gic to GUEST_PHANDLE_GIC at the beginning of
prepare_dtb_domU for DomUs. Later patches will change the value of
phandle_gic depending on user provided information.
For Dom0, initialize phandle_gic to dt_interrupt_controller->phandle
(current value) at the beginning of prepare_dtb.
xen/arm: boot with device trees with "mmu-masters" and "iommus"
Some Device Trees may expose both legacy SMMU and generic IOMMU bindings
together. However, the SMMU driver in Xen only supports the legacy
SMMU bindings, leading to fatal initialization errors at boot time.
This patch fixes the booting problem by adding a check to
iommu_add_dt_device: if the Xen driver doesn't support the new generic
bindings, and the device is behind an IOMMU, do not return an error. The
following iommu_assign_dt_device should succeed.
This check will become superfluous, hence removable, once the Xen SMMU
driver gets support for the generic IOMMU bindings.
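A sketch of the tolerance check, with illustrative parameters instead of the real iommu_add_dt_device() signature:

    #include <stdbool.h>

    static int add_dt_device(bool driver_supports_generic_binding,
                             bool device_has_iommus_prop)
    {
        /* Legacy-only SMMU driver + generic "iommus" binding: not an error,
         * the later assignment path still does the real work. */
        if ( !driver_supports_generic_binding && device_has_iommus_prop )
            return 0;

        /* ... normal handling of the generic binding would go here ... */
        return 0;
    }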
libxl: don't try to manipulate json config for stubdomain
A stubdomain does not have its own config file - its configuration is
derived from the target domain's. Do not try to manipulate it when
attaching a PCI device.
This bug prevented starting an HVM domain with a stubdomain and a PCI
passthrough device attached.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: attach PCI device to qemu only after setting pciback/pcifront
When qemu is running in a stubdomain, handling the "pci-ins" command will
fail if pcifront is not initialized yet. Fix this by sending such a command
only after confirming that pciback/front is running.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: do not attach xen-pciback to HVM domain, if stubdomain is in use
HVM domains use the IOMMU and device model assistance for communicating
with PCI devices, so xen-pcifront/pciback isn't directly needed by an HVM
domain. But pciback also serves a second function - it resets the device
when it is deassigned from the guest, and for this reason pciback needs
to be used with HVM domains too.
When an HVM domain has its device model in a stubdomain, attaching
xen-pciback to the target domain itself may prevent attaching xen-pciback
to the (PV) stubdomain, effectively breaking PCI passthrough.
Fix this by attaching pciback to only one domain: if a PV stubdomain is in
use, let it be the stubdomain (this commit prevents attaching the device to
the target HVM domain in that case); otherwise, attach it to the target
domain.
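A sketch of the resulting rule (illustrative, not libxl's API): pciback is attached to exactly one domain.

    #include <stdint.h>

    static uint32_t pciback_target_domid(uint32_t target_domid,
                                         uint32_t stubdom_domid /* 0 if none */)
    {
        /* With a PV stubdomain, the stubdomain gets the pciback connection;
         * otherwise the target HVM domain does. */
        return stubdom_domid ? stubdom_domid : target_domid;
    }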
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
libxl: fix cold plugged PCI device with stubdomain
When libxl__device_pci_add() is called, the stubdomain is already running,
even while the target domain is still being constructed. Previously, do_pci_add()
was called with 'starting' hardcoded to false, but now do_pci_add() shares
'starting' flag in pci_add_state for both target domain and stubdomain.
Fix this by resetting (local) 'starting' to false in pci_add_dm_done()
(previously part of do_pci_add()) when handling stubdomain, regardless
of pas->starting value.
Fixes: 11db56f9a6 (libxl_pci: Use libxl__ao_device with libxl__device_pci_add) Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Fri, 4 Oct 2019 15:57:03 +0000 (17:57 +0200)]
x86emul: adjust MOVSXD source operand handling
XED commit 1b2fd94425 ("Update MOVSXD to modern behavior") points out
that as of SDM rev 064 MOVSXD is specified to read only 16 bits from
memory (or register) when used without REX.W and with operand size
override. Since the upper 16 bits of the value read won't be used
anyway in this case, make the emulation uniformly follow this more
compatible behavior when not emulating an AMD-like CPU, at the risk
of missing an exception when emulating on/for older hardware (the
boundary at SandyBridge noted in said commit looks questionable - I've
observed the "new" behavior also on Westmere, and a discussion there
lead to Mark finding that even Merom has this behavior already).
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Anthony PERARD [Mon, 30 Sep 2019 16:39:40 +0000 (17:39 +0100)]
libxl_pci: Fix guest shutdown with PCI PT attached
Before the problematic commit, libxl used to ignore errors when
destroying (force == true) a passthrough device. If the DM failed to
detach the pci device within the allowed time, the timeout error raised
skipped part of pci_remove_*, but was also propagated up to the
caller of libxl__device_pci_destroy_all, libxl__destroy_domid, and
thus the destruction of the domain failed.
When a *pci_destroy* function is called (so we have force=true), errors
should mostly be ignored. If the DM didn't confirm that the device
is removed, we will print a warning and keep going if force=true.
The patch reorders the functions so that pci_remove_timeout() calls
pci_remove_detached(), like it's done when DM calls are successful.
We also clean the QMP states and associated timeouts earlier, as soon
as they are not needed anymore.
Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Fixes: fae4880c45fe015e567afa223f78bf17a6d98e1b Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Tested-by: Sander Eikelenboom <linux@eikelenboom.it> Release-acked-by: Juergen Gross <jgross@suse.com>
Anthony PERARD [Mon, 30 Sep 2019 15:35:52 +0000 (16:35 +0100)]
libxl_pci: Don't ignore PCI PT error at guest creation
Fixes: 11db56f9a6291 Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Juergen Gross [Wed, 2 Oct 2019 07:27:44 +0000 (09:27 +0200)]
xen/sched: add scheduling granularity enum
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu", this can be modified by the new boot parameter (x86 only)
"sched-gran".
According to the selected granularity sched_granularity is set after
all cpus are online.
A test is added that all sched resources hold the same number of
cpus; if that is not the case, fall back to core- or cpu-scheduling.
Juergen Gross [Wed, 2 Oct 2019 07:27:43 +0000 (09:27 +0200)]
xen/sched: disable scheduling when entering ACPI deep sleep states
When entering deep sleep states all domains are paused resulting in
all cpus only running idle vcpus. This enables us to stop scheduling
completely in order to avoid synchronization problems with core
scheduling when individual cpus are offlined.
Disabling the scheduler is done by replacing the softirq handler
with a dummy scheduling routine only enabling tasklets to run.
Juergen Gross [Wed, 2 Oct 2019 07:27:42 +0000 (09:27 +0200)]
xen/sched: support core scheduling for moving cpus to/from cpupools
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Juergen Gross [Wed, 2 Oct 2019 07:27:41 +0000 (09:27 +0200)]
xen/sched: support differing granularity in schedule_cpu_[add/rm]()
With core scheduling active schedule_cpu_[add/rm]() has to cope with
different scheduling granularity: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new pdata and vdata and loop over
those where appropriate.
Additionally the scheduling resource(s) must either be merged or
split.
Juergen Gross [Wed, 2 Oct 2019 07:27:39 +0000 (09:27 +0200)]
xen/sched: protect scheduling resource via rcu
In order to be able to move cpus to cpupools with core scheduling
active it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires to modify
the cpu <-> scheduling resource relation. In order to be able to free
unused resources protect struct sched_resource via RCU. This ensures
there are no users left when freeing such a resource.
On- and offlining cpus with core scheduling is rather complicated as
the cpus are taken on- or offline one by one, whereas scheduling would
rather handle them per core.
As the future plan is to be able to select scheduling granularity per
cpupool prepare that by storing the granularity in struct
sched_resource (we need it there for free cpus which are not
associated to any cpupool). Free cpus will always use granularity 1.
Store the selected granularity option (cpu, core or socket) in the
cpupool, as we will need it to select the appropriate cpu mask when
populating the cpupool with cpus.
This will make on- and offlining of cpus much easier and avoids
writing code which would need to be thrown away later.
Move the granularity related variables to cpupool.c as they are now
used from there only.
Juergen Gross [Wed, 2 Oct 2019 14:43:30 +0000 (16:43 +0200)]
xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in order to avoid
a deadlock: the vcpu to be put to sleep must be forced through a
context switch without doing so for the calling vcpu. For this
purpose add a vcpu flag handled in sched_slave() and in
sched_wait_rendezvous_in() allowing a vcpu of the currently running
unit to switch state at a higher priority than a normal schedule
event.
Use the same mechanism when waking up a vcpu of a currently active
unit.
While at it make vcpu_sleep_nosync_locked() static as it is used in
schedule.c only.
Juergen Gross [Wed, 2 Oct 2019 07:27:32 +0000 (09:27 +0200)]
xen/sched: add fall back to idle vcpu when scheduling unit
When scheduling an unit with multiple vcpus there is no guarantee all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to
idle vcpu of the current cpu in that case. This requires storing the
correct schedule_unit pointer in the idle vcpu as long as it is used as a
fallback vcpu.
In order to modify the runstates of the correct vcpus when switching
schedule units merge sched_unit_runstate_change() into
sched_switch_units() and loop over the affected physical cpus instead
of the unit's vcpus. This in turn requires an access function to the
current variable of other cpus.
Today context_saved() is called in case previous and next vcpus differ
when doing a context switch. With an idle vcpu being capable of being a
substitute for an offline vcpu this is problematic when switching to
an idle scheduling unit. An idle previous vcpu leaves us in doubt which
schedule unit was active previously, so save the previous unit pointer
in the per-schedule resource area. If it is NULL the unit has not
changed and we don't have to set the previous unit to be not running.
When running an idle vcpu in a non-idle scheduling unit use a specific
guest idle loop not performing any non-softirq tasklets and
livepatching in order to avoid populating the cpu caches with memory
used by other domains (as far as possible). Softirqs are considered to
be safe.
In order to avoid livepatching when going to guest idle another
variant of reset_stack_and_jump() not calling check_for_livepatch_work
is needed.
Juergen Gross [Wed, 2 Oct 2019 07:27:31 +0000 (09:27 +0200)]
xen/sched: add a percpu resource index
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Juergen Gross [Wed, 2 Oct 2019 07:27:30 +0000 (09:27 +0200)]
xen/sched: support allocating multiple vcpus into one sched unit
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the domain's vcpu linked
list, which isn't the case.
Juergen Gross [Wed, 2 Oct 2019 07:27:29 +0000 (09:27 +0200)]
xen/sched: modify cpupool_domain_cpumask() to be an unit mask
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus rename cpupool_domain_cpumask() to
cpupool_domain_master_cpumask() and let it return a cpumask with only
one bit set per scheduling resource.
Juergen Gross [Wed, 2 Oct 2019 07:27:28 +0000 (09:27 +0200)]
xen/sched: add support for multiple vcpus per sched unit where missing
Support for multiple vcpus per sched unit is missing in several places.
Add that missing support (with the exception of initial
allocation) and missing helpers for that.
Juergen Gross [Wed, 2 Oct 2019 07:27:27 +0000 (09:27 +0200)]
xen/sched: introduce unit_runnable_state()
Today the vcpu runstate of a new scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being scheduled. So the vcpu's
new runstate has to be selected at the same time as the runnability of
the related schedule unit is probed.
For this purpose introduce a new helper unit_runnable_state() which
will save the new runstate of all tested vcpus in a new field of the
vcpu struct.
Juergen Gross [Wed, 2 Oct 2019 07:27:26 +0000 (09:27 +0200)]
xen/sched: add code to sync scheduling of all vcpus of a sched unit
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require scheduling the idle unit, it is required to set the
tasklet_work_scheduled parameter of do_schedule() to true if any cpu
covered by the current schedule() call has any pending tasklet work.
For joining other vcpus of the schedule unit we need to add a new
softirq SCHED_SLAVE_SOFTIRQ in order to have a way to initiate a
context switch without calling the generic schedule() function
selecting the vcpu to switch to, as we already know which vcpu we
want to run. This has the additional advantage of not losing any other
concurrent SCHEDULE_SOFTIRQ events.
Julien Grall [Tue, 26 Mar 2019 21:31:16 +0000 (21:31 +0000)]
xen/arm: traps: Mark check_stack_alignment_constraints as unused
Clang will throw an error if a function is unused unless you tell it
to ignore it. This can be done using __maybe_unused.
While modifying the declaration, update it to match the prototype of similar
functions (see build_assertions). This helps to understand that the sole
purpose of the function is to hold BUILD_BUG_ON().
Julien Grall [Wed, 27 Mar 2019 18:23:11 +0000 (18:23 +0000)]
xen/arm: mm: Mark check_memory_layout_alignment_constraints as unused
Clang will throw an error if a function is unused unless you tell it
to ignore it. This can be done using __maybe_unused.
While modifying the declaration, update it to match the prototype of similar
functions (see build_assertions). This helps to understand that the sole
purpose of the function is to hold BUILD_BUG_ON().
Julien Grall [Tue, 26 Mar 2019 21:26:57 +0000 (21:26 +0000)]
xen/arm: cpufeature: Match register size with value size in cpus_have_const_cap
Clang is pickier than GCC for the register size in asm statement. It
expects the register size to match the value size.
The asm statement expects a 32-bit (resp. 64-bit) value on Arm32
(resp. Arm64) whereas the value is a boolean (which Clang considers to be
32-bit).
It would be possible to impose a 32-bit register for both architectures,
but this requires the code to use __OP32. However, it does not really
improve the generated assembly. Instead, switch the variable to
use register_t.
Julien Grall [Tue, 26 Mar 2019 20:53:09 +0000 (20:53 +0000)]
xen/arm: cpuerrata: Match register size with value size in check_workaround_*
Clang is pickier than GCC for the register size in asm statement. It
expects the register size to match the value size.
The asm statement expects a 32-bit (resp. 64-bit) value on Arm32
(resp. Arm64) whereas the value is a boolean (which Clang considers to be
32-bit).
It would be possible to impose a 32-bit register for both architectures,
but this requires the code to use __OP32. However, it does not really
improve the generated assembly. Instead, switch the variable
to use register_t.
Julien Grall [Tue, 26 Mar 2019 20:30:05 +0000 (20:30 +0000)]
xen/arm64: bitops: Match the register size with the value size in flsl
Clang is pickier than GCC for the register size in asm statement. It expects
the register size to match the value size.
The instruction clz is expecting the two operands to be the same size
(i.e. 32-bit or 64-bit). As the flsl function is dealing with a 64-bit
value, we need to make the destination variable 64-bit as well.
While at it, add a newline before the return statement.
Note that the return type of flsl is not updated because the result will
always be smaller than 64 and therefore fit in 32-bit.
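An Arm64-only sketch of the resulting pattern (not the exact Xen code):

    #include <stdint.h>

    static inline int flsl(unsigned long x)
    {
        uint64_t ret;       /* 64-bit destination, matching the 64-bit input */

        if ( !x )
            return 0;

        __asm__("clz %0, %1" : "=r" (ret) : "r" ((uint64_t)x));

        return 64 - (int)ret;   /* int is fine: the result is below 64 */
    }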
Julien Grall [Tue, 26 Mar 2019 21:52:20 +0000 (21:52 +0000)]
xen/arm: fix get_cpu_info() when built with clang
Clang understands the GCCism in use here, but still complains that sp is
uninitialised. In such cases, resort to the older versions of this code,
which directly read sp into the temporary variable.
Note that the GCCism is still kept by default because other compilers (e.g.
clang) may also define __GNUC__, so AFAIK there is no proper way to
detect GCC properly.
This means that in the event Xen is ported to a new compiler, the code
will need to be updated. But that is likely not going to be the only place
where Xen will need to be adapted...
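An Arm-only sketch of the pattern described above; STACK_SIZE and struct cpu_info are placeholder stand-ins:

    #define STACK_SIZE (1UL << 13)              /* placeholder value */

    struct cpu_info { unsigned long flags; };   /* placeholder layout */

    static inline struct cpu_info *get_cpu_info(void)
    {
    #if defined(__clang__)
        unsigned long sp;

        __asm__ ("mov %0, sp" : "=r" (sp));     /* read sp explicitly */
    #else
        register unsigned long sp asm ("sp");   /* keep the GCCism by default */
    #endif

        return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) +
                                   STACK_SIZE - sizeof(struct cpu_info));
    }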
Ian Jackson [Wed, 2 Oct 2019 15:55:47 +0000 (16:55 +0100)]
libxl: create: style: Add a pair of missing { }
From CODING_STYLE:
Every indented statement is braced, but blocks that contain just one
statement may have the braces omitted. To avoid confusion, either all
the blocks in an if...else chain have braces, or none of them do.
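A small example of the rule (illustrative code only):

    static void step_one(void) { }
    static void step_two(void) { }
    static void fallback(void) { }

    static void example(int cond)
    {
        /* The first arm needs braces, so the single-statement else arm is
         * braced too. */
        if (cond) {
            step_one();
            step_two();
        } else {
            fallback();
        }
    }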
CC: Paul Durrant <paul.durrant@citrix.com> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
This is because libxl__domain_create_info_setdefault() currently only sets
an appropriate value for 'passthrough' in the case that 'cap_hvm_directio'
is true, which is not the case unless an IOMMU is present and enabled in
the system. This is normally masked by xl choosing a default value, but
that will not happen if xl is not used (e.g. when using libvirt) or when
a stub domain is being created.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Paul Durrant [Tue, 1 Oct 2019 14:57:13 +0000 (15:57 +0100)]
libxl: replace 'enabled' with 'unknown' in libxl_passthrough enumeration
This is mostly a cosmetic patch to avoid the default enumeration value
being 'enabled'. The only non-cosmetic parts are in xl_parse.c where it now
becomes necessary to explicitly parse the 'enabled' value for the xl.cfg
'passthrough' option, and error on the value 'unknown', because there is no
longer a direct mapping between valid xl.cfg values and the enumeration.
Suggested-by: Ian Jackson <ian.jackson@eu.citrix.com> Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Roger Pau Monne [Tue, 1 Oct 2019 15:22:33 +0000 (17:22 +0200)]
libxl: wait for the ack when issuing power control requests
Currently only suspend power control requests wait for an ack from the
domain, while power off or reboot requests simply write the command to
xenstore and exit.
Introduce a 1 minute wait for the domain to acknowledge the request, or
else return an error. The suspend code is slightly modified to use the
new infrastructure added, but shouldn't have any functional change.
Fix the ocaml bindings and also provide a backwards compatible
interface for the reboot and poweroff libxl API functions.
Reported-by: Ross Lagerwall <ross.lagerwall@citrix.com> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com> Acked-by: Christian Lindig <christian.lindig@citrix.com>
[ wei: change ret to rc to fix build ] Signed-off-by: Wei Liu <wl@xen.org>
Jan Beulich [Wed, 2 Oct 2019 11:38:02 +0000 (13:38 +0200)]
tools/xen-cpuid: avoid producing bogus output
I was (mistakenly, as - looking at the code - it's clearly not intended
to work) passing the tool "Raw" and "Host" as command line arguments.
Avoid printing just "Raw " with not even a newline at the end in
such a case. Instead report what wasn't understood by the parsing logic.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Wed, 2 Oct 2019 11:37:43 +0000 (13:37 +0200)]
MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
Along the lines of other x86-specific pieces under tools/.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Wei Liu <wl@xen.org> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Sergey Dyasli [Wed, 2 Oct 2019 11:35:44 +0000 (13:35 +0200)]
microcode: rendezvous CPUs in NMI handler and load ucode
When one core is loading ucode, handling NMI on sibling threads or
on other cores in the system might be problematic. By rendezvousing
all CPUs in NMI handler, it prevents NMI acceptance during ucode
loading.
Basically, some work previously done in stop_machine context is
moved to NMI handler. Primary threads call in and load ucode in
NMI handler. Secondary threads wait for the completion of ucode
loading on all CPU cores. An option is introduced to disable this
behavior.
The control thread doesn't rendezvous in the NMI handler by calling self_nmi()
(in case of unknown_nmi_error() being triggered). The side effect is that the
control thread might be handling an NMI while other threads are loading
ucode. If a ucode update changes something shared by a whole socket, the
control thread may be accessing things that are being updated by the
ucode loading on other cores. That is not safe. Update ucode on the
control thread first to mitigate this issue.
Igor Druzhinin [Tue, 1 Oct 2019 19:15:57 +0000 (20:15 +0100)]
x86/crash: force unlock console before printing on kexec crash
There is a small window where a shootdown NMI might come to a CPU
(e.g. in the serial interrupt handler) while the console lock is taken. In
order not to leave subsequent console prints waiting indefinitely for
shot-down CPUs to free the lock - force unlock the console.
The race has been frequently observed while crashing nested Xen in
an HVM domain.
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
xen/arm: Implement workaround for Cortex A-57 and Cortex A72 AT speculate
Both Cortex-A57 (erratum 1319537) and Cortex-A72 (erratum 1319367) can
end with corrupted TLBs if they speculate an AT instruction while S1/S2
system registers are in an inconsistent state.
The workaround is the same as for Cortex A-76 implemented by commit a18be06aca "xen/arm: Implement workaround for Cortex-A76 erratum 1165522",
so it is only necessary to plumb in the cpuerrata framework.
Julien Grall [Wed, 21 Aug 2019 21:42:31 +0000 (22:42 +0100)]
xen/arm: domain_build: Don't continue if unable to allocate all dom0 banks
Xen will only print a warning if there are memory unallocated when using
1:1 mapping (only used by dom0). This also includes the case where no
memory has been allocated.
This can lead to all sorts of issues that can be hard to diagnose for
users (the warning can be difficult to spot or may be disregarded).
If the users request 1GB of memory, then most likely they want the exact
amount and not 512MB. So panic if all the memory has not been allocated.
After this change, the behavior is the same as for non-1:1 memory
allocation (used by domU).
At the same time, reflow the message to have the format on a single
line.
Julien Grall [Fri, 9 Aug 2019 12:59:15 +0000 (13:59 +0100)]
xen/arm: p2m: Free the p2m entry after flushing the IOMMU TLBs
When freeing a p2m entry, all the sub-tree behind it will also be freed.
This may include intermediate page-tables or any l3 entry requiring to
drop a reference (e.g for foreign pages). As soon as pages are freed,
they may be re-used by Xen or another domain. Therefore it is necessary
to flush *all* the TLBs beforehand.
While CPU TLBs will be flushed before freeing the pages, this is not
the case for IOMMU TLBs. This can be solved by moving the IOMMU TLBs
flush earlier in the code.
This wasn't considered as a security issue as device passthrough on Arm
is not security supported.
xen/arm: domain_build: Avoid implicit conversion from ULL to UL
Clang 8.0 will fail to build domain_build.c on Arm32 because of the
following error:
domain_build.c:448:21: error: implicit conversion from 'unsigned long long' to 'unsigned long' changes value from 1090921693184 to 0
[-Werror,-Wconstant-conversion]
bank_size = MIN(GUEST_RAM1_SIZE, kinfo->unassigned_mem);
Arm32 is able to support more than 4GB of physical memory, so it would
be theoretically possible to create a domain with more than 4GB of RAM.
Therefore, the size of a bank may not fit in 32-bit.
This can be resolved by switching the variable bank_size and the parameter
tot_size to "paddr_t".
GAS 2.25.0 throws multiple errors when building arm32/head.S:
arm32/head.S: Assembler messages:
arm32/head.S:452: Error: invalid constant (f7f) after fixup
arm32/head.S:453: Error: invalid constant (f7f) after fixup
arm32/head.S:495: Error: invalid constant (f7f) after fixup
arm32/head.S:510: Error: invalid constant (f7f) after fixup
arm32/head.S:514: Error: invalid constant (f7f) after fixup
arm32/head.S:516: Error: invalid constant (f7f) after fixup
arm32/head.S:633: Error: invalid constant (f7f) after fixup
This makes sense because the instruction mov is only able to deal with a
specific set of immediates (see "modified immediate constants in ARM
instructions"). For any 16-bit immediate, the instruction movw should be
used.
It looks like newer versions of GAS will switch to movw if the
immediate does not fit in the immediate encoding for mov. But we should
not rely on this. So switch to movw.
Fixes: 23dfe48d10 ("xen/arm32: head: Introduce macros to create table and mapping entry") Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Jan Beulich [Mon, 30 Sep 2019 13:46:24 +0000 (15:46 +0200)]
x86: correct bogus error indicator of cpu_add()
Commit 54ce2db8b8 ("x86/numa: adjust datatypes for node and pxm")
changed this from the -1 (i.e. -EPERM, which was already bogus) that
comes back from setup_node() to NUMA_NO_NODE (0xff). Use a proper error
indicator instead.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Mon, 30 Sep 2019 13:45:16 +0000 (15:45 +0200)]
x86emul: move ARPL #UD check
The #UD for being outside of protected mode gets raised for ARPL only
after having read the memory operand - correct this by moving up the
respective construct.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Release-acked-by: Juergen Gross <jgross@suse.com>
Jan Beulich [Tue, 3 Sep 2019 13:58:08 +0000 (15:58 +0200)]
ns16550: make PCI device hiding uniform
The difference between pci_hide_device() and pci_ro_device() is that
the former only prevents a device from getting assigned to a guest,
while the latter additionally arranges for Dom0 write attempts to the
device's config space to be ignored/discarded. Whether we want one or
the other certainly doesn't depend on whether the device is in our set
of known devices. All that matters is whether we use a PCI device: Call
pci_ro_device() in any such case.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com> Acked-by: Julien Grall <julien.grall@arm.com>
xen/sched: move struct task_slice into struct sched_unit
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the single
schedulers add the pointer of the currently running unit as a parameter
of do_schedule().
While at it switch the tasklet_work_scheduled parameter of
do_schedule() from bool_t to bool.
As struct task_slice is only ever modified with the local schedule
lock held it is safe to directly set the different units in struct
sched_unit instead of using an on-stack copy for returning the data.
xen/sched: Change vcpu_migrate_*() to operate on schedule unit
vcpu_migrate_start() and vcpu_migrate_finish() are used only to ensure
a vcpu is running on a suitable processor, so they can be switched to
operate on schedule units instead of vcpus.
While doing that rename them accordingly.
Call vcpu_sync_execstate() for each vcpu of the unit when changing
processors in order to make that an explicit action (otherwise this
would happen later when either the vcpu is scheduled on the new
processor or another non-idle vcpu is scheduled on the old processor).
vcpu_move_locked() is switched to schedule unit, too.