Commit e9aca9470ed86 introduced a regression when avoiding sending
IPIs for certain flush operations. Xen's page fault handler
(spurious_page_fault) relies on blocking interrupts in order to
prevent the handling of TLB flush IPIs, and thereby prevents other
CPUs from removing page-table pages. Switching to assisted flushing
avoided such IPIs, and thus can result in pages belonging to the page
tables being removed (and possibly re-used) while __page_fault_type is
being executed.
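
For reference, the handler in question (abridged from
xen/arch/x86/traps.c; a sketch, exact code may differ between
versions) uses the interrupts-off window as the barrier:

    static enum pf_type spurious_page_fault(unsigned long addr,
                                            const struct cpu_user_regs *regs)
    {
        unsigned long flags;
        enum pf_type pf_type;

        /*
         * Disabling interrupts prevents TLB flush IPIs from being handled,
         * and hence prevents page-table pages from being freed while the
         * page walk below is in progress.
         */
        local_irq_save(flags);
        pf_type = __page_fault_type(addr, regs);
        local_irq_restore(flags);

        return pf_type;
    }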
Force some of the TLB flushes to use IPIs, thus avoiding the assisted
TLB flush. The selected flushes are the page type change (when
switching from a page table type to a different one, i.e. a page that
has been removed as a page table) and page allocation. This sadly has
a negative performance impact on the pvshim, as fewer assisted flushes
can be used. Note that the flush in the grant-table code is also
switched to use an IPI even when not strictly needed. This is done so
that a common arch_flush_tlb_mask can be introduced and always used in
common code.
Introduce a new flag (FLUSH_FORCE_IPI) and a helper to force a TLB
flush using an IPI (x86 only). Note that the flag is only meaningfully
defined when the hypervisor supports PV or shadow paging mode, as
otherwise hardware-assisted paging domains are in charge of their own
page tables and won't share page tables with Xen, thus not influencing
the result of page walks performed by the spurious fault handler.
Simply passing this new flag when calling flush_area_mask prevents the
use of the assisted flush without any other side effects.
Note the flag is not defined on Arm.
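
To illustrate why this is sufficient, here is a condensed sketch of
the relevant check in flush_area_mask (based on xen/arch/x86/smp.c;
the exact flag whitelist and helper names may differ between
versions). The assisted flush is only attempted when no flags outside
a known-safe set are present, so FLUSH_FORCE_IPI falls through to the
IPI path and is otherwise ignored:

    void flush_area_mask(const cpumask_t *mask, const void *va,
                         unsigned int flags)
    {
        ...
        /*
         * FLUSH_FORCE_IPI is deliberately absent from the whitelist
         * below, so setting it makes this condition false and forces
         * the code to fall through to sending TLB flush IPIs.
         */
        if ( cpu_has_hypervisor &&
             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
                         FLUSH_ORDER_MASK)) &&
             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
            return;

        /* ... otherwise send INVALIDATE_TLB_VECTOR IPIs to mask ... */
    }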
Fixes: e9aca9470ed86 ('x86/tlb: use Xen L0 assisted TLB flush when available')
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Release-acked-by: Paul Durrant <paul@xen.org>
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ ... @@
 #include <asm/gic.h>
 #include <asm/flushtlb.h>
 
-void flush_tlb_mask(const cpumask_t *mask)
+void arch_flush_tlb_mask(const cpumask_t *mask)
 {
     /* No need to IPI other processors on ARM, the processor takes care of it. */
     flush_all_guests_tlb();
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ ... @@
                  ((nx & PGT_type_mask) == PGT_writable_page)) )
             {
                 perfc_incr(need_flush_tlb_flush);
-                flush_tlb_mask(mask);
+                /*
+                 * If page was a page table make sure the flush is
+                 * performed using an IPI in order to avoid changing the
+                 * type of a page table page under the feet of
+                 * spurious_page_fault().
+                 */
+                flush_mask(mask,
+                           (x & PGT_type_mask) &&
+                           (x & PGT_type_mask) <= PGT_root_page_table
+                           ? FLUSH_TLB | FLUSH_FORCE_IPI
+                           : FLUSH_TLB);
             }
 
             /* We lose existing type and validity. */
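
The type check added above relies on the page-table types occupying a
contiguous range at the bottom of the type space, so a non-zero old
type that does not exceed PGT_root_page_table identifies a page that
was in use as a page table (abridged from xen/include/asm-x86/mm.h):

    #define PGT_none            PG_mask(0, 4) /* no special uses of this page */
    #define PGT_l1_page_table   PG_mask(1, 4) /* using as an L1 page table?   */
    #define PGT_l2_page_table   PG_mask(2, 4) /* using as an L2 page table?   */
    #define PGT_l3_page_table   PG_mask(3, 4) /* using as an L3 page table?   */
    #define PGT_l4_page_table   PG_mask(4, 4) /* using as an L4 page table?   */
    #define PGT_root_page_table PGT_l4_page_table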
--- a/xen/include/asm-x86/grant_table.h
+++ b/xen/include/asm-x86/grant_table.h
@@ ... @@
 static inline void gnttab_flush_tlb(const struct domain *d)
 {
     if ( !paging_mode_external(d) )
-        flush_tlb_mask(d->dirty_cpumask);
+        arch_flush_tlb_mask(d->dirty_cpumask);
 }
 
 static inline unsigned int
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ ... @@
 #endif
 
 /* Flush specified CPUs' TLBs */
-void flush_tlb_mask(const cpumask_t *mask);
+void arch_flush_tlb_mask(const cpumask_t *mask);
 
 /*
  * Flush a range of VA's hypervisor mappings from the TLB of the local
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ ... @@
 #else
 #define FLUSH_HVM_ASID_CORE 0
 #endif
+#if defined(CONFIG_PV) || defined(CONFIG_SHADOW_PAGING)
+/*
+ * Force an IPI to be sent. Note that adding this to the flags passed to
+ * flush_area_mask will prevent using the assisted flush without having any
+ * other side effect.
+ */
+# define FLUSH_FORCE_IPI 0x8000
+#else
+# define FLUSH_FORCE_IPI 0
+#endif
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
#define flush_tlb_one_mask(mask,v) \
flush_area_mask(mask, (const void *)(v), FLUSH_TLB|FLUSH_ORDER(0))
+/*
+ * Make the common code TLB flush helper force use of an IPI in order to be
+ * on the safe side. Note that not all calls from common code strictly require
+ * this.
+ */
+#define arch_flush_tlb_mask(mask) flush_mask(mask, FLUSH_TLB | FLUSH_FORCE_IPI)
+
/* Flush all CPUs' TLBs */
#define flush_tlb_all() \
flush_tlb_mask(&cpu_online_map)
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ ... @@
     if ( !cpumask_empty(&mask) )
     {
         perfc_incr(need_flush_tlb_flush);
-        flush_tlb_mask(&mask);
+        arch_flush_tlb_mask(&mask);
     }
 }