All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cachable.
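
For background, the 'ipat' (ignore PAT) bit in an EPT entry makes the EMT
field alone determine the effective memory type; with it clear, the guest's
own type is combined with it and an uncachable type on either side wins. A
rough illustration (the helper name is hypothetical and the combination is
simplified to the cases relevant here):

  #include <stdbool.h>
  #include <stdint.h>

  /* Architectural memory-type encodings (same values as Xen's MTRR_TYPE_*). */
  #define MTRR_TYPE_UNCACHABLE 0
  #define MTRR_TYPE_WRBACK     6

  /*
   * Hypothetical helper, for illustration only: how the effective memory
   * type of a guest access is derived from an EPT entry.  With 'ipat' set
   * the EMT field is used as-is; with it clear the guest's type is
   * combined with it and (simplifying) uncachable on either side wins.
   */
  static uint8_t effective_mem_type(uint8_t emt, bool ipat, uint8_t guest_type)
  {
      if ( ipat )
          return emt;

      if ( emt == MTRR_TYPE_UNCACHABLE || guest_type == MTRR_TYPE_UNCACHABLE )
          return MTRR_TYPE_UNCACHABLE;

      return guest_type;
  }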
However, should IOMMU mappings be enabled for the guest then these
accesses become uncachable. This has a substantial negative effect on I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages but it is currently commonplace for
them to do this and so this problem needs mitigation. Hence this patch makes
sure the 'ipat' bit is set for any special page regardless of where in GFN
space it is mapped.
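
To make the effect concrete, the following is a much simplified sketch of
the decision being added, not the real Xen function: entry_emt_sketch(),
iommu_enabled, is_special_frame() and guest_mtrr_type() are made-up names
standing in for the actual domain state, is_special_page(mfn_to_page(...))
and the guest MTRR lookup.

  #include <stdbool.h>
  #include <stdint.h>

  #define MTRR_TYPE_UNCACHABLE 0
  #define MTRR_TYPE_WRBACK     6

  /* Simplified sketch only; see the hunk below for the real change. */
  static int entry_emt_sketch(unsigned long gfn, unsigned long mfn,
                              unsigned int order, bool iommu_enabled,
                              uint8_t *ipat,
                              bool (*is_special_frame)(unsigned long),
                              uint8_t (*guest_mtrr_type)(unsigned long))
  {
      unsigned long i;

      *ipat = 0;

      /* New: a special page is always mapped cachable, wherever it lives. */
      for ( i = 0; i < (1ul << order); i++ )
          if ( is_special_frame(mfn + i) )
          {
              if ( order )
                  return -1; /* have the caller fall back to 4k mappings */
              *ipat = 1;
              return MTRR_TYPE_WRBACK;
          }

      /* Pre-existing behaviour (simplified): without IOMMU mappings the
       * guest-visible type does not matter, so force write-back... */
      if ( !iommu_enabled )
      {
          *ipat = 1;
          return MTRR_TYPE_WRBACK;
      }

      /* ...but with an IOMMU the guest's MTRR type is honoured, and for a
       * PCI BAR that is typically UC - hence the uncachable accesses. */
      return guest_mtrr_type(gfn);
  }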
NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
that there is any similar mitigation possible for AMD NPT. Downstreams
such as Citrix XenServer have been carrying a patch similar to this for
several releases though.
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
+    unsigned long i;

     *ipat = 0;

         return MTRR_TYPE_WRBACK;
     }

+    for ( i = 0; i < (1ul << order); i++ )
+    {
+        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
+        {
+            if ( order )
+                return -1;
+
+            *ipat = 1;
+            return MTRR_TYPE_WRBACK;
+        }
+    }
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {