Initially I had just noticed the unnecessary indirection in the call
from pi_update_irte(). The generic wrapper having an iommu_intremap
conditional made me look at the setup code, though. So first of all
enforce the necessary dependency: interrupt posting can't be enabled
without interrupt remapping.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

spin_unlock_irq(&desc->lock);
ASSERT(pcidevs_locked());
- return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);
+
+ return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
unlock_out:
spin_unlock_irq(&desc->lock);
* not supported, since we count on this feature to
* atomically update 16-byte IRTE in posted format.
*/
- if ( !cap_intr_post(iommu->cap) || !cpu_has_cx16 )
+ if ( !cap_intr_post(iommu->cap) || !iommu_intremap || !cpu_has_cx16 )
iommu_intpost = 0;
if ( !vtd_ept_page_compatible(iommu) )