When unmask_evtchn is called and an event is already pending, we just
set evtchn_pending_sel and wait for the irq to be re-enabled. That
works because x86 PV guests override the irq_enable pvop with
xen_irq_enable_direct, which also handles pending events.
However, x86 HVM guests and ARM guests either do not change the
irq_enable pvop or do not have it at all, so this scheme doesn't work
properly for them.
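For reference, the existing fast path in unmask_evtchn looks roughly
like the sketch below (simplified, not a verbatim copy; helper and
field names as in drivers/xen/events.c):

	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);

	sync_clear_bit(port, &s->evtchn_mask[0]);

	/* If an event is already pending, record it in
	 * evtchn_pending_sel and flag the upcall; on x86 PV the upcall
	 * is then delivered by xen_irq_enable_direct when irqs are
	 * re-enabled, which is exactly the step that HVM and ARM
	 * guests are missing. */
	if (sync_test_bit(port, &s->evtchn_pending[0]) &&
	    !sync_test_and_set_bit(port / BITS_PER_LONG,
				   &vcpu_info->evtchn_pending_sel))
		vcpu_info->evtchn_upcall_pending = 1;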
Considering that having a pending irq when unmask_evtchn is called is
not very common, and that it is better to keep the native_irq_enable
implementation for HVM and ARM guests, the best thing to do is simply
to use the EVTCHNOP_unmask hypercall (Xen re-injects pending events in
response).
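For HVM and ARM guests the unmask therefore goes through the
hypercall, along these lines (sketch of the slow path that the hunk
below extends to cover the pending-event case):

	struct evtchn_unmask unmask = { .port = port };

	/* Xen clears the mask bit and re-injects any event pending on
	 * this port, so no local evtchn_pending_sel bookkeeping is
	 * needed. */
	(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);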
Since this patch fixes a bug on x86 for current PV on HVM guests, I'll
resend it separately.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
 	BUG_ON(!irqs_disabled());
-	/* Slow path (hypercall) if this is a non-local port. */
-	if (unlikely(cpu != cpu_from_evtchn(port))) {
+	/* Slow path (hypercall) if this is a non-local port or if this is
+	 * an hvm domain and an event is pending (hvm domains don't have
+	 * their own implementation of irq_enable). */
+	if (unlikely((cpu != cpu_from_evtchn(port)) ||
+		     (xen_hvm_domain() && sync_test_bit(port, &s->evtchn_pending[0])))) {
 		struct evtchn_unmask unmask = { .port = port };
 		(void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
 	} else {