If a VM has all its PCI devices deassigned, need_iommu(d) becomes
false, but it might still have DPCI EOI timers that were init_timer()'d
but not yet kill_timer()'d. That causes Xen to crash later because the
linked list of inactive timers gets corrupted, e.g.:
(XEN) Xen call trace:
(XEN)    [<ffff82c480126256>] set_timer+0x1c2/0x24f
(XEN)    [<ffff82c48011fbf8>] schedule+0x129/0x5dd
(XEN)    [<ffff82c480122c1e>] __do_softirq+0x7e/0x89
(XEN)    [<ffff82c480122c9d>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480153c85>] idle_loop+0x5a/0x5c
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'entry->next->prev == entry' failed at /local/scratch/tdeegan/xen-unstable.hg/xen/include:172
(XEN) ****************************************
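
To make the failure mode concrete, here is a minimal sketch of the timer
lifecycle rule that assertion is guarding. It is illustrative only and not
part of the patch: init_timer(), set_timer() and kill_timer() are the Xen
timer API, but the eoi_timeout handler and the arm/teardown helpers are
made-up stand-ins for the DPCI code.

#include <xen/timer.h>
#include <xen/time.h>

/* Stand-in for the per-IRQ DPCI state that embeds an EOI timer. */
static struct timer eoi_timer;

static void eoi_timeout(void *data)
{
    /* handler body omitted */
}

static void arm_eoi_timer(void)
{
    /* init_timer() puts the timer on the per-CPU list of inactive timers... */
    init_timer(&eoi_timer, eoi_timeout, NULL, 0 /* cpu */);
    /* ...and set_timer() activates it. */
    set_timer(&eoi_timer, NOW() + MILLISECS(1));
}

static void teardown_eoi_timer(void)
{
    /*
     * kill_timer() unlinks the timer from whichever list it is on.
     * If a destruction path skips this (e.g. because need_iommu(d) is
     * already false) and then frees the containing structure, the
     * inactive-timer list is left with a dangling entry -- which is
     * what trips the 'entry->next->prev == entry' assertion above.
     */
    kill_timer(&eoi_timer);
}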
The following patch makes sure that the domain destruction path always
cleans up the DPCI state, even if !need_iommu(d).
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
xen-unstable changeset: 23746:aa54b8175954
xen-unstable date: Mon Jul 25 16:41:33 2011 +0100
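
For orientation, since the hunk below stops right before the interesting
part, here is a hedged sketch of the overall shape of the cleanup path once
the early return is removed: only the bail-out on !need_iommu(d) goes away,
and the existing work under the event lock (where the EOI timers get
kill_timer()'d) now always runs. The function name and the body of the if
block are assumptions; only the lines also visible in the hunk are taken
from the patch.

/* Hedged sketch -- names not visible in the hunk below are assumptions. */
static void clean_domain_dpci_state(struct domain *d)  /* hypothetical name */
{
    struct hvm_irq_dpci *hvm_irq_dpci;

    if ( !iommu_enabled )
        return;

    /* No early return on !need_iommu(d): EOI timers may still be live. */

    spin_lock(&d->event_lock);
    hvm_irq_dpci = domain_get_irq_dpci(d);
    if ( hvm_irq_dpci != NULL )
    {
        /*
         * Walk the domain's bound pirqs and kill_timer() each DPCI EOI
         * timer before freeing the state (details omitted); this is the
         * work that must still happen even when need_iommu(d) is false.
         */
    }
    spin_unlock(&d->event_lock);
}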
if ( !iommu_enabled )
return;
- if ( !need_iommu(d) )
- return;
-
spin_lock(&d->event_lock);
hvm_irq_dpci = domain_get_irq_dpci(d);
if ( hvm_irq_dpci != NULL )