When __p2m_get_mem_access is called, the p2m lock has already been
taken by either get_page_from_gva or p2m_get_mem_access.
Possible code paths:

1) -> get_page_from_gva
      -> p2m_mem_access_check_and_get_page
         -> __p2m_get_mem_access

2) -> p2m_get_mem_access
      -> __p2m_get_mem_access
In both cases, if __p2m_get_mem_access subsequently calls p2m_lookup
(which happens when radix_tree_lookup(...) finds no entry), the
hypervisor hangs, since p2m_lookup also spin-locks on the p2m lock,
which is not recursive.
Fix this by replacing the p2m_lookup call in __p2m_get_mem_access
with a call to __p2m_lookup, which does not take the lock itself.
Following Ian's suggestion, we also add an ASSERT to ensure that
the p2m lock is taken upon __p2m_get_mem_access entry.
Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
#undef ACCESS
};
+ ASSERT(spin_is_locked(&p2m->lock));
+
/* If no setting was ever set, just return rwx. */
if ( !p2m->mem_access_enabled )
{
/*
 * No setting was found in the Radix tree. Check if the
 * entry exists in the page-tables.
 */
- paddr_t maddr = p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+ paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
if ( INVALID_PADDR == maddr )
return -ESRCH;