Commit
7d623b358a4 "arm/mem_access: Add long-descriptor based gpt"
assumed the read-write lock can be taken recursively. However, this
assumption is wrong and will lead to deadlock when the lock is
contended.
The read lock is taken recursively in the following case:
1) get_page_from_gva
=> Take the read lock (first read lock)
=> Call p2m_mem_access_check_and_get_page on failure when
memaccess is enabled
2) p2m_mem_access_check_and_get_page
=> If the hardware translation fails, fall back to a software lookup
=> Call guest_walk_tables
3) guest_walk_tables
=> Will use access_guest_memory_by_ipa to access the stage-1 page-tables
4) access_guest_memory_by_ipa
=> Because Arm does not provide a hardware instruction to perform only a
stage-2 translation, this is done in software.
=> Take the read lock (second read lock)
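The deadlock needs a queued writer: with a non-recursive read-write lock, a
writer that arrives between the two read acquisitions blocks new readers,
while the original reader still holds the first read lock the writer is
waiting for. The following is a minimal user-space sketch of that
interleaving; it is not Xen code, and it uses a glibc writer-preferring
rwlock (PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP) purely to model the
contended, non-recursive behaviour of the p2m lock.

/* Sketch only: recursive read lock vs. a queued writer. */
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_rwlock_t lock;

static void *writer(void *arg)
{
    /* Queued writer: blocks behind the first read lock and, because the
     * lock is writer-preferring, also blocks any *new* readers. */
    pthread_rwlock_wrlock(&lock);
    pthread_rwlock_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_rwlockattr_t attr;
    pthread_t tid;
    struct timespec deadline;

    pthread_rwlockattr_init(&attr);
    pthread_rwlockattr_setkind_np(&attr,
                                  PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    pthread_rwlock_init(&lock, &attr);

    pthread_rwlock_rdlock(&lock);        /* 1) first read lock */

    pthread_create(&tid, NULL, writer, NULL);
    sleep(1);                            /* let the writer queue up */

    /* 4) second, recursive read lock. With a writer pending this never
     * succeeds: the reader waits for the writer, the writer waits for the
     * first read lock we still hold. */
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;
    if ( pthread_rwlock_timedrdlock(&lock, &deadline) == ETIMEDOUT )
        printf("recursive rdlock timed out: deadlock under contention\n");
    else
        pthread_rwlock_unlock(&lock);

    pthread_rwlock_unlock(&lock);
    pthread_join(tid, NULL);
    return 0;
}

Built with gcc -pthread, the second read lock times out rather than hanging
forever, which is the same circular wait the nested p2m read lock creates.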
To avoid the nested lock, rework the locking in get_page_from_gva and
p2m_mem_access_check_and_get_page. The latter will now be called without
the p2m lock held. The new locking in p2m_mem_access_check_and_get_page
will not cover the translation of the VA to an IPA.
This is fine because we can't promise that the stage-1 page-tables have
not been modified behind our back (they are under guest control).
Modifications of the stage-2 page-tables can now happen concurrently, but
I can't see any potential issue here except with the break-before-make
sequence used when updating the page-tables: gva_to_ipa may fail if the
sequence is executed at the same time on another CPU. In that case we
would fall back to the software lookup path.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Sergej Proskurin <proskurin@sec.in.tum.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
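For reference, the shape of the reworked locking can be modelled entirely in
user space. The sketch below is not the Xen implementation; the functions
are stand-ins that only mirror where the read lock is taken and released:
the fast path drops the lock before the fallback, and the fallback holds it
only around the stage-2 lookup, so the stage-1 software walk (which takes
the lock internally, as access_guest_memory_by_ipa does) no longer nests.

/* Sketch only: the locking pattern, with illustrative stand-in helpers. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t p2m_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Models access_guest_memory_by_ipa: the software stage-1 walk needs
 * stage-2 translations, so it takes the read lock internally. */
static bool stage1_software_walk(unsigned long va, unsigned long *ipa)
{
    pthread_rwlock_rdlock(&p2m_lock);
    *ipa = va;                          /* pretend the walk succeeded */
    pthread_rwlock_unlock(&p2m_lock);
    return true;
}

static bool hardware_translation(unsigned long va)
{
    (void)va;
    return false;                       /* force the fallback path */
}

static bool stage2_lookup(unsigned long ipa)
{
    (void)ipa;
    return true;
}

/* Models p2m_mem_access_check_and_get_page after the rework: called
 * without the lock; only the stage-2 lookup runs under it. */
static bool check_and_get_page(unsigned long va)
{
    unsigned long ipa;
    bool ok;

    if ( !stage1_software_walk(va, &ipa) )
        return false;

    pthread_rwlock_rdlock(&p2m_lock);
    ok = stage2_lookup(ipa);
    pthread_rwlock_unlock(&p2m_lock);

    return ok;
}

/* Models get_page_from_gva: the lock is dropped before the fallback. */
static bool get_page_from_gva(unsigned long va)
{
    bool ok;

    pthread_rwlock_rdlock(&p2m_lock);
    ok = hardware_translation(va);
    pthread_rwlock_unlock(&p2m_lock);   /* drop the lock *before* the fallback */

    if ( !ok )
        ok = check_and_get_page(va);    /* takes the lock itself, no nesting */

    return ok;
}

int main(void)
{
    printf("lookup %s\n", get_page_from_gva(0x1000) ? "succeeded" : "failed");
    return 0;
}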
* is not mapped.
*/
if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
- goto err;
+ return NULL;
/*
* Check the permissions that are assumed by the caller. For instance, a
* write access requires the page to be writable; the test for execute
* permissions can be left out here.
*/
if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
- goto err;
+ return NULL;
}
gfn = gaddr_to_gfn(ipa);
+ p2m_read_lock(p2m);
+
/*
* We do this first as this is faster in the default case when no
* permission is set on the page.
page = NULL;
err:
+ p2m_read_unlock(p2m);
+
return page;
}
}
err:
+ p2m_read_unlock(p2m);
+
if ( !page && p2m->mem_access_enabled )
page = p2m_mem_access_check_and_get_page(va, flags, v);
- p2m_read_unlock(p2m);
-
return page;
}
struct p2m_domain {
/*
* Lock that protects updates to the p2m.
- *
- * Please note that we use this lock in a nested way by calling
- * access_guest_memory_by_ipa in guest_walk_(sd|ld). This must be
- * considered in the future implementation.
*/
rwlock_t lock;