When using tmem with Xen 4.3 (and a debug build) we end up with:
(XEN) Xen BUG at domain_page.c:143
(XEN) ----[ Xen-4.3-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU: 3
(XEN) RIP: e008:[<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
..
(XEN) Xen call trace:
(XEN)    [<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
(XEN)    [<ffff82c4c01373de>] cli_get_page+0x15e/0x17b
(XEN)    [<ffff82c4c01377c4>] tmh_copy_from_client+0x150/0x284
(XEN)    [<ffff82c4c0135929>] do_tmem_put+0x323/0x5c4
(XEN)    [<ffff82c4c0136510>] do_tmem_op+0x5a0/0xbd0
(XEN)    [<ffff82c4c022391b>] syscall_enter+0xeb/0x145
(XEN)
A bit of debugging revealed that map_domain_page() and unmap_domain_page()
are meant for short-lived mappings, and that the number of such mappings is
finite. In a 2-VCPU guest we only have 32 entries, and once we have
exhausted those we trigger the BUG_ON condition.
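
For illustration, a standalone model of the exhaustion (MODEL_SLOTS and
map_slot() are stand-ins invented for this sketch; the real logic lives in
domain_page.c):

    /* Standalone model, not the real domain_page.c code: each map
     * without a matching unmap pins one of the 32 per-domain slots,
     * so the 33rd outstanding mapping trips the BUG_ON equivalent. */
    #include <assert.h>
    #include <stdio.h>

    #define MODEL_SLOTS 32                /* the 2-VCPU guest limit above */

    static int in_use[MODEL_SLOTS];

    static int map_slot(void)             /* models map_domain_page() */
    {
        for (int i = 0; i < MODEL_SLOTS; i++)
            if (!in_use[i]) { in_use[i] = 1; return i; }
        assert(!"mapcache exhausted");    /* models the BUG at domain_page.c:143 */
        return -1;
    }

    int main(void)
    {
        /* A long-lived pool that maps pages and never unmaps them: */
        for (int i = 0; i < 33; i++)
            printf("mapping %d -> slot %d\n", i, map_slot());
        return 0;
    }

A short-lived user would pair each map_slot() with a release; the persistent
pool never does, which is why it must not use the mapcache at all.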
The two functions - tmh_persistent_pool_page_[get,put] - are used by the
xmem_pool when xmem_pool_[alloc,free] are called. These xmem_pool_*
functions are wrapped in macros and functions; the entry points are
tmem_malloc() and tmem_page_alloc(). In both cases the users are in the
hypervisor and they do not seem to suffer from using hypervisor virtual
addresses.
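
The call chain, roughly (only names mentioned above; the arrows are a
simplification, not exact signatures):

    tmem_malloc() / tmem_page_alloc()
      -> xmem_pool_alloc() / xmem_pool_free()
        -> tmh_persistent_pool_page_get() / tmh_persistent_pool_page_put()

Since every caller is hypervisor code, the get/put pair can hand out the
page's permanent hypervisor virtual address rather than a short-lived
mapping, which is what the hunk below does.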
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
     if ( (pi = _tmh_alloc_page_thispool(d)) == NULL )
         return NULL;
     ASSERT(IS_VALID_PAGE(pi));
-    return __map_domain_page(pi);
+    return page_to_virt(pi);
 }

 static void tmh_persistent_pool_page_put(void *page_va)
 {
     struct page_info *pi;

     ASSERT(IS_PAGE_ALIGNED(page_va));
-    pi = mfn_to_page(domain_page_map_to_mfn(page_va));
-    unmap_domain_page(page_va);
+    pi = mfn_to_page(virt_to_mfn(page_va));
     ASSERT(IS_VALID_PAGE(pi));
     _tmh_free_page_thispool(pi);
 }