vmap_to_mfn() uses virt_to_maddr(), which is designed to work only with VAs
from either the direct map region or Xen's linkage region (XEN_VIRT_START).
An assertion triggers if it is handed a VA from any other region, in
particular from the VMAP region.
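
The shape of the translation shows why: it is purely arithmetic, so only
regions with a fixed VA-to-PA offset can be handled. A minimal sketch, not
the actual implementation (directmap_base and xen_load_start are hypothetical
placeholders for the real offset handling; the region constants are the usual
RISC-V ones):

    static inline paddr_t virt_to_maddr_sketch(vaddr_t va)
    {
        /* Direct map region: VA -> PA is a fixed arithmetic offset. */
        if ( va >= DIRECTMAP_VIRT_START &&
             (va - DIRECTMAP_VIRT_START) < DIRECTMAP_SIZE )
            return directmap_base + (va - DIRECTMAP_VIRT_START);

        /*
         * Anything else is assumed to sit in Xen's linkage region;
         * a VMAP VA fails this assertion.
         */
        ASSERT(va >= XEN_VIRT_START &&
               (va - XEN_VIRT_START) < XEN_VIRT_SIZE);

        return xen_load_start + (va - XEN_VIRT_START);
    }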
Since RISC-V lacks a hardware mechanism for asking the MMU to translate a VA
to a PA (as Arm has, for example), software page table walking (pt_walk()) is
used for the VMAP region to obtain the MFN from the returned pte_t.
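
For example (a hedged usage sketch; some_mfn stands for an MFN obtained
elsewhere, e.g. via page_to_mfn()):

    void *va = vmap(&some_mfn, 1);    /* VA falls inside the VMAP region */

    if ( va )
    {
        mfn_t mfn = vmap_to_mfn(va);  /* now resolved via pt_walk() */
        ASSERT(mfn_eq(mfn, some_mfn));
        vunmap(va);
    }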
To avoid introducing a circular dependency between asm/mm.h and asm/page.h
(each including the other), the static inline function _vmap_to_mfn() is
added to asm/page.h, as it uses pte_t and pte_is_mapping() from that header.
_vmap_to_mfn() is then reused in the definition of the vmap_to_mfn() macro in
asm/mm.h.
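
In simplified form (only the pieces relevant here; the real headers contain
much more), the resulting split looks like:

    /* asm/page.h: owns pte_t, pte_is_mapping(), pt_walk(), and the helper. */
    static inline mfn_t _vmap_to_mfn(vaddr_t va);

    /* asm/mm.h: forwards to the helper, so it never needs pte_t itself. */
    #define vmap_to_mfn(va) _vmap_to_mfn((vaddr_t)(va))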
Fixes: 7db8d2bd9b ("xen/riscv: add minimal stuff to mm.h to build full Xen")
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
#define gaddr_to_gfn(ga) _gfn(paddr_to_pfn(ga))
#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn))
#define maddr_to_mfn(ma) _mfn(paddr_to_pfn(ma))
-#define vmap_to_mfn(va) maddr_to_mfn(virt_to_maddr((vaddr_t)(va)))
+#define vmap_to_mfn(va) _vmap_to_mfn((vaddr_t)(va))
#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va))
static inline void *maddr_to_virt(paddr_t ma)
pte_t pt_walk(vaddr_t va, unsigned int *pte_level);
+static inline mfn_t _vmap_to_mfn(vaddr_t va)
+{
+    pte_t entry = pt_walk(va, NULL);
+
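+    /* The VA must be backed by an actual (leaf) mapping. */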
+    BUG_ON(!pte_is_mapping(entry));
+
+    return mfn_from_pte(entry);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* ASM__RISCV__PAGE_H */