From: Jason Andryuk
Date: Fri, 17 Jan 2020 15:19:16 +0000 (+0100)
Subject: x86/shadow: use single (atomic) MOV for emulated writes
X-Git-Tag: 4.14.0-rc1~772
X-Git-Url: http://xenbits.xensource.com/gitweb?a=commitdiff_plain;h=32772fbb3cf7498817304b53b087e325c6991716;p=xen.git

x86/shadow: use single (atomic) MOV for emulated writes

This is the corresponding change to the shadow code as made by
bf08a8a08a2e "x86/HVM: use single (atomic) MOV for aligned emulated
writes" to the non-shadow HVM code.

The bf08a8a08a2e commit message:

Using memcpy() may result in multiple individual byte accesses
(depending how memcpy() is implemented and how the resulting insns,
e.g. REP MOVSB, get carried out in hardware), which isn't what we
want/need for carrying out guest insns as correctly as possible.
Fall back to memcpy() only for accesses not 2, 4, or 8 bytes in size.

Signed-off-by: Jason Andryuk
Acked-by: Tim Deegan
---

diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 48dfad4557..a219266fa2 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -215,7 +215,15 @@ hvm_emulate_write(enum x86_segment seg,
         return ~PTR_ERR(ptr);
 
     paging_lock(v->domain);
-    memcpy(ptr, p_data, bytes);
+
+    /* Where possible use single (and hence generally atomic) MOV insns. */
+    switch ( bytes )
+    {
+    case 2: write_u16_atomic(ptr, *(uint16_t *)p_data); break;
+    case 4: write_u32_atomic(ptr, *(uint32_t *)p_data); break;
+    case 8: write_u64_atomic(ptr, *(uint64_t *)p_data); break;
+    default: memcpy(ptr, p_data, bytes); break;
+    }
 
     if ( tb_init_done )
         v->arch.paging.mode->shadow.trace_emul_write_val(ptr, addr,