From: Jan Beulich
Date: Tue, 12 Nov 2024 13:12:18 +0000 (+0100)
Subject: x86/HVM: properly reject "indirect" VRAM writes
X-Git-Tag: RELEASE-4.16.7~9
X-Git-Url: http://xenbits.xensource.com/gitweb?a=commitdiff_plain;h=134ec0ff63766dcac9eff17ae516c3134bcd33b7;p=xen.git

x86/HVM: properly reject "indirect" VRAM writes

While ->count will only be different from 1 for "indirect" (data in
guest memory) accesses, it being 1 does not exclude the request being
an "indirect" one. Check both to be on the safe side, and bring the
->count part also in line with what ioreq_send_buffered() actually
refuses to handle.

This is part of XSA-463 / CVE-2024-45818

Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
(cherry picked from commit eb7cd0593d88c4b967a24bca8bd30591966676cd)
---

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index b9d7b5a4d9..11f2a92d12 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -498,13 +498,13 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
 
     spin_lock(&s->lock);
 
-    if ( p->dir == IOREQ_WRITE && p->count > 1 )
+    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
     {
         /*
          * We cannot return X86EMUL_UNHANDLEABLE on anything other then the
          * first cycle of an I/O. So, since we cannot guarantee to always be
          * able to send buffered writes, we have to reject any multi-cycle
-         * I/O.
+         * or "indirect" I/O.
          */
         goto reject;
     }