While ->count will only be different from 1 for "indirect" (data in
guest memory) accesses, a count of 1 does not exclude the request being
an "indirect" one. Check both to be on the safe side, and bring the
->count part of the check in line with what ioreq_send_buffered()
actually refuses to handle.
Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
     spin_lock(&s->lock);
-    if ( p->dir == IOREQ_WRITE && p->count > 1 )
+    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
     {
         /*
          * We cannot return X86EMUL_UNHANDLEABLE on anything other then the
          * first cycle of an I/O. So, since we cannot guarantee to always be
          * able to send buffered writes, we have to reject any multi-cycle
-         * I/O.
+         * or "indirect" I/O.
          */
         goto reject;
     }
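
For illustration only (not part of the patch): a minimal, self-contained C
sketch of the case the old condition let through. The struct and field names
here merely mirror the ioreq usage visible in the hunk above and are
assumptions, not Xen's actual type definitions. A single-cycle write whose
data lives in guest memory has ->count == 1 but ->data_is_ptr set, so the old
check accepts it while the fixed check rejects it.

/* Illustrative sketch; types and names are assumptions mirroring the hunk. */
#include <stdbool.h>
#include <stdio.h>

#define IOREQ_WRITE 1

struct ioreq {
    unsigned int dir;      /* IOREQ_READ or IOREQ_WRITE */
    bool data_is_ptr;      /* data resides in guest memory ("indirect") */
    unsigned int count;    /* number of I/O cycles */
};

int main(void)
{
    /* Single-cycle "indirect" write: count == 1, yet data is a pointer. */
    struct ioreq req = { .dir = IOREQ_WRITE, .data_is_ptr = true, .count = 1 };
    const struct ioreq *p = &req;

    bool old_reject = p->dir == IOREQ_WRITE && p->count > 1;
    bool new_reject = p->dir == IOREQ_WRITE &&
                      (p->data_is_ptr || p->count != 1);

    printf("old check rejects: %d, fixed check rejects: %d\n",
           old_reject, new_reject);  /* prints 0, 1 */
    return 0;
}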