Commit 96ae556569 ("x86/HVM: fix forwarding of internally cached
requests") wasn't quite complete: hvmemul_do_io() also needs to
propagate up the clipped count. (I really should have re-tested the
forward port resulting in the earlier change, instead of relying on the
testing done on the older version of Xen which the fix was first needed
for.)
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
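---
The contract being restored here is easy to demonstrate outside the Xen
tree: a handler may legitimately complete fewer repetitions of a rep I/O
operation than were requested ("clipping"), and the completed count must
be written back through *reps so the caller can resume with the
remainder. Below is a minimal standalone sketch of that pattern - it is
not Xen code, and the names (port_handler, do_io, MAX_BATCH) are
invented for illustration; port_handler() merely stands in for the
clipping that hvm_process_io_intercept() may perform.

#include <assert.h>
#include <stdio.h>

#define MAX_BATCH 4 /* pretend the backend handles at most 4 reps at once */

/* Stand-in for a handler that may clip the repeat count it processes. */
static int port_handler(unsigned long *reps)
{
    if ( *reps > MAX_BATCH )
        *reps = MAX_BATCH;
    return 0;
}

/* Stand-in for hvmemul_do_io(): must propagate the clipped count up. */
static int do_io(unsigned long *reps)
{
    unsigned long count = *reps;
    int rc = port_handler(&count);

    assert(count <= *reps); /* a handler may shrink the count, never grow it */
    *reps = count;          /* the propagation this patch adds */
    return rc;
}

int main(void)
{
    unsigned long left = 10;

    /* Caller-side loop: advance by however many reps actually completed. */
    while ( left )
    {
        unsigned long done = left;

        if ( do_io(&done) )
            return 1;
        printf("completed %lu of %lu\n", done, left);
        left -= done;
    }
    return 0;
}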
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ ... @@ static int hvmemul_do_io(
         if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
              (p.addr != addr) ||
              (p.size != size) ||
-             (p.count != *reps) ||
+             (p.count > *reps) ||
              (p.dir != dir) ||
              (p.df != df) ||
              (p.data_is_ptr != data_is_addr) )
@@ ... @@ static int hvmemul_do_io(
         if ( data_is_addr )
             return X86EMUL_UNHANDLEABLE;
+
+        *reps = p.count;
         goto finish_access;
     default:
         return X86EMUL_UNHANDLEABLE;
@@ ... @@ static int hvmemul_do_io(
     rc = hvm_io_intercept(&p);
 
+    /*
+     * p.count may have got reduced (see hvm_process_io_intercept()) - inform
+     * our callers and mirror this into latched state.
+     */
+    ASSERT(p.count <= *reps);
+    *reps = vio->io_req.count = p.count;
+
     switch ( rc )
     {
     case X86EMUL_OKAY:
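
Two aspects of the change are worth spelling out. The re-issue sanity
check is relaxed from != to > because, with the intercept layer allowed
to clip the batch, a completed request may legitimately report fewer
repetitions than were originally issued, while a count larger than the
request would still indicate corrupted state. And on the
hvm_io_intercept() path the clipped count is written to both *reps and
vio->io_req.count, so the latched request stays consistent with what the
caller was told; the ASSERT() documents that handlers may only ever
shrink the count, never grow it.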