Ever since it was introduced in c/s
bd1f0b45ff, hvm_save_cpu_msrs() has had a
bug whereby it corrupts the HVM context stream if some, but fewer than the
maximum number of, MSRs are written.
_hvm_init_entry() creates an hvm_save_descriptor with length for
msr_count_max, but in the case that we write fewer than max, h->cur only moves
forward by the amount of space used, causing the subsequent
hvm_save_descriptor to be written within the bounds of the previous one.
To resolve this, reduce the length reported by the descriptor to match the
actual number of bytes used.
A typical failure on the destination side looks like:
(XEN) HVM4 restore: CPU_MSR 0
(XEN) HVM4.0 restore: not enough data left to read 56 MSR bytes
(XEN) HVM4 restore: failed to load entry 20/0
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-acked-by: Julien Grall <julien.grall@linaro.org>
     for_each_vcpu ( d, v )
     {
+        struct hvm_save_descriptor *d = _p(&h->data[h->cur]);
         struct hvm_msr *ctxt;
         unsigned int i;

             ctxt->msr[i]._rsvd = 0;

         if ( ctxt->count )
+        {
+            /* Rewrite length to indicate how much space we actually used. */
+            d->length = HVM_CPU_MSR_SIZE(ctxt->count);
+
             h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+        }
         else
+            /* or rewind and remove the descriptor from the stream. */
             h->cur -= sizeof(struct hvm_save_descriptor);
     }