Callers of x86_emulate() generally define addr_size based on the code
segment. In vm86 mode, the code segment is set by the hardware to be
16-bit; but it is entirely possible to enable protected mode, load a
32-bit CS, and then disable protected mode again. (This is commonly
called "unreal mode".)
But the instruction decoder only checks for protected mode when
addr_size == 16. So in unreal mode, hardware will raise #UD for VEX
prefixes, but our instruction decoder will decode them anyway,
triggering an ASSERT() further on in _get_fpu(). (With debug=n the
emulator will incorrectly emulate the instruction rather than raising
#UD, but
this is only a bug, not a crash, so it's not a security issue.)
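
The flawed gate looked roughly like the sketch below (simplified, not
the literal decoder code; "protmode" stands for the CR0.PE/EFLAGS.VM
test, and the register-form ModRM check is how C4/C5 is told apart
from LDS/LES outside 64-bit mode):

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified sketch of the old behaviour; not the actual code. */
    static bool vex_decoded_old(unsigned int addr_size, bool protmode,
                                uint8_t modrm)
    {
        if ( addr_size == 16 )
            /* 16-bit code: VEX only in protected mode and with a
             * register-form ModRM (otherwise the byte is LDS/LES). */
            return protmode && (modrm & 0xc0) == 0xc0;

        /*
         * BUG: addr_size == 32 was taken to imply protected mode, so
         * in unreal mode (32-bit CS, CR0.PE clear) a VEX prefix was
         * decoded even though hardware raises #UD.
         */
        return (modrm & 0xc0) == 0xc0;
    }
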
Teach the instruction decoder to check that we're in protected mode,
even if addr_size is 32.
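
The resulting check, again only as a simplified sketch rather than the
literal patch (the emulator already has mode_64bit()/in_protmode()
style helpers; the function below is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static bool vex_decoded_new(bool long_mode, bool protmode,
                                uint8_t modrm)
    {
        /* Outside 64-bit mode, memory-form ModRM means LDS/LES. */
        if ( !long_mode && (modrm & 0xc0) != 0xc0 )
            return false;

        /*
         * VEX-encoded instructions #UD in real and virtual-8086 mode,
         * so insist on protected mode (64-bit mode implies it)
         * regardless of addr_size.
         */
        return long_mode || protmode;
    }
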
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Split real mode and VM86 mode handling, as VM86 mode is strictly 16-bit
at all times. Re-base.
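
For illustration of the distinction relied on here (macro and helper
names are illustrative, not the emulator's):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_CR0_PE    (1u << 0)
    #define X86_EFLAGS_VM (1u << 17)

    /* VM86: PE set, but EFLAGS.VM forces 16-bit segments throughout. */
    static bool is_vm86(uint64_t cr0, uint32_t eflags)
    {
        return (cr0 & X86_CR0_PE) && (eflags & X86_EFLAGS_VM);
    }

    /* Real mode: PE clear; cached CS attributes may still be 32-bit. */
    static bool is_realmode(uint64_t cr0)
    {
        return !(cr0 & X86_CR0_PE);
    }
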
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
master commit: 05118b1596ffe4559549edbb28bd0124a7316123
master date: 2017-01-25 15:09:55 +0100