The correct A20 masking is done when paging is enabled (protected mode), but it
seems to have been forgotten in real mode. For example, from the AMD64 APM Vol. 2,
section 1.2.4:
> If the sum of the segment base and effective address carries over into bit 20,
> that bit can be optionally truncated to mimic the 20-bit address wrapping of the
> 8086 processor by using the A20M# input signal to mask the A20 address bit.
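
Purely as an illustration (the segment:offset values below are examples, not
taken from the patch), a minimal standalone sketch of the wrap-around the APM
describes:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* Real-mode linear address: (segment << 4) + offset. */
        uint32_t seg = 0xFFFF, off = 0x0010;
        uint32_t linear = (seg << 4) + off;  /* 0x100000: carries into bit 20 */

        /* With A20 masked, bit 20 is cleared, mimicking the 8086:
         * FFFF:0010 wraps back around to physical address 0. */
        uint32_t a20_disabled_mask = ~(UINT32_C(1) << 20);
        printf("unmasked: 0x%06" PRIX32 "\n", linear);                      /* 0x100000 */
        printf("masked:   0x%06" PRIX32 "\n", linear & a20_disabled_mask);  /* 0x000000 */
        return 0;
    }
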
Most BIOSes will enable the A20 line on boot, but I found that after disabling
the A20 line, the correct wrapping wasn't taking place.
`handle_mmu_fault' in target/i386/tcg/sysemu/excp_helper.c seems to be the culprit.
In real mode, it fills the TLB with the raw unmasked address. In protected mode,
however, the `mmu_translate' function does the correct A20 masking.
The fix, then, is simply to apply the A20 mask in the first branch of the if
statement.
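
For context, a minimal sketch of the mask convention the patch relies on
(an illustration only, not QEMU's implementation): the mask is all-ones
while the A20 line is enabled, and clears only bit 20 while the line is
masked.

    #include <stdint.h>

    /* Sketch only -- assumes the all-ones / cleared-bit-20 convention
     * that the `addr & a20_mask' below depends on. */
    static int32_t sketch_a20_mask(int a20_enabled)
    {
        return a20_enabled ? -1 : ~(1 << 20);
    }

So in the common case (A20 enabled) the added AND is a no-op, and with A20
disabled it clears exactly bit 20, producing the 8086-style wrap.
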
Signed-off-by: Stephen Michael Jothen <sjothen@gmail.com>
Message-Id: <Yo5MUMSz80jXtvt9@air-old.local>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CPUX86State *env = &cpu->env;
int error_code = PG_ERROR_OK;
int pg_mode, prot, page_size;
+ int32_t a20_mask;
hwaddr paddr;
hwaddr vaddr;
#endif
if (!(env->cr[0] & CR0_PG_MASK)) {
- paddr = addr;
+ a20_mask = x86_get_a20_mask(env);
+ paddr = addr & a20_mask;
#ifdef TARGET_X86_64
if (!(env->hflags & HF_LMA_MASK)) {
/* Without long mode we can only address 32bits in real mode */