handle_gdt_ldt_mapping_fault() is intended to deal with indirect
accesses (i.e. those caused by descriptor loads) to the GDT/LDT
mapping area only. While on 32-bit, segment limits indeed prevent the
function from being entered for direct accesses (i.e. a #GP fault will
be raised even before the address translation gets done), on 64-bit
even user mode accesses would lead to control reaching the BUG_ON()
at the beginning of that function.
Fortunately the fix is simple: since the guest kernel runs in ring 3,
any guest direct access will have the "user mode" bit set, whereas
descriptor loads always perform the translations to access the actual
descriptors as kernel mode ones.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Further, relax the BUG_ON() in handle_gdt_ldt_mapping_fault() to a
check-and-bail. This avoids any problems in the future if we don't
execute x86_64 guest kernels in ring 3 (e.g. because we use a
lightweight HVM container).
Signed-off-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 22448:5cd9612db2bb
xen-unstable date: Mon Nov 29 14:34:32 2010 +0000
unsigned int is_ldt_area = (offset >> (GDT_LDT_VCPU_VA_SHIFT-1)) & 1;
unsigned int vcpu_area = (offset >> GDT_LDT_VCPU_VA_SHIFT);
- /* Should never fault in another vcpu's area. */
- BUG_ON(vcpu_area != curr->vcpu_id);
+ /*
+ * If the fault is in another vcpu's area, it cannot be due to
+ * a GDT/LDT descriptor load. Thus we can reasonably exit immediately, and
+ * indeed we have to since map_ldt_shadow_page() works correctly only on
+ * accesses to a vcpu's own area.
+ */
+ if ( vcpu_area != curr->vcpu_id )
+ return 0;
/* Byte offset within the gdt/ldt sub-area. */
offset &= (1UL << (GDT_LDT_VCPU_VA_SHIFT-1)) - 1UL;
if ( unlikely(IN_HYPERVISOR_RANGE(addr)) )
{
- if ( !(regs->error_code & PFEC_reserved_bit) &&
+ if ( !(regs->error_code & (PFEC_user_mode | PFEC_reserved_bit)) &&
(addr >= GDT_LDT_VIRT_START) && (addr < GDT_LDT_VIRT_END) )
return handle_gdt_ldt_mapping_fault(
addr - GDT_LDT_VIRT_START, regs);