Title   : How to do PCI Passthrough with VT-d
Authors : Allen Kay <allen.m.kay@intel.com>
          Weidong Han <weidong.han@intel.com>
          Yuji Shimada <shimada-yxb@necst.nec.co.jp>
Created : October-24-2007
Updated : September-09-2008

How to turn on VT-d in Xen
--------------------------

1 ) cd xen-unstable.hg
2 ) make install
3 ) make linux-2.6-xen-config CONFIGMODE=menuconfig
4 ) change XEN->"PCI-device backend driver" from "M" to "*".
5 ) make linux-2.6-xen-build
6 ) make linux-2.6-xen-install
7 ) depmod 2.6.18.8-xen
8 ) mkinitrd -v -f --with=ahci --with=aacraid --with=sd_mod --with=scsi_mod initrd-2.6.18-xen.img 2.6.18.8-xen
9 ) cp initrd-2.6.18-xen.img /boot
10) lspci - select the PCI BDF you want to assign to the guest OS
11) "hide" the PCI device from dom0, as in the following sample grub entry:

    title Xen-Fedora Core (2.6.18-xen)
        root (hd0,0)
        kernel /boot/xen.gz com1=115200,8n1 console=com1 iommu=1
        module /boot/vmlinuz-2.6.18.8-xen root=LABEL=/ ro xencons=ttyS console=tty0 console=ttyS0, pciback.hide=(01:00.0)(03:00.0)
        module /boot/initrd-2.6.18-xen.img

12) reboot the system
13) add a "pci" line to /etc/xen/hvm.conf for the assigned devices (see the
    sample guest config after this list):
        pci = [ '01:00.0', '03:00.0' ]
14) start the HVM guest; use "lspci" inside the guest to see the passthrough
    devices and "ifconfig" to check whether IP addresses have been assigned
    to the NIC devices.
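
For reference, the "pci" line sits alongside the usual HVM options in the
guest config file. The sketch below is only illustrative: the guest name,
memory size and disk image are assumptions, not values from this document;
only the "pci" line itself comes from the steps above.

        kernel = "/usr/lib/xen/boot/hvmloader"
        builder = "hvm"
        name = "hvm-passthru"          # hypothetical guest name
        memory = 512
        disk = [ 'file:/var/images/hvm.img,hda,w' ]
        device_model = "/usr/lib/xen/bin/qemu-dm"
        # devices hidden from dom0 in step 11, now assigned to this guest
        pci = [ '01:00.0', '03:00.0' ]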

Enable MSI/MSI-x for assigned devices
-------------------------------------

Add the "msi=1" option to the kernel line of the host grub entry.

MSI-INTx translation for passthrough devices in HVM
---------------------------------------------------

If the assigned device uses a physical IRQ that is shared by more than
one device among multiple domains, there may be significant impact on
device performance. Unfortunately, this is quite a common case when the
IO-APIC (INTx) IRQ is used. MSI can avoid this issue, but is only
available if the guest enables it.

With MSI-INTx translation turned on, Xen enables device MSI if it is
available, regardless of whether the guest uses INTx or MSI. If the
guest uses an INTx IRQ, Xen will inject a translated INTx IRQ into the
guest's virtual IO-APIC whenever an MSI message is received. This reduces
interrupt sharing in the system. If the guest OS enables MSI or MSI-X,
the translation is automatically turned off.

To enable or disable MSI-INTx translation globally, add "pci_msitranslate"
in the config file:
        pci_msitranslate = 1    (default is 1)

To override for a specific device:
        pci = [ '01:00.0,msitranslate=0', '03:00.0' ]
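
Putting the two settings together in one guest config (using the same sample
BDFs as above), the per-device "msitranslate" option overrides the global
"pci_msitranslate" value, so in this sketch only 01:00.0 has translation
disabled while 03:00.0 keeps the default:

        pci_msitranslate = 1
        pci = [ '01:00.0,msitranslate=0', '03:00.0' ]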

Caveat on Conventional PCI Device Passthrough
---------------------------------------------

The VT-d spec requires that all conventional PCI devices behind a
PCIe-to-PCI bridge be assigned to the same domain.

PCIe devices do not have this restriction.
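
To see which devices sit behind a bridge, the PCI tree view in dom0 can
help. The output below is only an illustrative sketch (the bridge at
00:1e.0 and the functions on bus 01 are hypothetical, not from a real
machine):

        [root@dom0 ~]# lspci -t
        -[0000:00]-+-1e.0-[01]--+-00.0
                   |            \-00.1
                   ...

If 01:00.0 and 01:00.1 are conventional PCI devices behind the same
PCIe-to-PCI bridge, they must appear together in one guest's "pci" list:

        pci = [ '01:00.0', '01:00.1' ]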

VT-d Works on OS:
-----------------

1) Host OS: PAE, 64-bit
2) Guest OS: 32-bit, PAE, 64-bit

Combinations Tested:
--------------------

1) 64-bit host: 32/PAE/64 Linux/XP/Win2003/Vista guests
2) PAE host: 32/PAE Linux/XP/Win2003/Vista guests

VT-d device hotplug:
--------------------

Two virtual PCI slots (6 and 7) are reserved in an HVM guest to support
VT-d hotplug. If you have more VT-d devices, only two of them can be
hotplugged at a time. Usage is simple:

1. List the VT-d devices assigned to a domain. Here a VT-d device
   0:2:0.0 is inserted in the HVM domain's PCI slot 6. "lspci" inside
   the guest should show the same device.

        [root@vt-vtd ~]# xm pci-list HVMDomainVtd
        VSlt  domain  bus   slot  func
        0x6   0x0     0x02  0x00  0x0

2. Detach the device from the guest by its physical BDF. The HVM guest
   will then receive a virtual PCI hot-removal event to detach the
   physical device.

        [root@vt-vtd ~]# xm pci-detach HVMDomainVtd 0:2:0.0

3. Attach a PCI device to the guest by its physical BDF and, optionally,
   the desired virtual slot. The following command inserts the physical
   device into the guest's virtual slot 7:

        [root@vt-vtd ~]# xm pci-attach HVMDomainVtd 0:2:0.0 7

   To specify options for the device, use -o or --options=. The following
   command disables MSI-INTx translation for the device:

        [root@vt-vtd ~]# xm pci-attach -o msitranslate=0 HVMDomainVtd 0:2:0.0 7
112 VTd hotplug usage model:
113 ------------------------
115 * For live migration: As you know, VTd device would break the live migration as physical device can't be save/restored like virtual device. With hotplug, live migration is back again. Just hot remove all the VTd devices before live migration and hot add new VTd devices on target machine after live migration.
117 * VTd hotplug for device switch: VTd hotplug can be used to dynamically switch physical device between different HVM guest without shutdown.
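
As a sketch of the live-migration flow (the domain name and BDF are the
sample values used elsewhere in this document; "target-host" is a
hypothetical destination):

        # before migration: hot-remove the assigned device from the guest
        [root@vt-vtd ~]# xm pci-detach HVMDomainVtd 0:2:0.0

        # live-migrate the now fully virtual guest
        [root@vt-vtd ~]# xm migrate --live HVMDomainVtd target-host

        # after migration: on the target machine, hot-add an equivalent device
        [root@target ~]# xm pci-attach HVMDomainVtd 0:2:0.0 6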

VT-d Enabled Systems
--------------------

1) For VT-d enabling work on Xen, we have been using development
   systems with the following Intel motherboards:
    - DQ35MP
    - DQ35JO

2) As far as we know, the following OEM systems also have VT-d enabled.
   Feel free to add others as they become available.

    - Dell: Optiplex 755
      http://www.dell.com/content/products/category.aspx/optix?c=us&cs=555&l=en&s=biz

    - HP Compaq: DC7800
      http://h10010.www1.hp.com/wwpc/us/en/en/WF04a/12454-12454-64287-321860-3328898.html

For more information, please refer to http://wiki.xensource.com/xenwiki/VTdHowTo.

Assigning devices to HVM domains
--------------------------------

Most device types, such as NIC, HBA, EHCI and UHCI, can be assigned to
an HVM domain.

But some devices have design features which make them unsuitable for
assignment to an HVM domain. Examples include:

  * The device has an internal resource, such as private memory, which is
    mapped into the memory address space through a BAR (Base Address
    Register).
  * The driver submits a command with a pointer to a buffer within that
    internal resource. The device decodes the pointer (address) and
    accesses the buffer.

In an HVM domain, the BAR is virtualized, so the host BAR value and the
guest BAR value differ. The address of the internal resource as seen by
the device therefore differs from the address seen by the driver, and
likewise for any buffer within that resource. As a result, the device
cannot access the buffer specified by the driver.

Devices like this currently do not work when assigned to an HVM domain.
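
One way to see the mismatch described above is to compare the BAR reported
in dom0 with the BAR the guest sees. The addresses and guest BDF below are
hypothetical; only the shape of the output matters:

        # in dom0: the physical (host) BAR
        [root@dom0 ~]# lspci -s 01:00.0 -v | grep Memory
        Memory at d0000000 (32-bit, non-prefetchable) [size=16M]

        # inside the HVM guest: the virtualized (guest) BAR
        [root@guest ~]# lspci -s 00:06.0 -v | grep Memory
        Memory at f2000000 (32-bit, non-prefetchable) [size=16M]

The guest driver builds pointers relative to f2000000, while a device that
decodes such pointers against its real BAR expects them relative to
d0000000, which is why devices of this kind fail when assigned.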