changeset 8881:90ebc45e1bd8

Update documentation to describe new PCI front/back drivers.

Update the documentation to include the syntax of "hiding" a PCI
device from domain 0 and for specifying the assignment of a PCI device
to a driver domain. It also includes a brief section exploring some of
the security concerns that driver domains address and mentioning some
of those that remain.

Signed-off-by: Ryan Wilson <hap9@epoch.ncsc.mil>
author kaf24@firebug.cl.cam.ac.uk
date Thu Feb 16 23:47:58 2006 +0100 (2006-02-16)
parents 7c720ccec00a
children 3faa7f3ef8ac
files docs/src/user.tex
line diff
     1.1 --- a/docs/src/user.tex	Thu Feb 16 23:46:51 2006 +0100
     1.2 +++ b/docs/src/user.tex	Thu Feb 16 23:47:58 2006 +0100
     1.3 @@ -1191,6 +1191,65 @@ For more complex network setups (e.g.\ w
     1.4  integrate with existing bridges) these scripts may be replaced with
     1.5  customized variants for your site's preferred configuration.
     1.7 +\section{Driver Domain Configuration}
     1.8 +\label{s:ddconf}
     1.9 +
    1.10 +\subsection{PCI}
    1.11 +\label{ss:pcidd}
    1.12 +
    1.13 +Individual PCI devices can be assigned to a given domain to allow that
    1.14 +domain direct access to the PCI hardware. To use this functionality, ensure
     1.15 +that the PCI Backend is compiled into a privileged domain (e.g.\ domain 0)
    1.16 +and that the domains which will be assigned PCI devices have the PCI Frontend
    1.17 +compiled in. In XenLinux, the PCI Backend is available under the Xen
    1.18 +configuration section while the PCI Frontend is under the
     1.19 +architecture-specific ``Bus Options'' section. You may compile both the backend
    1.20 +and the frontend into the same kernel; they will not affect each other.
    1.21 +
     1.22 +The PCI devices you wish to assign to unprivileged domains must be ``hidden''
    1.23 +from your backend domain (usually domain 0) so that it does not load a driver
    1.24 +for them. Use the \path{pciback.hide} kernel parameter which is specified on
    1.25 +the kernel command-line and is configurable through GRUB (see
     1.26 +Section~\ref{s:configure}). Note that devices are not truly hidden from the
     1.27 +backend domain; rather, the PCI Backend ensures that no other device driver
     1.28 +loads for those devices. PCI devices are identified by hexadecimal
     1.29 +slot/function numbers (on Linux, use \path{lspci} to determine slot/function
     1.30 +numbers of your devices) and can be specified with or without the PCI domain: \\
    1.31 +\centerline{  {\tt ({\em bus}:{\em slot}.{\em func})} example {\tt (02:1d.3)}} \\
    1.32 +\centerline{  {\tt ({\em domain}:{\em bus}:{\em slot}.{\em func})} example {\tt (0000:02:1d.3)}} \\
    1.33 +
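          +For example, a device at bus 02, slot 1d, function 3 appears at the start
          +of an \path{lspci} output line such as the following (the device
          +description shown is illustrative):
          +{\small
          +\begin{verbatim}
          +02:1d.3 Ethernet controller: <vendor and device name>
          +\end{verbatim}
          +}
          +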
    1.34 +An example kernel command-line which hides two PCI devices might be: \\
    1.35 +\centerline{ {\tt root=/dev/sda4 ro console=tty0 pciback.hide=(02:01.f)(0000:04:1d.0) } } \\
    1.36 +
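          +In GRUB, this parameter is appended to the line which loads the Linux
          +kernel within the Xen boot stanza. The stanza below is an illustrative
          +sketch only; the titles, kernel paths, and root device will differ on
          +your system:
          +{\small
          +\begin{verbatim}
          +title Xen / XenLinux
          +    kernel /boot/xen.gz console=vga
          +    module /boot/vmlinuz-2.6-xen root=/dev/sda4 ro pciback.hide=(02:01.f)
          +    module /boot/initrd-2.6-xen.img
          +\end{verbatim}
          +}
          +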
    1.37 +To configure a domU to receive a PCI device:
    1.38 +
    1.39 +\begin{description}
    1.40 +\item[Command-line:]
    1.41 +  Use the {\em pci} command-line flag. For multiple devices, use the option
    1.42 +  multiple times. \\
    1.43 +\centerline{  {\tt xm create netcard-dd pci=01:00.0 pci=02:03.0 }} \\
    1.44 +
    1.45 +\item[Flat Format configuration file:]
    1.46 +  Specify all of your PCI devices in a python list named {\em pci}. \\
    1.47 +\centerline{  {\tt pci=['01:00.0','02:03.0'] }} \\
    1.48 +
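          +  A minimal flat-format configuration file using this option might look
          +  like the following (the kernel path, memory size, and domain name are
          +  illustrative):
          +{\small
          +\begin{verbatim}
          +kernel = "/boot/vmlinuz-2.6-xen"
          +memory = 128
          +name   = "netcard-dd"
          +pci    = ['01:00.0','02:03.0']
          +\end{verbatim}
          +}
          +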
    1.49 +\item[SXP Format configuration file:]
     1.50 +  Use a single PCI device section for all of your devices (specify the numbers
     1.51 +  in hexadecimal, with a leading `0x'). Note that {\em domain} here refers
     1.52 +  to the PCI domain, not a virtual machine within Xen.
    1.53 +{\small
    1.54 +\begin{verbatim}
     1.55 +(device (pci
     1.56 +    (dev (domain 0x0)(bus 0x3)(slot 0x1a)(func 0x1))
     1.57 +    (dev (domain 0x0)(bus 0x1)(slot 0x5)(func 0x0))
     1.58 +))
    1.59 +\end{verbatim}
    1.60 +}
    1.61 +\end{description}
    1.62 +
    1.63 +There are a number of security concerns associated with PCI Driver Domains
    1.64 +that you can read about in Section~\ref{s:ddsecurity}.
    1.65 +
    1.66  %% There are two possible types of privileges: IO privileges and
    1.67  %% administration privileges.
    1.69 @@ -1596,6 +1655,63 @@ set of best practices for Domain-0:
    1.70    of a kernel exploit making all of your domains vulnerable.
    1.71  \end{enumerate}
    1.73 +\section{Driver Domain Security Considerations}
    1.74 +\label{s:ddsecurity}
    1.75 +
    1.76 +Driver domains address a range of security problems that exist regarding
    1.77 +the use of device drivers and hardware. On many operating systems in common
    1.78 +use today, device drivers run within the kernel with the same privileges as
    1.79 +the kernel. Few or no mechanisms exist to protect the integrity of the kernel
     1.80 +from a misbehaving (read ``buggy'') or malicious device driver. Driver
    1.81 +domains exist to aid in isolating a device driver within its own virtual
    1.82 +machine where it cannot affect the stability and integrity of other
     1.83 +domains. If a driver crashes, the driver domain can be restarted rather than
     1.84 +having the entire machine crash (and restart) with it. Drivers written by
    1.85 +unknown or untrusted third-parties can be confined to an isolated space.
    1.86 +Driver domains thus address a number of security and stability issues with
    1.87 +device drivers.
    1.88 +
     1.89 +However, due to limitations in current hardware, a number of security
     1.90 +concerns remain that need to be considered when setting up driver domains.
     1.91 +(The following list is not intended to be exhaustive.)
    1.92 +
    1.93 +\begin{enumerate}
    1.94 +\item \textbf{Without an IOMMU, a hardware device can DMA to memory regions
     1.95 +  outside of its controlling domain.} Architectures which lack an IOMMU
     1.96 +  (e.g.\ most x86-based platforms) to restrict DMA by hardware devices are
     1.97 +  vulnerable. A hardware device capable of arbitrary memory reads and
     1.98 +  writes can access memory outside of its controlling domain.
    1.99 +  A malicious or misbehaving domain could use a hardware device it controls
   1.100 +  to send data overwriting memory in another domain or to read arbitrary
   1.101 +  regions of memory in another domain.
   1.102 +\item \textbf{Shared buses are vulnerable to sniffing.} Devices that share
    1.103 +  a data bus can sniff (and possibly spoof) each other's data. Device A that
   1.104 +  is assigned to Domain A could eavesdrop on data being transmitted by
   1.105 +  Domain B to Device B and then relay that data back to Domain A.
   1.106 +\item \textbf{Devices which share interrupt lines can either prevent the
   1.107 +  reception of that interrupt by the driver domain or can trigger the
    1.108 +  interrupt service routine of that guest needlessly.} A device which shares
   1.109 +  a level-triggered interrupt (e.g. PCI devices) with another device can
   1.110 +  raise an interrupt and never clear it. This effectively blocks other devices
   1.111 +  which share that interrupt line from notifying their controlling driver
    1.112 +  domains that they need to be serviced. A device which shares any type
    1.113 +  of interrupt line can trigger its interrupt continually, which
   1.114 +  forces execution time to be spent (in multiple guests) in the interrupt
   1.115 +  service routine (potentially denying time to other processes within that
   1.116 +  guest). System architectures which allow each device to have its own
   1.117 +  interrupt line (e.g. PCI's Message Signaled Interrupts) are less
   1.118 +  vulnerable to this denial-of-service problem.
   1.119 +\item \textbf{Devices may share the use of I/O memory address space.} Xen can
   1.120 +  only restrict access to a device's physical I/O resources at a certain
   1.121 +  granularity. For interrupt lines and I/O port address space, that
   1.122 +  granularity is very fine (per interrupt line and per I/O port). However,
   1.123 +  Xen can only restrict access to I/O memory address space on a page size
   1.124 +  basis. If more than one device shares use of a page in I/O memory address
   1.125 +  space, the domains to which those devices are assigned will be able to
   1.126 +  access the I/O memory address space of each other's devices.
   1.127 +\end{enumerate}
   1.128 +
   1.129 +
   1.130  \section{Security Scenarios}