ia64/xen-unstable

changeset 2890:4b5799ad3285

bitkeeper revision 1.1159.164.2 (418a845cg_s7Z9mx8bsKUubfm7gUSw)

final tweaks - should be done now
author smh22@tempest.cl.cam.ac.uk
date Thu Nov 04 19:34:52 2004 +0000 (2004-11-04)
parents 8002342a47e9
children 59c1b7c5785d
files docs/src/user.tex
line diff
     1.1 --- a/docs/src/user.tex	Thu Nov 04 19:10:39 2004 +0000
     1.2 +++ b/docs/src/user.tex	Thu Nov 04 19:34:52 2004 +0000
     1.3 @@ -218,6 +218,7 @@ running on a P6-class (or newer) CPU.
     1.4  \item [$*$] Development installation of zlib (e.g., zlib-dev).
     1.5  \item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
     1.6  \item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
     1.7 +\item [$\dag$] The \path{iproute2} package. 
     1.8  \item [$\dag$] The Linux bridge-utils\footnote{Available from 
     1.9  {\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl})
    1.10  \item [$\dag$] An installation of Twisted v1.3 or
    1.11 @@ -965,18 +966,36 @@ chapter covers some of the possibilities
    1.12  
    1.13  \section{Exporting Physical Devices as VBDs} 
    1.14  
    1.15 -\framebox{\centerline{\bf Warning: Block device sharing} \\
    1.16 +One of the simplest configurations is to directly export 
    1.17 +individual partitions from domain 0 to other domains. To 
     1.18 +achieve this, use the \path{phy:} specifier in your domain 
     1.19 +configuration file. For example, a line like
    1.20 +\begin{quote}
    1.21 +\verb_disk = ['phy:hda3,sda1,w']_
    1.22 +\end{quote}
    1.23 +specifies that the partition \path{/dev/hda3} in domain 0 
    1.24 +should be exported to the new domain as \path{/dev/sda1}; 
    1.25 +one could equally well export it as \path{/dev/hda3} or 
    1.26 +\path{/dev/sdb5} should one wish. 
    1.27 +
    1.28 +In addition to local disks and partitions, it is possible to export
    1.29 +any device that Linux considers to be ``a disk'' in the same manner.
    1.30 +For example, if you have iSCSI disks or GNBD volumes imported into
    1.31 +domain 0 you can export these to other domains using the \path{phy:}
    1.32 +disk syntax.
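
For instance, assuming an imported iSCSI or GNBD volume appears in
domain 0 as \path{/dev/sdb} (the device name here is purely
illustrative), it could be exported with:
\begin{quote}
\verb_disk = ['phy:sdb,sda2,w']_
\end{quote}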
    1.33 +
    1.34 +
    1.35 +\begin{center}
    1.36 +\framebox{\bf Warning: Block device sharing}
    1.37 +\end{center}
    1.38 +\begin{quote}
    1.39  Block devices should only be shared between domains in a read-only
    1.40  fashion otherwise the Linux kernels will obviously get very confused
    1.41  as the file system structure may change underneath them (having the
     1.42  same partition mounted rw twice is a sure-fire way to cause
    1.43  irreparable damage)!  If you want read-write sharing, export the
    1.44 -directory to other domains via NFS from domain0.}
    1.45 -
    1.46 -In addition to local disks, its possible to export any device
    1.47 -that Linux knows about as a disk in another domain. For example,
    1.48 -if you have iSCSI disks or GNBD volumes imported into domain 0
    1.49 -you can export these to other domains using the "phy:" disk syntax.
    1.50 +directory to other domains via NFS from domain0. 
    1.51 +\end{quote}
    1.52  
    1.53  
    1.54  \section{Using File-backed VBDs}
    1.55 @@ -990,31 +1009,41 @@ takes up half of the size allocated.
    1.56  
    1.57  For example, to create a 2GB sparse file-backed virtual block device
    1.58  (actually only consumes 1KB of disk):
    1.59 -
    1.60 +\begin{quote}
    1.61  \verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
    1.62 -
    1.63 -Make a file system in the disk file: \\
    1.64 +\end{quote}
    1.65 +
    1.66 +Make a file system in the disk file: 
    1.67 +\begin{quote}
    1.68  \verb_# mkfs -t ext3 vm1disk_
    1.69 +\end{quote}
    1.70  
    1.71  (when the tool asks for confirmation, answer `y')
    1.72  
     1.73  Populate the file system, e.g. by copying from the current root:
    1.74 +\begin{quote}
    1.75  \begin{verbatim}
    1.76  # mount -o loop vm1disk /mnt
    1.77  # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
    1.78  # mkdir /mnt/{proc,sys,home,tmp}
    1.79  \end{verbatim}
    1.80 +\end{quote}
    1.81 +
     1.82  Tailor the file system by editing \path{/etc/fstab},
     1.83  \path{/etc/hostname}, etc. (don't forget to edit the files in the
     1.84  mounted file system, rather than in your domain 0 filesystem; e.g. you
     1.85  would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}).  For
     1.86  this example, set the root device to \path{/dev/sda1} in fstab.
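
For instance, the corresponding root entry in \path{/mnt/etc/fstab}
might look like the following (the mount options are illustrative;
adjust them for your distribution):
\begin{quote}
\begin{verbatim}
/dev/sda1    /    ext3    defaults    1    1
\end{verbatim}
\end{quote}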
    1.87  
    1.88 -Now unmount (this is important!):\\
    1.89 +Now unmount (this is important!):
    1.90 +\begin{quote}
    1.91  \verb_# umount /mnt_
    1.92 -
    1.93 -In the configuration file set:\\
    1.94 +\end{quote}
    1.95 +
    1.96 +In the configuration file set:
    1.97 +\begin{quote}
    1.98  \verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
    1.99 +\end{quote}
   1.100  
   1.101  As the virtual machine writes to its `disk', the sparse file will be
   1.102  filled in and consume more space up to the original 2GB.
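
You can observe this sparseness from domain 0 by comparing the file's
apparent size with the space it actually occupies; a quick check
(standard GNU tools assumed):

```shell
# Create the 2GB sparse backing file as described above
dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1

# Apparent size (what the guest will see) vs. space actually allocated:
ls -lh vm1disk    # apparent size: ~2.0G
du -h vm1disk     # actual usage: a few KB until the guest writes data
```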
   1.103 @@ -1022,29 +1051,54 @@ filled in and consume more space up to t
   1.104  
   1.105  \section{Using LVM-backed VBDs}
   1.106  
   1.107 -initialise a partition to LVM volumes:
   1.108 - pvcreate /dev/sda10		
   1.109 -
   1.110 -Create a volume group named 'vg' on the physical partition:
   1.111 - vgcreate vg /dev/sda10
   1.112 -
   1.113 -Create a logical volume of size 4GB named 'myvmdisk1':
   1.114 - lvcreate -L4096M -n myvmdisk1 vg
   1.115 -
   1.116 -You should now see that you have a /dev/vg/myvmdisk1
   1.117 -Make a filesystem, mount it and populate it. e.g.:
   1.118 - mkfs -t ext3 /dev/vg/myvmdisk1
   1.119 - mount /dev/vg/myvmdisk1 /mnt
   1.120 - cp -ax / /mnt
   1.121 - umount /mnt
   1.122 -
   1.123 -Now configure your VM with the following disk configuration
    1.124 +A particularly appealing solution is to use LVM volumes 
    1.125 +as backing for domain filesystems, since this allows dynamic
    1.126 +growing/shrinking of volumes as well as snapshots and other 
    1.127 +features. 
   1.128 +
   1.129 +To initialise a partition to support LVM volumes:
   1.130 +\begin{quote}
   1.131 +\begin{verbatim} 
   1.132 +# pvcreate /dev/sda10		
   1.133 +\end{verbatim} 
   1.134 +\end{quote}
   1.135 +
   1.136 +Create a volume group named `vg' on the physical partition:
   1.137 +\begin{quote}
   1.138 +\begin{verbatim} 
   1.139 +# vgcreate vg /dev/sda10
   1.140 +\end{verbatim} 
   1.141 +\end{quote}
   1.142 +
   1.143 +Create a logical volume of size 4GB named `myvmdisk1':
   1.144 +\begin{quote}
   1.145 +\begin{verbatim} 
   1.146 +# lvcreate -L4096M -n myvmdisk1 vg
   1.147 +\end{verbatim} 
   1.148 +\end{quote}
   1.149 +
    1.150 +You should now see that you have a \path{/dev/vg/myvmdisk1} device.
    1.151 +Make a filesystem, mount it and populate it, e.g.:
   1.152 +\begin{quote}
   1.153 +\begin{verbatim} 
   1.154 +# mkfs -t ext3 /dev/vg/myvmdisk1
   1.155 +# mount /dev/vg/myvmdisk1 /mnt
   1.156 +# cp -ax / /mnt
   1.157 +# umount /mnt
   1.158 +\end{verbatim} 
   1.159 +\end{quote}
   1.160 +
   1.161 +Now configure your VM with the following disk configuration:
   1.162 +\begin{quote}
   1.163 +\begin{verbatim} 
   1.164   disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
   1.165 -
   1.166 -LVM enables you to grow the size logical volumes, but you'll need
   1.167 +\end{verbatim} 
   1.168 +\end{quote}
   1.169 +
   1.170 +LVM enables you to grow the size of logical volumes, but you'll need
   1.171  to resize the corresponding file system to make use of the new
   1.172 -space. Some file systems (e.g. ext3) now support on-line resize.
   1.173 -See the LVM manuals for more details.
   1.174 +space. Some file systems (e.g. ext3) now support on-line resize.  See
   1.175 +the LVM manuals for more details.
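
As a sketch (the exact resize tool depends on your filesystem and the
version of \path{e2fsprogs} installed), growing the above volume by 1GB
and then resizing an ext3 filesystem on it might look like:
\begin{quote}
\begin{verbatim}
# lvextend -L+1G /dev/vg/myvmdisk1
# resize2fs /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}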
   1.176  
   1.177  You can also use LVM for creating copy-on-write clones of LVM
   1.178  volumes (known as writable persistent snapshots in LVM
   1.179 @@ -1057,59 +1111,71 @@ will improve in future.
    1.180  To create two copy-on-write clones of the above file system you
    1.181  would use the following commands:
   1.182  
   1.183 - lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
   1.184 - lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
   1.185 +\begin{quote}
   1.186 +\begin{verbatim} 
   1.187 +# lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
   1.188 +# lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
   1.189 +\end{verbatim} 
   1.190 +\end{quote}
   1.191  
   1.192  Each of these can grow to have 1GB of differences from the master
   1.193  volume. You can grow the amount of space for storing the
   1.194 -differences using the lvextend command e.g.:
   1.195 - lvextend +100M /dev/vg/myclonedisk1
   1.196 -
   1.197 -Don't let the differences volume ever fill up otherwise LVM gets
   1.198 +differences using the lvextend command, e.g.:
   1.199 +\begin{quote}
   1.200 +\begin{verbatim} 
    1.201 +# lvextend -L+100M /dev/vg/myclonedisk1
   1.202 +\end{verbatim} 
   1.203 +\end{quote}
   1.204 +
    1.205 +Don't let the `differences volume' ever fill up, otherwise LVM gets
   1.206  rather confused. It may be possible to automate the growing
   1.207 -process by using 'dmsetup wait' to spot the volume getting full
   1.208 -and then issue an lvextend.
   1.209 -
   1.210 -In principle, it is possible to continue writing to the volume
   1.211 -that has been cloned (the changes will not be visible to the
   1.212 -clones), but we wouldn't recommend this: have the cloned volume
   1.213 -as a 'pristine' file system install that isn't mounted directly
   1.214 -by any of the virtual machines.
   1.215 +process by using \path{dmsetup wait} to spot the volume getting full
   1.216 +and then issue an \path{lvextend}.
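
One possible (untested) shape for such automation, assuming the
snapshot appears as the device-mapper node \path{vg-myclonedisk1}, is a
simple loop:
\begin{quote}
\begin{verbatim}
# Illustrative sketch only -- check dmsetup event semantics first:
while true; do
    dmsetup wait vg-myclonedisk1
    lvextend -L+100M /dev/vg/myclonedisk1
done
\end{verbatim}
\end{quote}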
   1.217 +
   1.218 +%% In principle, it is possible to continue writing to the volume
   1.219 +%% that has been cloned (the changes will not be visible to the
   1.220 +%% clones), but we wouldn't recommend this: have the cloned volume
   1.221 +%% as a 'pristine' file system install that isn't mounted directly
   1.222 +%% by any of the virtual machines.
   1.223  
   1.224  
   1.225  \section{Using NFS Root}
   1.226  
   1.227 -The procedure for using NFS root in a virtual machine is basically the
   1.228 -same as you would follow for a real machine.  NB. the Linux NFS root
   1.229 -implementation is known to have stability problems under high load
   1.230 -(this is not a Xen-specific problem), so this configuration may not be
   1.231 -appropriate for critical servers.
   1.232 -
   1.233 -First, populate a root filesystem in a directory on the server machine
   1.234 ---- this can be on another physical machine, or perhaps just another
   1.235 -virtual machine on the same node.
   1.236 -
   1.237 -Now, configure the NFS server to export this filesystem over the
   1.238 -network by adding a line to /etc/exports, for instance:
   1.239 -
   1.240 +First, populate a root filesystem in a directory on the server
    1.241 +machine. This can be a distinct physical machine, or simply 
    1.242 +another virtual machine running on the same node.
   1.243 +
   1.244 +Now configure the NFS server to export this filesystem over the
   1.245 +network by adding a line to \path{/etc/exports}, for instance:
   1.246 +
   1.247 +\begin{quote}
   1.248  \begin{verbatim}
    1.249  /export/vm1root      w.x.y.z/m(rw,sync,no_root_squash)
   1.250  \end{verbatim}
   1.251 +\end{quote}
   1.252  
   1.253  Finally, configure the domain to use NFS root.  In addition to the
   1.254  normal variables, you should make sure to set the following values in
   1.255  the domain's configuration file:
   1.256  
   1.257 +\begin{quote}
   1.258 +\begin{small}
   1.259  \begin{verbatim}
   1.260  root       = '/dev/nfs'
   1.261 -nfs_server = 'a.b.c.d'       # Substitute the IP for the server here
   1.262 -nfs_root   = '/path/to/root' # Path to root FS on the server
   1.263 +nfs_server = 'a.b.c.d'       # substitute IP address of server 
   1.264 +nfs_root   = '/path/to/root' # path to root FS on the server
   1.265  \end{verbatim}
   1.266 -
   1.267 -The domain will need network access at boot-time, so either statically
   1.268 -configure an IP address (Using the config variables {\tt ip}, {\tt
   1.269 -netmask}, {\tt gateway}, {\tt hostname}) or enable DHCP ({\tt
   1.270 -dhcp='dhcp'}).
   1.271 +\end{small}
   1.272 +\end{quote}
   1.273 +
   1.274 +The domain will need network access at boot time, so either statically
    1.275 +configure an IP address (using the config variables \path{ip}, 
    1.276 +\path{netmask}, \path{gateway} and \path{hostname}) or enable DHCP
    1.277 +(\path{dhcp='dhcp'}).
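
For example, a static configuration might contain lines like the
following (all addresses here are illustrative):
\begin{quote}
\begin{verbatim}
ip       = '192.0.2.10'
netmask  = '255.255.255.0'
gateway  = '192.0.2.1'
hostname = 'vm1'
\end{verbatim}
\end{quote}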
   1.278 +
   1.279 +Note that the Linux NFS root implementation is known to have stability
   1.280 +problems under high load (this is not a Xen-specific problem), so this
   1.281 +configuration may not be appropriate for critical servers.
   1.282  
   1.283  
   1.284  \part{User Reference Documentation}
   1.285 @@ -1254,7 +1320,9 @@ vif = [ 'mac=aa:00:00:00:00:11, bridge=x
   1.286  \item[disk] List of block devices to export to the domain,  e.g. \\
   1.287    \verb_disk = [ 'phy:hda1,sda1,r' ]_ \\
   1.288    exports physical device \path{/dev/hda1} to the domain 
   1.289 -  as \path{/dev/sda1} with read-only access. 
    1.290 +  as \path{/dev/sda1} with read-only access. Exporting a currently
    1.291 +  mounted disk read-write is dangerous; if you are \emph{certain}
    1.292 +  you wish to do this, you can specify \path{w!} as the mode. 
   1.293  \item[dhcp] Set to {\tt 'dhcp'} if you want to use DHCP to configure
   1.294    networking. 
   1.295  \item[netmask] Manually configured IP netmask.
   1.296 @@ -1341,12 +1409,12 @@ according to the type of virtual device 
   1.297  %% existing {\em virtual} devices (of the appropriate type) to that
   1.298  %% backend.
   1.299  
   1.300 -Note that a block backend cannot import virtual block devices from
   1.301 -other domains, and a network backend cannot import virtual network
   1.302 -devices from other domains.  Thus (particularly in the case of block
   1.303 -backends, which cannot import a virtual block device as their root
   1.304 -filesystem), you may need to boot a backend domain from a ramdisk or a
   1.305 -network device.
   1.306 +Note that a block backend cannot currently import virtual block
   1.307 +devices from other domains, and a network backend cannot import
   1.308 +virtual network devices from other domains.  Thus (particularly in the
   1.309 +case of block backends, which cannot import a virtual block device as
   1.310 +their root filesystem), you may need to boot a backend domain from a
   1.311 +ramdisk or a network device.
   1.312  
   1.313  Access to PCI devices may be configured on a per-device basis.  Xen
   1.314  will assign the minimal set of hardware privileges to a domain that