
bitkeeper revision 1.790 (4051df63N_qhNLzJhUL0q78WetTFRg)

Prevent transmitting link-local ARP packets on the wire.
author kaf24@scramble.cl.cam.ac.uk
date Fri Mar 12 16:03:47 2004 +0000 (2004-03-12)
#############################
__  __            _   ____
\ \/ /___ _ __   / | |___ \
 \  // _ \ '_ \  | |   __) |
 /  \  __/ | | | | |_ / __/
/_/\_\___|_| |_| |_(_)_____|
#############################

XenDemoCD 1.2
University of Cambridge Computer Laboratory
24 Jan 2004

http://www.cl.cam.ac.uk/netos/xen

Welcome to the Xen Demo CD!
Executive Summary
=================

This CD is a standalone demo of the Xen Virtual Machine Monitor (VMM)
and Linux-2.4 OS port (XenoLinux). It runs entirely off the CD,
without requiring hard disk installation. This is achieved using a RAM
disk to store mutable file system data while using the CD for
everything else. The CD can also be used for installing Xen/XenoLinux
to disk, and includes a source code snapshot along with all of the
tools required to build it.
Booting the CD
==============

The Xen VMM is currently fairly hardware-specific, but porting new
device drivers is relatively straightforward thanks to Xen's Linux
driver compatibility layer. The current snapshot supports the
following hardware:

 CPU:  Pentium Pro/II/III/IV/Xeon, Athlon (i.e. P6 or newer); SMP supported
 IDE:  Intel PIIX chipset; others will be PIO only (slow)
 SCSI: Adaptec / Dell PERC RAID (aacraid), Fusion MPT, megaraid, Adaptec aic7xxx
 Net:  Recommended: Intel e1000, Broadcom BCM57xx (tg3), 3c905 (3c59x)
       Working, but require extra copies: pcnet32, Intel e100, tulip

Because of the demo CD's use of RAM disks, make sure you have plenty
of RAM (256MB+).
To try out the demo, boot from CD (you may need to change your BIOS
configuration to do this), then select one of the four boot options
from the Grub menu:

 Xen / linux-2.4.24
 Xen / linux-2.4.24 using cmdline IP configuration
 Xen / linux-2.4.24 in "safe mode"
 linux-2.4.22

The last option is a plain linux kernel that runs on the bare machine,
and is included simply to help diagnose driver compatibility
problems. The "safe mode" boot option might be useful if you're having
problems getting Xen to work with your hardware, as it disables various
features such as SMP, and enables some debugging.
If you are going for a command line IP config, hit "e" at the Grub
menu, then edit the "ip=" parameters to reflect your setup,
e.g. "ip=<ipaddr>::<gateway>:<netmask>::eth0:off". It shouldn't be
necessary to set either the nfs server or hostname
parameters. Alternatively, once XenoLinux has booted you can login and
set up networking with 'dhclient' or 'ifconfig' and 'route' in the
normal way.
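As a hedged illustration (all addresses below are invented), the edited Grub entry would carry the static IP configuration on the XenoLinux 'module' line:

```
# Example only -- substitute your own addresses in the
# ip=<ipaddr>::<gateway>:<netmask>::eth0:off parameter:
kernel /boot/image.gz
module /boot/xenolinux.gz ip=128.232.1.50::128.232.1.1:255.255.255.0::eth0:off root=/dev/ram0 rw init=/linuxrc
```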
To make things easier for yourself, it's worth trying to arrange for an
IP address which is the first in a sequential range of free IP
addresses. It's useful to give each VM instance its own public IP
address (though it is possible to do NAT or use private addresses),
and the configuration files on the CD allocate IP addresses
sequentially for subsequent domains unless told otherwise.
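The sequential-allocation behaviour can be sketched in shell (an illustration only, not a script from the CD; the base address is an invented example):

```shell
# Sketch: by default the CD's config files give domain N the address
# N above domain0's. The base address here is invented.
base_ip="128.232.1.50"
domid=3
prefix=${base_ip%.*}     # network part, e.g. 128.232.1
last=${base_ip##*.}      # host part, e.g. 50
echo "domain ${domid} gets ${prefix}.$((last + domid))"
```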
After selecting the kernel to boot, stand back and watch Xen boot,
closely followed by "domain 0" running the XenoLinux kernel. The boot
messages can also be sent to the serial line by specifying the baud
rate on the Xen cmdline (e.g., 'com1=9600,8n1'); this can be very
useful for debugging should anything important scroll off the
screen. Xen's startup messages will look quite familiar as much of the
hardware initialisation (SMP boot, APIC setup) and device drivers are
derived from Linux.

If everything is well, you should see the linux rc scripts start a
bunch of standard services, including sshd. Login on the console or
via ssh:

  username:  user     root
  password:  xendemo  xendemo

Once logged in, it should look just like any regular linux box. All
the usual tools and commands should work as per usual. However,
because of the poor random access performance of CD drives, the
machine will feel very sluggish, and you may run out of memory if you
make significant modifications to the ramfs filesystem -- for the full
experience, install a Xen and XenoLinux image on your hard drive :-)

You can configure networking, either with 'dhclient' or manually via
'ifconfig' and 'route', remembering to edit /etc/resolv.conf if you
want DNS to work.
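For example, a manual configuration might look like the following (run as root; every address here is an invented placeholder):

```
# Static setup with ifconfig/route; substitute your own addresses.
ifconfig eth0 128.232.1.50 netmask 255.255.255.0 up
route add default gw 128.232.1.1
echo "nameserver 128.232.1.2" >> /etc/resolv.conf   # so DNS works
```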
You can start an X server with 'startx'. It defaults to a conservative
1024x768, but you can edit the script for higher resolutions. The CD
contains a load of standard software. You should be able to start
Apache, PostgreSQL, Mozilla etc in the normal way, but because
everything is running off CD the performance will be very sluggish and
you may run out of memory for the 'tmpfs' file system. You may wish
to go ahead and install Xen/XenoLinux on your hard drive, either
dropping Xen and the XenoLinux kernel down onto a pre-existing Linux
distribution, or using the file systems from the CD (which are based
on RH9). See the installation instructions later in this document.

If your video card requires 'agpgart' then it unfortunately won't yet
work with Xen, and you'll only be able to configure a VGA X
server. We're working on a fix for this for the next release.

If you want to browse the Xen / XenoLinux source, it's all located
under /usr/local/src/xeno-1.2, complete with BitKeeper
repository. We've also included source code and configuration
information for the various benchmarks we used in the SOSP paper.
Starting other domains
======================

Xen's privileged control interfaces can be accessed using a C library
(libxc.so) or an easier-to-use Python wrapper module (Xc). Example
script templates are provided in tools/examples/.

Anyway, the first thing to do is to set up a window in which you will
receive console output from other domains. Console output will arrive
as UDP packets destined for the link-local address that the DemoCD's
startup scripts automatically bring up as an alias called eth0:xen
(see /etc/sysconfig/network-scripts/ifcfg-eth0).

If you're not intending to configure the new domain with an IP address
on your LAN, then you'll probably want to use NAT. The
'xen_nat_enable' script installs a few useful iptables rules into
domain0 to enable NAT. [NB: the intention is that in future Xen will
do NAT itself (actually RSIP), but this is part of a larger work
package that isn't ready to release.]

Next, run the xen UDP console displayer:

  xen_read_console

[This is currently output only, but in future we will have a
bidirectional domain console.]
Xen has a management interface that can be manipulated from domain0 to
create new domains, control their CPU, network and memory resource
allocations, allocate IP addresses, grant access to disk partitions,
and suspend/resume domains to files, etc. The management interface is
implemented as a set of library functions (implemented in C) for which
there are Python language bindings.

We have developed a simple set of example python tools for
manipulating the interface, with the intention that more sophisticated
high-level management tools will be developed in due course. Within
the source repository the tools live in tools/examples/ but are
installed in /usr/local/bin/ on the CD.

Starting a new domain is achieved using xc_dom_create.py, which
allocates resources to a new domain, populates it with a kernel image
(and optionally a ramdisk) and then starts it.
It parses a configuration file written in the Python language, the
default location of which is "/etc/xc/defaults", but this may be
overridden with the "-f" option. For the Demo CD, the defaults file
will cause domains to be created with ram-based root file systems, and
mount their /usr partition from the CD, just like domain0. (If you are
writing your own config file, the "example" script may be a better
starting point.)

Variables can be initialised and passed into configuration files. Some
of these may be compulsory, others optional.

The 'defaults' file on the CD requires the 'ip' variable to be set to
tell Xen what IP address(es) should be routed to this domain. Xen
will route packets to the domain if they bear one of these addresses
as a destination address, and will also ensure that packets sent from
the domain contain one of the addresses as a source address (to
prevent spoofing). If multiple IP addresses are to be assigned to a
domain, they can be listed in a comma-separated list (with no
whitespace).
The 'mem' variable can be used to change the default memory allocation
of 64MB. For example, to start a domain with two IP addresses and
72MB:

  xc_dom_create.py -Dip=<ipaddr1>,<ipaddr2> -Dmem=72

[Multiple variables may also be set with a single '-D' flag by
separating them with ':'. Also, it's possible to use DNS hostnames
rather than IP addresses.]
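For instance, a single '-D' flag might set both variables at once (a hypothetical invocation; the address is invented):

```
# Colon-separated variables in one -D flag; -n makes this a dry run.
xc_dom_create.py -Dmem=72:ip=128.232.1.50 -n
```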
When invoked with the '-n' option, xc_dom_create.py will do a dry run
and just print out what resources and configuration the domain will
have, e.g.:

  [root@xendemo]# xc_dom_create.py -D ip=commando-1.xeno, -Dmem=100
  Parsing config file 'defaults'

  VM image           : "/boot/xenolinux.gz"
  VM ramdisk         : "/boot/initrd.gz"
  VM memory (MB)     : "100"
  VM IP address(es)  : ""
  VM block device(s) : "phy:cdrom,hdd,r"
  VM cmdline         : "ip= root=/dev/ram0 rw init=/linuxrc 4 LOCALIP="

If you invoke xc_dom_create.py without the '-n' option, you should see
the domain booting on your xen_read_console window.
The 169.254.x.x network is special in that it is the 'link local'
subnet, and is isolated from the external network and hence can only
be used for communication between virtual machines. By convention, we
usually give each domain a link local address. The startup scripts on
the CD have been modified to accept a LINKLOCAL= parameter on the
kernel command line and initialise an IP alias accordingly (see
/etc/sysconfig/network-scripts/ifcfg-eth0).

Linux only allows one IP address to be specified on the kernel command
line, so if you specify multiple IP addresses you'll need to configure
the new Linux VM with the other addresses manually (using ifconfig)
having logged in.
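A hedged example of adding a second address by hand inside the new VM (placeholder address):

```
# The first address came from the kernel command line; add the second
# as an interface alias once logged in.
ifconfig eth0:1 128.232.1.51 netmask 255.255.255.0 up
```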
If you inspect the 'defaults' config script you'll see that the new
domain was started with a '4' on the kernel command line to tell
'init' to go to runlevel 4 rather than the default of 3 used by
domain0. This is done simply to suppress a bunch of harmless error
messages that would otherwise occur when the new (unprivileged) domain
tried to access physical hardware resources while setting the
hwclock, system font, running gpm etc.
  [root@commando-0 examples]# ./xc_dom_create.py -?
  Usage: ./xc_dom_create.py <args>

  This tool is used to create and start new domains. It reads defaults
  from a file written in Python, having allowed variables to be set and
  passed into the file. Further command line arguments allow the
  defaults to be overridden. The defaults for each parameter are listed
  in [] brackets. Arguments are as follows:

  Arguments to control the parsing of the defaults file:
   -f config_file   -- Use the specified defaults script.
                       Default: ['/etc/xc/defaults']
   -D foo=bar       -- Set variable foo=bar before parsing config
                       E.g. '-D vmid=3:ip='
   -h               -- Print extended help message, including all arguments
   -n               -- Dry run only, don't actually create domain

  The config file /etc/xc/defaults requires the following vars to be defined:
   ip       -- List of IP addr(s) for Xen to route to domain
               e.g. -Dip=','
  The following variables may be optionally defined:
   mem      -- Adjust initial memory allocation (default 64MB)
   netmask  -- Override netmask for kernel ip= command line
   gateway  -- Override gateway for kernel ip= command line
After it's booted, you should be able to ssh into your new domain from
domain0 using the link local 169.254.x.x address you assigned. If you
assigned a further IP address, you should be able to ssh in using that
address too. If you ran the xen_nat_enable script, a bunch of port
redirects have been installed to enable you to ssh in to other domains
remotely even if you didn't assign an externally routeable address.
To access the new virtual machine remotely, use:

  ssh -p2201 root@IP.address.Of.Domain0   # use 2202 for domain 2 etc.
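Judging from the "2201 for domain 1, 2202 for domain 2" pattern above, the redirected port appears to be 2200 plus the domain number (an assumption inferred from that pattern, not something the scripts state explicitly):

```shell
# Assumed convention: sshd of domain N is redirected to port 2200+N
# on domain0's externally routeable address.
domid=2
port=$((2200 + domid))
echo "ssh -p${port} root@domain0"
```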
You can manipulate running domains using the xc_dom_control.py tool.
Invoking it without arguments prints some usage information.

To see what domains are running, run 'xc_dom_control list'. Using the
tool you can change scheduling parameters, pause a domain, send it a
shutdown request, or blow it away with the 'destroy' command. You can
even suspend it to disk (but you probably won't have enough memory to
do the latter if you're running off the demo CD).
  Usage: xc_dom_control [command] <params>

  stop      [dom]        -- pause a domain
  start     [dom]        -- un-pause a domain
  shutdown  [dom]        -- request a domain to shutdown
  destroy   [dom]        -- immediately terminate a domain
  pincpu    [dom] [cpu]  -- pin a domain to the specified CPU
  save      [dom] [file] -- suspend a domain's memory to file
  restore   [file]       -- resume a domain from a file
  list                   -- print info about all domains
  listvbds               -- print info about all virtual block devs
  cpu_bvtset [dom] [mcuadv] [warp] [warpl] [warpu]
                         -- set scheduling parameters for domain
  cpu_bvtslice [slice]   -- set default scheduler slice
  vif_stats    [dom] [vif]      -- get stats for a given network vif
  vif_addip    [dom] [vif] [ip] -- add an IP address to a given vif
  vif_setsched [dom] [vif] [bytes] [usecs] -- rate limit vif bandwidth
  vif_getsched [dom] [vif]      -- print vif's scheduling parameters
  vbd_add    [dom] [uname] [dev] [mode] -- make disk/partition uname available
                                to domain as dev, e.g. 'vbd_add 2 phy:sda3 hda1 rw'
  vbd_remove [dom] [dev] -- remove disk or partition attached as 'dev'
Troubleshooting Problems
========================

If you have problems booting Xen, there are a number of boot parameters
that may be able to help diagnose problems:

 ignorebiostables  Disable parsing of BIOS-supplied tables. This may
                   help with some chipsets that aren't fully supported
                   by Xen. If you specify this option then ACPI tables
                   are also ignored, and SMP support is disabled.

 noreboot          Don't reboot the machine automatically on errors.
                   This is useful to catch debug output if you aren't
                   catching console messages via the serial line.

 nosmp             Disable SMP support.
                   This option is implied by 'ignorebiostables'.

 noacpi            Disable ACPI tables, which confuse Xen on some chipsets.
                   This option is implied by 'ignorebiostables'.

 watchdog          Enable NMI watchdog which can report certain failures.

 noht              Disable Hyperthreading.

 ifname=ethXX      Select which Ethernet interface to use.

 ifname=dummy      Don't use any network interface.

 com1=<baud>,DPS[,<io_base>,<irq>]
 com2=<baud>,DPS[,<io_base>,<irq>]
                   Xen supports up to two 16550-compatible serial ports.
                   For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
                   9600-baud port, 8 data bits, no parity, 1 stop bit,
                   I/O port base 0x408, IRQ 5.
                   If the I/O base and IRQ are standard (com1: 0x3f8,4;
                   com2: 0x2f8,3) then they need not be specified.

 console=<specifier list>
                   Specify the destination for Xen console I/O.
                   This is a comma-separated list of, for example:
                    vga:   use VGA console and allow keyboard input
                    com1:  use serial port com1
                    com2H: use serial port com2. Transmitted chars will
                           have the MSB set. Received chars must have
                           MSB set.
                    com2L: use serial port com2. Transmitted chars will
                           have the MSB cleared. Received chars must
                           have MSB cleared.
                   The latter two examples allow a single port to be
                   shared by two subsystems (e.g. console and
                   debugger). Sharing is controlled by the MSB of each
                   transmitted/received character.
                   [NB. Default for this option is 'com1,tty']

 dom0_mem=xxx      Set the initial amount of memory for domain0.

 pdb=xxx           Enable the pervasive debugger. See docs/pdb.txt.
                   xxx defines how the gdb stub will communicate:
                    com1   use com1
                    com1H  use com1 (with high bit set)
                    com2   use com2
                    com2H  use com2 (with high bit set)
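Several of these options can be combined on the Xen 'kernel' line in Grub; a hedged example (the values are illustrative, not recommendations):

```
# Serial console at 115200 baud, NMI watchdog on, hyperthreading off,
# and no automatic reboot on errors:
kernel /boot/image.gz dom0_mem=131072 com1=115200,8n1 console=com1,vga watchdog noht noreboot
```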
It's probably a good idea to join the Xen developers' mailing list on
Sourceforge: http://lists.sourceforge.net/lists/listinfo/xen-devel
About The Xen Demo CD
=====================

The purpose of the Demo CD is to distribute a snapshot of Xen's
source, and simultaneously provide a convenient means for enabling
people to get experience playing with Xen without needing to install
it on their hard drive. If you decide to install Xen/XenoLinux you can
do so simply by following the installation instructions below -- which
essentially involves copying the contents of the CD on to a suitably
formatted disk partition, and then installing or updating the Grub
bootloader.

This is a bootable CD that loads Xen, and then a Linux 2.4.22 OS image
ported to run on Xen. The CD contains a copy of a file system based on
the RedHat 9 distribution that is able to run directly off the CD
("live ISO"), using a "tmpfs" RAM-based file system for root (/etc,
/var etc). Changes you make to the tmpfs will obviously not be
persistent across reboots!

Because of the use of a RAM-based file system for root, you'll need
plenty of memory to run this CD -- something like 96MB per VM. This is
not a restriction of Xen: once you've installed Xen, XenoLinux and
the file system images on your hard drive you'll find you can boot VMs
in just a few MBs.

The CD contains a snapshot of the Xen and XenoLinux code base that we
believe to be pretty stable, but lacks some of the features that are
currently still work in progress, e.g. OS suspend/resume to disk, and
various memory management enhancements to provide fast inter-OS
communication and sharing of memory pages between OSs. We'll release
newer snapshots as required, making use of a BitKeeper repository
hosted at http://xen.bkbits.net (follow instructions from the project
home page). We're obviously grateful to receive any bug fixes or
other code you can contribute. We suggest you join the
xen-devel@lists.sourceforge.net mailing list.
Installing from the CD
======================

If you're installing Xen/XenoLinux onto an existing linux file system
distribution, just copy the Xen VMM (/boot/image.gz) and XenoLinux
kernel (/boot/xenolinux.gz), then modify the Grub config
(/boot/grub/menu.lst or /boot/grub/grub.conf) on the target system.
It should work on pretty much any distribution.

Xen is a "multiboot" standard boot image. Despite being a 'standard',
few boot loaders actually support it. The only two we know of are
Grub, and our modified version of linux kexec (for booting off a
XenoBoot CD -- PlanetLab have adopted the same boot CD approach).

If you need to install grub on your system, you can do so either by
building the Grub source tree
/usr/local/src/grub-0.93-iso9660-splashimage or by copying over all
the files in /boot/grub and then running /sbin/grub and following the
usual grub documentation. You'll then need to edit the Grub
config file.

A typical Grub menu option might look like:

  title Xen / XenoLinux 2.4.22
  kernel /boot/image.gz dom0_mem=131072 com1=115200,8n1 noht
  module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0

The first line specifies which Xen image to use, and what command line
arguments to pass to Xen. In this case we set the maximum amount of
memory to allocate to domain0, and enable serial I/O at 115200 baud.
We could also disable SMP support (nosmp) or disable hyper-threading
support (noht). If you have multiple network interfaces you can use
ifname=ethXX to select which one to use. If your network card is
unsupported, use ifname=dummy.

The second line specifies which xenolinux image to use, and the
standard linux command line arguments to pass to the kernel. In this
case, we're configuring the root partition and stating that it should
initially be mounted read-only (normal practice).

If we were booting with an initial ram disk (initrd), then this would
require a second "module" line.
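A hedged sketch of such an entry, assuming the initrd image lives at /boot/initrd.gz as on this CD:

```
title Xen / XenoLinux 2.4.22 (ramdisk root)
kernel /boot/image.gz dom0_mem=131072
module /boot/xenolinux.gz root=/dev/ram0 rw init=/linuxrc
module /boot/initrd.gz
```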
Installing the Xen tools and source
===================================

The tools and source live in the /usr/local/src/xen-1.2 directory on
the CD (and may also be downloaded from the project downloads
page). You'll need to copy them to some mutable storage before using
them.

If you have the BitKeeper tools installed you can check that the
repository is up to date by cd'ing into the xeno-1.2.bk directory and
typing 'bk pull' (assuming you have an Internet connection).

You can rebuild Xen and the tools by typing 'make'. You can install
them to the standard directories with 'make install', or into the
../install subtree with 'make dist'.

  /usr/local/bin/xc_*                            the domain control tools
  /lib/libxc.so                                  the xc library
  /usr/lib/python2.2/site-packages/XenoUtil.py   python util library
  /usr/lib/python2.2/site-packages/Xc.c          python xc bindings

If you're using the virtual disk control tools (xc_vd_tool) you'll
need the SQLite library and the pysqlite python binding. There's a
tarball containing the necessary binaries on the project downloads
page.
Modifying xc_mycreatelinuxdom1.py
=================================

xc_mycreatelinuxdom1.py can be used to set the new kernel's command
line, and hence determine what it uses as a root file system,
etc. Although the default is to boot in the same manner that domain0
did (using the RAM-based file system for root and the CD for /usr)
it's possible to configure any of the following possibilities, for
example:

 * initrd=/boot/initrd init=/linuxrc
   boot using an initial ram disk, executing /linuxrc (as per this CD)

 * root=/dev/hda3 ro
   boot using a standard hard disk partition as root
   !!! remember to grant access in createlinuxdom.py.

 * root=/dev/xvda1 ro
   boot using a pre-configured 'virtual block device' that will be
   attached to a virtual disk that previously has had a file system
   installed on it

 * root=/dev/nfs nfsroot=/path/on/server ip=<blah_including server_IP>
   boot using an NFS-mounted root file system. This could be from a
   remote NFS server, or from an NFS server running in another
   domain. The latter is rather a useful option.

A typical setup might be to allocate a standard disk partition for
each domain and populate it with files. To save space, having a shared
read-only /usr partition might make sense.

Block devices should only be shared between domains in a read-only
fashion, otherwise the linux kernels will obviously get very confused
as the file system structure may change underneath them (having the
same partition mounted rw twice is a sure-fire way to cause
irreparable damage)! If you want read-write sharing, export the
directory to other domains via NFS from domain0.
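For example, domain0 might export a directory to the other domains' link-local addresses with an /etc/exports line like this (the path and options are illustrative):

```
# /etc/exports in domain0: read-write share for the link-local subnet
/data  169.254.1.0/255.255.255.0(rw,sync,no_root_squash)
```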
Installing the file systems from the CD
=======================================

If you haven't got an existing Linux installation onto which you can
just drop down the Xen and XenoLinux images, then the file systems on
the CD provide a quick way of doing an install. However, you're
probably better off in the long run doing a proper RedHat, Fedora,
Debian etc install rather than just doing the hack described below.

Choose one or two partitions, depending on whether you want a separate
/usr or not. Make file systems on it/them, e.g.:
  mkfs -t ext3 /dev/hda3
  [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
  version of mkfs]

Next, mount the file system(s), e.g.:
  mkdir /mnt/root && mount /dev/hda3 /mnt/root
  [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]

To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
  cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz

You'll need to edit /mnt/root/etc/fstab to reflect your file system
configuration. Changing the password file (etc/shadow) is probably a
good idea too.

To install the usr file system, copy the file system from the CD to
/mnt/usr, leaving out the "XenDemoCD" and "boot" directories:
  cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr

If you intend to boot off these file systems (i.e. use them for
domain 0), then you probably want to copy the /usr/boot directory on
the CD over the top of the current symlink to /boot on your root
filesystem (after deleting the current symlink), i.e.:
  cd /mnt/root ; rm boot ; cp -a /usr/boot .

The XenDemoCD directory is only useful if you want to build your own
version of the XenDemoCD (see below).
Debugging
=========

Xen has a set of debugging features that can be useful to try and
figure out what's going on. Hit 'h' on the serial line (if you
specified a baud rate on the Xen command line) or ScrollLock-h on the
keyboard to get a list of supported commands.

If you have a crash you'll likely get a crash dump containing an EIP
(PC) which, along with an 'objdump -d image', can be useful in
figuring out what's happened. Debug a XenoLinux image just as you
would any other Linux kernel.

We supply a handy debug terminal program which you can find in
/usr/local/src/xen-1.0/xeno-1.0.bk/tools/misc/miniterm/
This should be built and executed on another machine that is connected
via a null modem cable. Documentation is included.
Alternatively, telnet can be used in 'char mode' if the Xen machine is
connected to a serial-port server.
Installing Xen / XenoLinux on a RedHat distribution
===================================================

When using Xen / XenoLinux on a standard Linux distribution there are
a couple of things to watch out for:

The first Linux VM that is started when Xen boots (domain 0) is
given direct access to the graphics card, so it may use it as a
console. Other domains don't have ttyN consoles, so attempts to run a
'mingetty' against them will fail, generating periodic warning
messages from 'init' about services respawning too fast. They should
work for domain0 just fine.

In future, we may make the current 'xencons' accept input as well as
output, so that a getty can be run against it. In the meantime, other
domains don't have a console suitable for logging in on, so you'll
have to run sshd and ssh in to them.

To prevent the warning messages you'll need to remove them from
/etc/inittab for domains>0. Due to a bug in the RH9 /etc/rc.sysinit
script, commenting the lines out of /etc/inittab with '#' won't work,
as it ignores the '#' and tries to access them anyway.
Also, because domains>0 don't have any privileged access at all,
certain commands in the default boot sequence will fail, e.g. attempts
to update the hwclock, change the console font, update the keytable
map, start apmd (power management), or gpm (mouse cursor). Either
ignore the errors, or remove them from the startup scripts. Deleting
the following links is a good start: S24pcmcia S09isdn S17keytable
S26apmd S85gpm

If you want to use a single root file system that works cleanly for
domain0 and domains>0, one trick is to use different 'init' run
levels. For example, on the Xen Demo CD we use run level 3 for domain
0, and run level 4 for domains>0. This enables different startup
scripts to be run depending on the run level number passed on the
kernel command line.
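As a sketch, the /etc/inittab entries can be tagged with runlevel 3 only, so that domains>0 booting at runlevel 4 skip them (illustrative lines, not the CD's actual file):

```
# Default runlevel for domain0; domains>0 get '4' on the kernel cmdline.
id:3:initdefault:
# Spawn gettys only in runlevel 3, where a console exists.
1:3:respawn:/sbin/mingetty tty1
2:3:respawn:/sbin/mingetty tty2
```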
XenoLinux kernels can be built to use runtime loadable modules just
like normal linux kernels. Modules should be installed under
/lib/modules in the normal way.

If there's some kernel feature that hasn't been built into our default
kernel, there's a pretty good chance that if it's a non-hardware
related option you'll just be able to enable it and rebuild. If it's
not on the xconfig menu, hack arch/xeno/config.in to put the menu
entry back in.
If you're going to use the link local 169.254.1.x addresses to
communicate between VMs, there are a couple of other issues to watch
out for. RH9 appears to have a bug whereby by default it configures
the loopback interface with a 169.254 address, which stops it working
properly on eth0 for communicating with other domains.

This utterly daft RH9 behaviour can be stopped by appending
"NOZEROCONF=yes" to /etc/sysconfig/network-scripts/ifcfg-lo
If you're going to use NFS root file systems mounted either from an
external server or from domain0, there are a couple of other gotchas.
The default /etc/sysconfig/iptables rules block NFS, so part way
through the boot sequence things will suddenly go dead.

If you're planning on having a separate NFS /usr partition, the RH9
boot scripts don't make life easy, as they attempt to mount NFS file
systems way too late in the boot process. The easiest way I found to
do this was to have a '/linuxrc' script run ahead of /sbin/init that
mounts /usr:

  #!/bin/bash
  /sbin/ifconfig lo 127.0.0.1 up
  /sbin/portmap
  /bin/mount /usr
  exec /sbin/init "$@" <>/dev/console 2>&1
The one slight complication with the above is that /sbin/portmap is
dynamically linked against /usr/lib/libwrap.so.0. Since this is in
/usr, it won't work. I solved this by copying the file (and link)
below the /usr mount point, and just letting the file be 'covered'
when the mount happens.

In some installations, where a shared read-only /usr is being used, it
may be desirable to move other large directories over into the
read-only /usr. For example, on the XenDemoCD we replace /bin, /lib and
/sbin with links into /usr/root/bin, /usr/root/lib and /usr/root/sbin
respectively. This creates other problems for running the /linuxrc
script, requiring bash, portmap, mount, ifconfig, and a handful of
other shared libraries to be copied below the mount point. I guess I
should have written a little statically linked C program...
Description of how the XenDemoCD boots
======================================

1. Grub is used to load Xen, a XenoLinux kernel, and an initrd (initial
   ram disk). [The source of the version of Grub used is in /usr/local/src]

2. The init=/linuxrc command line causes linux to execute /linuxrc in
   the initrd.

3. The /linuxrc file attempts to mount the CD by trying the likely
   locations: /dev/hd[abcd].

4. It then creates a 'tmpfs' file system and untars the
   'XenDemoCD/root.tar.gz' file into the tmpfs. This contains hopefully
   all the files that need to be mutable (this would be so much easier
   if Linux supported 'stacked' or union file systems...)

5. Next, /linuxrc uses the pivot_root call to change the root file
   system to the tmpfs, with the CD mounted as /usr.

6. It then invokes /sbin/init in the tmpfs and the boot proceeds
   normally.
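The steps above can be sketched as a shell script (a hedged illustration of the logic only, not the CD's actual /linuxrc):

```
#!/bin/bash
# Step 3: probe the likely CD locations.
for dev in /dev/hda /dev/hdb /dev/hdc /dev/hdd; do
    mount -o ro "$dev" /mnt/cdrom 2>/dev/null && break
done
# Step 4: RAM-based file system for the mutable files.
mount -t tmpfs none /mnt/root
tar -zxpf /mnt/cdrom/XenDemoCD/root.tar.gz -C /mnt/root
# Step 5: make the tmpfs the new root, then mount the CD as /usr.
mkdir -p /mnt/root/initrd
cd /mnt/root
pivot_root . initrd
mount -o ro "$dev" /usr
# Step 6: hand over to init.
exec /sbin/init
```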
Building your own version of the XenDemoCD
==========================================

The 'live ISO' version of RedHat is based heavily on Peter Anvin's
SuperRescue CD version 2.1.2 and J. McDaniel's Plan-B:

  http://www.kernel.org/pub/dist/superrescue/v2/
  http://projectplanb.org/

Since Xen uses a "multiboot" image format, it was necessary to change
the bootloader from isolinux to Grub 0.93 with Leonid Lisovskiy's
<lly@pisem.net> grub.0.93-iso9660.patch

The Xen Demo CD contains all of the build scripts that were used to
create it, so it is possible to 'unpack' the current iso, modify it,
then build a new iso. The procedure for doing so is as follows:
First, mount either the CD, or the iso image of the CD:

  mount /dev/cdrom /mnt/cdrom
or:
  mount -o loop xendemo-1.0.iso /mnt/cdrom

cd to the directory you want to 'unpack' the iso into, then run the
unpack script:

  cd /local/xendemocd
  /mnt/cdrom/XenDemoCD/unpack-iso.sh

The result is a 'build' directory containing the file system tree
under the 'root' directory, e.g. /local/xendemocd/build/root

To add or remove rpms, it's possible to use 'rpm' with the --root
option to set the path. For more complex changes, it's easiest to boot
a machine using the tree via NFS root. Before doing this, you'll need
to edit fstab to comment out the separate mount of /usr.
One thing to watch out for: as part of the CD build process, the
contents of the 'rootpatch' tree get copied over the existing 'root'
tree, replacing various files. The intention of the rootpatch tree is
to contain the files that have been modified from the original RH
distribution (e.g. various /etc files). This was done to make it
easier to upgrade to newer RH versions in the future. The downside of
this is that if you edit an existing file in the root tree you should
check that you don't also need to propagate the change to the
rootpatch tree to avoid it being overwritten.

Once you've made the changes and want to build a new iso, here's the
procedure:

  cd /local/xendemocd/build
  echo '<put_your_name_here>' > Builder
  ./make.sh put_your_version_id_here >../buildlog 2>&1

This process can take 30 mins even on a fast machine, but you should
eventually end up with an iso image in the build directory.
Notes:

root      - the root of the file system hierarchy as presented to the
            running system

rootpatch - contains files that have been modified from the standard
            RH, and copied over the root tree as part of the build
            procedure.

irtree    - the file system tree that will go into the initrd (initial
            ram disk)

work      - a working directory used in the build process

usr       - this should really be in 'work' as it's created as part of
            the build process. It contains the 'immutable' files that
            will be served from the CD rather than the tmpfs containing
            the contents of root.tar.gz. Some files that are normally
            in /etc or /var that are large and actually unlikely to
            need changing have been moved into /usr/root and replaced
            with links.


Ian Pratt
9 Sep 2003