#############################
__  __            _   ____
\ \/ /___ _ __   / | |___ \
 \  // _ \ '_ \  | |   __) |
 /  \  __/ | | | | |_ / __/
/_/\_\___|_| |_| |_(_)_____|
#############################

XenDemoCD 1.2
University of Cambridge Computer Laboratory
24 Jan 2004

http://www.cl.cam.ac.uk/netos/xen

Welcome to the Xen Demo CD!


Executive Summary
=================

This CD is a standalone demo of the Xen Virtual Machine Monitor (VMM)
and Linux-2.4 OS port (XenoLinux). It runs entirely off the CD,
without requiring hard disk installation. This is achieved using a RAM
disk to store mutable file system data while using the CD for
everything else. The CD can also be used for installing Xen/XenoLinux
to disk, and includes a source code snapshot along with all of the
tools required to build it.


Booting the CD
==============

The Xen VMM is currently fairly h/w specific, but porting new device
drivers is relatively straightforward thanks to Xen's Linux driver
compatibility layer. The current snapshot supports the following
hardware:

  CPU:  Pentium Pro/II/III/IV/Xeon, Athlon (i.e. P6 or newer); SMP supported
  IDE:  Intel PIIX chipset, others will be PIO only (slow)
  SCSI: Adaptec / Dell PERC RAID (aacraid), Fusion MPT, megaraid, Adaptec aic7xxx
  Net:  Recommended: Intel e1000, Broadcom BCM57xx (tg3), 3c905 (3c59x)
        Working, but require extra copies: pcnet32, Intel e100, tulip

Because of the demo CD's use of RAM disks, make sure you have plenty
of RAM (256MB+).

To try out the Demo, boot from CD (you may need to change your BIOS
configuration to do this), then select one of the four boot options
from the Grub menu:

  Xen / linux-2.4.24
  Xen / linux-2.4.24 using cmdline IP configuration
  Xen / linux-2.4.24 in "safe mode"
  linux-2.4.22

The last option is a plain linux kernel that runs on the bare machine,
and is included simply to help diagnose driver compatibility
problems. The "safe mode" boot option might be useful if you're having
problems getting Xen to work with your hardware, as it disables various
features such as SMP, and enables some debugging.

If you are going for a command line IP config, hit "e" at
the grub menu, then edit the "ip=" parameters to reflect your setup
e.g. "ip=<ipaddr>::<gateway>:<netmask>::eth0:off". It shouldn't be
necessary to set either the nfs server or hostname
parameters. Alternatively, once XenoLinux has booted you can login and
set up networking with 'dhclient' or 'ifconfig' and 'route' in the
normal way.

To make things easier for yourself, it's worth trying to arrange for an
IP address which is the first in a sequential range of free IP
addresses. It's useful to give each VM instance its own public IP
address (though it is possible to do NAT or use private addresses),
and the configuration files on the CD allocate IP addresses
sequentially for subsequent domains unless told otherwise.

After selecting the kernel to boot, stand back and watch Xen boot,
closely followed by "domain 0" running the XenoLinux kernel. The boot
messages can also be sent to the serial line by specifying the baud rate
on the Xen cmdline (e.g., 'ser_baud=9600'); this can be very useful
for debugging should anything important scroll off the screen. Xen's
startup messages will look quite familiar as much of the hardware
initialisation (SMP boot, apic setup) and device drivers are derived
from Linux.

If all is well, you should see the linux rc scripts start a
bunch of standard services including sshd. Login on the console or
via ssh:

     username: user       root
     password: xendemo    xendemo

Once logged in, it should look just like any regular linux box. All
the usual tools and commands should work as per usual. However,
because of the poor random access performance of CD drives, the
machine will feel very sluggish, and you may run out of memory if you
make significant modifications to the ramfs filesystem -- for the full
experience, install a Xen and XenoLinux image on your hard drive :-)

You can configure networking, either with 'dhclient' or manually via
'ifconfig' and 'route', remembering to edit /etc/resolv.conf if you
want DNS to work.
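
For example, a manual setup might look something like the following
(substitute your own addresses for the placeholders):

  ifconfig eth0 <ipaddr> netmask <netmask> up
  route add default gw <gateway>
  echo "nameserver <dns-server>" > /etc/resolv.conf

or, if you have a DHCP server on the network, simply:

  dhclient eth0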

You can start an X server with 'startx'. It defaults to a conservative
1024x768, but you can edit the script for higher resolutions. The CD
contains a load of standard software. You should be able to start
Apache, PostgreSQL, Mozilla etc in the normal way, but because
everything is running off CD the performance will be very sluggish and
you may run out of memory for the 'tmpfs' file system. You may wish
to go ahead and install Xen/XenoLinux on your hard drive, either
dropping Xen and the XenoLinux kernel down onto a pre-existing Linux
distribution, or using the file systems from the CD (which are based
on RH9). See the installation instructions later in this document.

If your video card requires 'agpgart' then it unfortunately won't yet
work with Xen, and you'll only be able to configure a VGA X
server. We're working on a fix for this for the next release.

If you want to browse the Xen / XenoLinux source, it's all located
under /usr/local/src/xeno-1.2, complete with BitKeeper
repository. We've also included source code and configuration
information for the various benchmarks we used in the SOSP paper.


Starting other domains
======================

Xen's privileged control interfaces can be accessed using a C library
(libxc.so) or an easier-to-use Python wrapper module (Xc). Example
script templates are provided in tools/examples/.

Anyway, the first thing to do is to set up a window in which you will
receive console output from other domains. Console output will arrive
as UDP packets destined for 169.254.1.0. The DemoCD's startup scripts
automatically bring up 169.254.1.0 as an alias called eth0:xen (see
/etc/sysconfig/network-scripts/ifcfg-eth0).

If you're not intending to configure the new domain with an IP address
on your LAN, then you'll probably want to use NAT. The
'xen_nat_enable' script installs a few useful iptables rules into domain0 to
enable NAT. [ NB: The intention is that in future Xen will do NAT
itself (actually RSIP), but this is part of a larger work package that
isn't ready to release. ]

Next, run the xen UDP console displayer:

  xen_read_console

[ This is currently output only, but in future we will have a
bidirectional domain console. ]

Xen has a management interface that can be manipulated from domain0 to
create new domains, control their CPU, network and memory resource
allocations, allocate IP addresses, grant access to disk partitions,
and suspend/resume domains to files, etc. The management interface is
implemented as a set of library functions (implemented in C) for which
there are Python language bindings.

We have developed a simple set of example python tools for
manipulating the interface, with the intention that more sophisticated
high-level management tools will be developed in due course. Within
the source repository the tools live in tools/examples/ but are
installed in /usr/local/bin/ on the CD.

Starting a new domain is achieved using xc_dom_create.py which
allocates resources to a new domain, populates it with a kernel image
(and optionally a ramdisk) and then starts it.

It parses a configuration file written in the Python language, the
default location of which is "/etc/xc/defaults", but this may be
overridden with the "-f" option. For the Demo CD, the defaults file
will cause domains to be created with ram-based root file systems, and
mount their /usr partition from the CD, just like domain0. (If you are
writing your own config file, the "example" script may be a better
starting point)

Variables can be initialised and passed into configuration files. Some
of these may be compulsory, others optional.

The 'defaults' file on the CD requires the 'ip' variable to be set to
tell Xen what IP address(es) should be routed to this domain. Xen
will route packets to the domain if they bear one of these addresses
as a destination address, and will also ensure that packets sent from
the domain contain one of the addresses as a source address (to
prevent spoofing). If multiple IP addresses are to be assigned to a
domain they can be listed in a comma separated list (with no
whitespace).

The 'mem' variable can be used to change the default memory allocation
of 64MB. For example to start a domain with two IP addresses and
72MB:

  xc_dom_create.py -Dip=128.23.45.34,169.254.1.1 -Dmem=72

[ Multiple variables may also be set with a single '-D' flag by
separating them with ':'. Also, it's possible to use DNS hostnames
rather than IP addresses. ]

When invoked with the '-n' option xc_dom_create.py will do a dry run
and just print out what resources and configuration the domain will
have e.g.:

  [root@xendemo]# xc_dom_create.py -D ip=commando-1.xeno,169.254.2.3 -Dmem=100
  Parsing config file 'defaults'

  VM image           : "/boot/xenolinux.gz"
  VM ramdisk         : "/boot/initrd.gz"
  VM memory (MB)     : "100"
  VM IP address(es)  : "128.232.38.51:169.254.2.3"
  VM block device(s) : "phy:cdrom,hdd,r"
  VM cmdline         : "ip=128.232.38.51:169.254.1.0:128.232.32.1:255.255.240.0::eth0:off root=/dev/ram0 rw init=/linuxrc 4 LOCALIP=169.254.2.3"

If you invoke xc_dom_create.py without the '-n' option you should see
the domain booting on your xen_read_console window.

The 169.254.x.x network is special in that it is the 'link local'
subnet, and is isolated from the external network and hence can only
be used for communication between virtual machines. By convention, we
usually give each domain a link local address. The startup scripts on
the CD have been modified to accept a LINKLOCAL= parameter on the
kernel command line and initialise an IP alias accordingly (see
/etc/sysconfig/network-scripts/ifcfg-eth0).

Linux only allows one IP address to be specified on the kernel command
line, so if you specify multiple IP addresses you'll need to configure
the new Linux VM with the other addresses manually (using ifconfig)
having logged in.
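
For example, to bring up a second address inside the new domain you
could add an interface alias once logged in (substitute your own
values for the placeholders):

  ifconfig eth0:1 <second-ipaddr> netmask <netmask> up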

If you inspect the 'defaults' config script you'll see that the new
domain was started with a '4' on the kernel command line to tell
'init' to go to runlevel 4 rather than the default of 3 used by
domain0. This is done simply to suppress a bunch of harmless error
messages that would otherwise occur when the new (unprivileged) domain
tried to access physical hardware resources while trying to set the
hwclock or system font, run gpm etc.

  [root@commando-0 examples]# ./xc_dom_create.py -?
  Usage: ./xc_dom_create.py <args>

  This tool is used to create and start new domains. It reads defaults
  from a file written in Python, having allowed variables to be set and
  passed into the file. Further command line arguments allow the
  defaults to be overridden. The defaults for each parameter are listed
  in [] brackets. Arguments are as follows:

  Arguments to control the parsing of the defaults file:
   -f config_file   -- Use the specified defaults script.
                       Default: ['/etc/xc/defaults']
   -D foo=bar       -- Set variable foo=bar before parsing config
                       E.g. '-D vmid=3:ip=1.2.3.4'
   -h               -- Print extended help message, including all arguments
   -n               -- Dry run only, don't actually create domain

  The config file /etc/xc/defaults requires the following vars to be defined:
   ip       -- List of IP addr(s) for Xen to route to domain
               e.g. -Dip='1.2.3.4,5.6.7.8'
  The following variables may be optionally defined:
   mem      -- Adjust initial memory allocation (default 64MB)
   netmask  -- Override netmask for kernel ip= command line
   gateway  -- Override gateway for kernel ip= command line

After it's booted, you should be able to ssh into your new domain from
domain0 using the link local 169.254.x.x address you assigned. If you
assigned a further IP address you should be able to ssh in using that
address too. If you ran the xen_nat_enable script, a bunch of port
redirects have been installed to enable you to ssh in to other domains
remotely even if you didn't assign an externally routeable address.
To access the new virtual machine remotely, use:

  ssh -p2201 root@IP.address.Of.Domain0    # use 2202 for domain 2 etc.

You can manipulate running domains using the xc_dom_control.py tool.
Invoking it without arguments prints some usage information.

To see what domains are running, run 'xc_dom_control list'. Using the
tool you can change scheduling parameters, pause a domain, send it a
shutdown request, or blow it away with the 'destroy' command. You can
even suspend it to disk (but you probably won't have enough memory to
do the latter if you're running off the demo CD).

  Usage: xc_dom_control [command] <params>

   stop       [dom]               -- pause a domain
   start      [dom]               -- un-pause a domain
   shutdown   [dom]               -- request a domain to shutdown
   destroy    [dom]               -- immediately terminate a domain
   pincpu     [dom] [cpu]         -- pin a domain to the specified CPU
   save       [dom] [file]        -- suspend a domain's memory to file
   restore    [file]              -- resume a domain from a file
   list                           -- print info about all domains
   listvbds                       -- print info about all virtual block devs
   cpu_bvtset [dom] [mcuadv] [warp] [warpl] [warpu]
                                  -- set scheduling parameters for domain
   cpu_bvtslice [slice]           -- default scheduler slice
   vif_stats  [dom] [vif]         -- get stats for a given network vif
   vif_addip  [dom] [vif] [ip]    -- add an IP address to a given vif
   vif_setsched [dom] [vif] [bytes] [usecs] -- rate limit vif bandwidth
   vif_getsched [dom] [vif]       -- print vif's scheduling parameters
   vbd_add    [dom] [uname] [dev] [mode] -- make disk/partition uname available
                                            to domain as dev
                                            e.g. 'vbd_add phy:sda3 hda1 rw'
   vbd_remove [dom] [dev]         -- remove disk or partition attached as 'dev'


Troubleshooting Problems
========================

If you have problems booting Xen, there are a number of boot parameters
that may be able to help diagnose problems:

 ignorebiostables   Disable parsing of BIOS-supplied tables. This may
                    help with some chipsets that aren't fully supported
                    by Xen. If you specify this option then ACPI tables are
                    also ignored, and SMP support is disabled.

 noreboot           Don't reboot the machine automatically on errors.
                    This is useful to catch debug output if you aren't
                    catching console messages via the serial line.

 nosmp              Disable SMP support.
                    This option is implied by 'ignorebiostables'.

 noacpi             Disable ACPI tables, which confuse Xen on some chipsets.
                    This option is implied by 'ignorebiostables'.

 watchdog           Enable NMI watchdog which can report certain failures.

 noht               Disable Hyperthreading.

 ifname=ethXX       Select which Ethernet interface to use.

 ifname=dummy       Don't use any network interface.

 ser_baud=xxx       Enable serial I/O and set the baud rate (COM1).

 dom0_mem=xxx       Set the initial amount of memory for domain0.

 pdb=xxx            Enable the pervasive debugger. See docs/pdb.txt.
                    xxx defines how the gdb stub will communicate:
                      com1    use com1
                      com1H   use com1 (with high bit set)
                      com2    use com2
                      com2H   use com2 (with high bit set)

It's probably a good idea to join the Xen developers' mailing list on
Sourceforge: http://lists.sourceforge.net/lists/listinfo/xen-devel


About The Xen Demo CD
=====================

The purpose of the Demo CD is to distribute a snapshot of Xen's
source, and simultaneously provide a convenient means for enabling
people to get experience playing with Xen without needing to install
it on their hard drive. If you decide to install Xen/XenoLinux you can
do so simply by following the installation instructions below -- which
essentially involves copying the contents of the CD on to a suitably
formatted disk partition, and then installing or updating the Grub
bootloader.

This is a bootable CD that loads Xen, and then a Linux 2.4.22 OS image
ported to run on Xen. The CD contains a copy of a file system based on
the RedHat 9 distribution that is able to run directly off the CD
("live ISO"), using a "tmpfs" RAM-based file system for root (/etc,
/var etc). Changes you make to the tmpfs will obviously not be
persistent across reboots!

Because of the use of a RAM-based file system for root, you'll need
plenty of memory to run this CD -- something like 96MB per VM. This is
not a restriction of Xen: once you've installed Xen, XenoLinux and
the file system images on your hard drive you'll find you can boot VMs
in just a few MBs.

The CD contains a snapshot of the Xen and XenoLinux code base that we
believe to be pretty stable, but lacks some of the features that are
currently still work in progress, e.g. OS suspend/resume to disk, and
various memory management enhancements to provide fast inter-OS
communication and sharing of memory pages between OSs. We'll release
newer snapshots as required, making use of a BitKeeper repository
hosted on http://xen.bkbits.net (follow instructions from the project
home page). We're obviously grateful to receive any bug fixes or
other code you can contribute. We suggest you join the
xen-devel@lists.sourceforge.net mailing list.


Installing from the CD
======================

If you're installing Xen/XenoLinux onto an existing linux file system
distribution, just copy the Xen VMM (/boot/image.gz) and XenoLinux
kernel (/boot/xenolinux.gz), then modify the Grub config
(/boot/grub/menu.lst or /boot/grub/grub.conf) on the target system.
It should work on pretty much any distribution.

Xen is a "multiboot" standard boot image. Despite being a 'standard',
few boot loaders actually support it. The only two we know of are
Grub, and our modified version of linux kexec (for booting off a
XenoBoot CD -- PlanetLab have adopted the same boot CD approach).

If you need to install grub on your system, you can do so either by
building the Grub source tree in
/usr/local/src/grub-0.93-iso9660-splashimage or by copying over all
the files in /boot/grub and then running /sbin/grub and following the
usual grub documentation. You'll then need to edit the Grub
config file.
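
As a rough sketch, installing Grub into the MBR from the grub shell
looks like the following (the partition named here is only an example;
point it at whichever partition holds /boot/grub, e.g. (hd0,2) for
/dev/hda3):

  /sbin/grub
  grub> root (hd0,2)
  grub> setup (hd0)
  grub> quit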

A typical Grub menu option might look like:

  title Xen / XenoLinux 2.4.22
    kernel /boot/image.gz dom0_mem=131072 ser_baud=115200 noht
    module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0

The first line specifies which Xen image to use, and what command line
arguments to pass to Xen. In this case we set the maximum amount of
memory to allocate to domain0, and enable serial I/O at 115200 baud.
We could also disable SMP support (nosmp) or disable hyper-threading
support (noht). If you have multiple network interfaces you can use
ifname=ethXX to select which one to use. If your network card is
unsupported, use ifname=dummy.

The second line specifies which xenolinux image to use, and the
standard linux command line arguments to pass to the kernel. In this
case, we're configuring the root partition and stating that it should
initially be mounted read-only (normal practice).

If we were booting with an initial ram disk (initrd), then this would
require a second "module" line.
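
For example, such an entry might look something like this (the image
and initrd paths are illustrative; use whatever you installed):

  title Xen / XenoLinux 2.4.22 (with initrd)
    kernel /boot/image.gz dom0_mem=131072 ser_baud=115200 noht
    module /boot/xenolinux.gz root=/dev/ram0 rw init=/linuxrc
    module /boot/initrd.gz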


Installing the Xen tools and source
===================================

The tools and source live in the /usr/local/src/xen-1.2 directory on
the CD (and may also be downloaded from the project downloads
page). You'll need to copy them to some mutable storage before using
them.

If you have the BitKeeper BK tools installed you can check the
repository is up to date by cd'ing into the xeno-1.2.bk directory and
typing 'bk pull' (assuming you have an Internet connection).

You can rebuild Xen and the tools by typing 'make'. You can install
them to the standard directories with 'make install', or into the
../install subtree with 'make dist'.

  /usr/local/bin/xc_*                            the domain control tools
  /lib/libxc.so                                  the xc library
  /usr/lib/python2.2/site-packages/XenoUtil.py   python util library
  /usr/lib/python2.2/site-packages/Xc.c          python xc bindings

If you're using the virtual disk control tools (xc_vd_tool) you'll
need the SQLite library and the python binding pysqlite. There's a
tarball containing the necessary binaries on the project downloads page.


Modifying xc_mycreatelinuxdom1.py
=================================

xc_mycreatelinuxdom1.py can be used to set the new kernel's command line,
and hence determine what it uses as a root file system, etc. Although
the default is to boot in the same manner that domain0 did (using the
RAM-based file system for root and the CD for /usr) it's possible to
configure any of the following possibilities, for example:

 * initrd=/boot/initrd init=/linuxrc
   boot using an initial ram disk, executing /linuxrc (as per this CD)

 * root=/dev/hda3 ro
   boot using a standard hard disk partition as root
   !!! remember to grant access in createlinuxdom.py.

 * root=/dev/xvda1 ro
   boot using a pre-configured 'virtual block device' that will be
   attached to a virtual disk that previously has had a file system
   installed on it.

 * root=/dev/nfs nfsroot=/path/on/server ip=<blah_including server_IP>
   Boot using an NFS mounted root file system. This could be from a
   remote NFS server, or from an NFS server running in another
   domain. The latter is rather a useful option.

A typical setup might be to allocate a standard disk partition for
each domain and populate it with files. To save space, having a shared
read-only usr partition might make sense.

Block devices should only be shared between domains in a read-only
fashion, otherwise the linux kernels will obviously get very confused
as the file system structure may change underneath them (having the
same partition mounted rw twice is a sure-fire way to cause
irreparable damage)! If you want read-write sharing, export the
directory to other domains via NFS from domain0.
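
A minimal sketch of such an export from domain0 (the exported path is
just an example, and the address range here is the link local subnet)
would be an /etc/exports line like:

  /export/usr   169.254.0.0/255.255.0.0(rw,sync,no_root_squash)

followed by 'exportfs -ra' to make the NFS server pick up the change.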


Installing the file systems from the CD
=======================================

If you haven't got an existing Linux installation onto which you can
just drop down the Xen and XenoLinux images, then the file systems on
the CD provide a quick way of doing an install. However, you're
probably better off in the long run doing a proper Redhat, Fedora,
Debian etc install rather than just doing the hack described below:

Choose one or two partitions, depending on whether you want a separate
/usr or not. Make file systems on it/them e.g.:

  mkfs -t ext3 /dev/hda3
  [ or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
    version of mkfs ]

Next, mount the file system(s) e.g.:

  mkdir /mnt/root && mount /dev/hda3 /mnt/root
  [ mkdir /mnt/usr && mount /dev/hda4 /mnt/usr ]

To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:

  cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz

You'll need to edit /mnt/root/etc/fstab to reflect your file system
configuration. Changing the password file (etc/shadow) is probably a
good idea too.
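
As a rough guide, a minimal fstab for the two-partition layout used in
the commands above might look like the following (adjust devices and
file system types to suit your own setup):

  /dev/hda3   /         ext3     defaults          1 1
  /dev/hda4   /usr      ext3     defaults          1 2
  none        /proc     proc     defaults          0 0
  none        /dev/pts  devpts   gid=5,mode=620    0 0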

To install the usr file system, copy the file system from the CD's
/usr, leaving out the "XenDemoCD" and "boot" directories:

  cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr

If you intend to boot off these file systems (i.e. use them for
domain 0), then you probably want to copy the /usr/boot directory on
the cd over the top of the current symlink to /boot on your root
filesystem (after deleting the current symlink) i.e.:

  cd /mnt/root ; rm boot ; cp -a /usr/boot .

The XenDemoCD directory is only useful if you want to build your own
version of the XenDemoCD (see below).


Debugging
=========

Xen has a set of debugging features that can be useful to try and
figure out what's going on. Hit 'h' on the serial line (if you
specified a baud rate on the Xen command line) or ScrollLock-h on the
keyboard to get a list of supported commands.

If you have a crash you'll likely get a crash dump containing an EIP
(PC) which, along with an 'objdump -d image', can be useful in
figuring out what's happened. Debug a XenoLinux image just as you
would any other Linux kernel.

We supply a handy debug terminal program which you can find in
/usr/local/src/xen-1.0/xeno-1.0.bk/tools/misc/miniterm/
This should be built and executed on another machine that is connected
via a null modem cable. Documentation is included.
Alternatively, telnet can be used in 'char mode' if the Xen machine is
connected to a serial-port server.


Installing Xen / XenoLinux on a RedHat distribution
===================================================

When using Xen / XenoLinux on a standard Linux distribution there are
a couple of things to watch out for:

The first Linux VM that is started when Xen boots (Domain 0) is
given direct access to the graphics card, so it may use it as a
console. Other domains don't have ttyN consoles, so attempts to run a
'mingetty' against them will fail, generating periodic warning
messages from 'init' about services respawning too fast. They should
work for domain0 just fine.

In future, we may make the current 'xencons' accept input as well as
output, so that a getty can be run against it. In the meantime, other
domains don't have a console suitable for logging in on, so you'll
have to run sshd and ssh in to them.

To prevent the warning messages you'll need to remove the mingetty
lines from /etc/inittab for domains>0. Due to a bug in the RH9
/etc/rc.sysinit script, #'ing the lines out of /etc/inittab won't work
as it ignores the '#' and tries to access them anyway.
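
One way to do this (a sketch, assuming the stock RH9 mingetty entries)
is to delete the lines outright in the root file system used by
domains>0:

  grep -v mingetty /etc/inittab > /etc/inittab.new
  mv /etc/inittab.new /etc/inittab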

Also, because domains>0 don't have any privileged access at all,
certain commands in the default boot sequence will fail, e.g. attempts
to update the hwclock, change the console font, update the keytable
map, start apmd (power management), or gpm (mouse cursor). Either
ignore the errors, or remove them from the startup scripts. Deleting
the following links is a good start: S24pcmcia S09isdn S17keytable
S26apmd S85gpm

If you want to use a single root file system that works cleanly for
domain0 and domains>0, one trick is to use different 'init' run
levels. For example, on the Xen Demo CD we use run level 3 for domain
0, and run level 4 for domains>0. This enables different startup
scripts to be run depending on the run level number passed on the
kernel command line.
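
On a RedHat-style system, chkconfig can be used to choose which
services run in the run level used by domains>0; for example (the
exact set of services is up to you):

  chkconfig --level 4 apmd off
  chkconfig --level 4 gpm off
  chkconfig --level 4 kudzu off
  chkconfig --level 4 sshd on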

Xenolinux kernels can be built to use runtime loadable modules just
like normal linux kernels. Modules should be installed under
/lib/modules in the normal way.

If there's some kernel feature that hasn't been built into our default
kernel, there's a pretty good chance that if it's a non-hardware
related option you'll just be able to enable it and rebuild. If it's
not on the xconfig menu, hack the arch/xeno/config.in to put the menu
back in.

If you're going to use the link local 169.254.1.x addresses to
communicate between VMs, there are a couple of other issues to watch
out for. RH9 appears to have a bug where, by default, it configures the
loopback interface with a 169.254 address, which stops it working
properly on eth0 for communicating with other domains.

This utterly daft RH9 behaviour can be stopped by appending
"NOZEROCONF=yes" to /etc/sysconfig/network-scripts/ifcfg-lo

If you're going to use NFS root file systems mounted either from an
external server or from domain0 there are a couple of other gotchas.
The default /etc/sysconfig/iptables rules block NFS, so part way
through the boot sequence things will suddenly go dead.

If you're planning on having a separate NFS /usr partition, the RH9
boot scripts don't make life easy, as they attempt to mount NFS file
systems way too late in the boot process. The easiest way I found to do
this was to have a '/linuxrc' script run ahead of /sbin/init that
mounts /usr:

  #!/bin/bash
  # bring up loopback and the portmapper, then mount the NFS /usr
  /sbin/ifconfig lo 127.0.0.1
  /sbin/portmap
  /bin/mount /usr
  exec /sbin/init "$@" <>/dev/console 2>&1

The one slight complication with the above is that /sbin/portmap is
dynamically linked against /usr/lib/libwrap.so.0. Since this is in
/usr, it won't work. I solved this by copying the file (and link)
below the /usr mount point, and just let the file be 'covered' when
the mount happens.

In some installations, where a shared read-only /usr is being used, it
may be desirable to move other large directories over into the
read-only /usr. For example, on the XenDemoCD we replace /bin, /lib and
/sbin with links into /usr/root/bin, /usr/root/lib and /usr/root/sbin
respectively. This creates other problems for running the /linuxrc
script, requiring bash, portmap, mount, ifconfig, and a handful of
other shared libraries to be copied below the mount point. I guess I
should have written a little statically linked C program...


Description of how the XenDemoCD boots
======================================

1. Grub is used to load Xen, a XenoLinux kernel, and an initrd (initial
   ram disk). [ The source of the version of Grub used is in /usr/local/src ]

2. The init=/linuxrc command line causes linux to execute /linuxrc in
   the initrd.

3. The /linuxrc file attempts to mount the CD by trying the likely
   locations: /dev/hd[abcd].

4. It then creates a 'tmpfs' file system and untars the
   'XenDemoCD/root.tar.gz' file into the tmpfs. This contains hopefully
   all the files that need to be mutable (this would be so much easier
   if Linux supported 'stacked' or union file systems...)

5. Next, /linuxrc uses the pivot_root call to change the root file
   system to the tmpfs, with the CD mounted as /usr.

6. It then invokes /sbin/init in the tmpfs and the boot proceeds
   normally.


Building your own version of the XenDemoCD
==========================================

The 'live ISO' version of RedHat is based heavily on Peter Anvin's
SuperRescue CD version 2.1.2 and J. McDaniel's Plan-B:

  http://www.kernel.org/pub/dist/superrescue/v2/
  http://projectplanb.org/

Since Xen uses a "multiboot" image format, it was necessary to change
the bootloader from isolinux to Grub 0.93 with Leonid Lisovskiy's
<lly@pisem.net> grub.0.93-iso9660.patch

The Xen Demo CD contains all of the build scripts that were used to
create it, so it is possible to 'unpack' the current iso, modify it,
then build a new iso. The procedure for doing so is as follows:

First, mount either the CD, or the iso image of the CD:

  mount /dev/cdrom /mnt/cdrom
or:
  mount -o loop xendemo-1.0.iso /mnt/cdrom

cd to the directory you want to 'unpack' the iso into, then run the
unpack script:

  cd /local/xendemocd
  /mnt/cdrom/XenDemoCD/unpack-iso.sh

The result is a 'build' directory containing the file system tree
under the 'root' directory, e.g. /local/xendemocd/build/root

To add or remove rpms, it's possible to use 'rpm' with the --root
option to set the path. For more complex changes, it's easiest to boot
a machine using the tree via NFS root. Before doing this, you'll
need to edit fstab to comment out the separate mount of /usr.
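
For example (the package name is purely illustrative):

  rpm --root /local/xendemocd/build/root -ivh some-package.rpm
  rpm --root /local/xendemocd/build/root -e  some-package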

One thing to watch out for: as part of the CD build process, the
contents of the 'rootpatch' tree get copied over the existing 'root'
tree, replacing various files. The intention of the rootpatch tree is
to contain the files that have been modified from the original RH
distribution (e.g. various /etc files). This was done to make it
easier to upgrade to newer RH versions in the future. The downside of
this is that if you edit an existing file in the root tree you should
check that you don't also need to propagate the change to the
rootpatch tree to avoid it being overwritten.

Once you've made the changes and want to build a new iso, here's the
procedure:

  cd /local/xendemocd/build
  echo '<put_your_name_here>' > Builder
  ./make.sh put_your_version_id_here >../buildlog 2>&1

This process can take 30 mins even on a fast machine, but you should
eventually end up with an iso image in the build directory.

Notes:

 root      - the root of the file system hierarchy as presented to the
             running system

 rootpatch - contains files that have been modified from the standard
             RH, and copied over the root tree as part of the build
             procedure.

 irtree    - the file system tree that will go into the initrd (initial
             ram disk)

 work      - a working directory used in the build process

 usr       - this should really be in 'work' as it's created as part of
             the build process. It contains the 'immutable' files that
             will be served from the CD rather than the tmpfs containing
             the contents of root.tar.gz. Some files that are normally in
             /etc or /var that are large and actually unlikely to need
             changing have been moved into /usr/root and replaced with
             links.


Ian Pratt
9 Sep 2003