Hi. The Xen Project operates a CI system in a datacentre in Marlborough, Massachusetts.

[ testid growth slide ]

The number of tests keeps growing, with many contributions recently. This is a jolly good thing. We don't want to slow down development, so we need to keep pace with that. The Xen Project Advisory Board has approved a total hardware spend of about a hundred thousand dollars this year, for testing. I thought I'd give a rundown of the hardware that's there, and particularly of the new hardware we are buying this year (some of which is being delivered and installed right now).

[ rack development slide ]

The Xen Project test lab has been running since around 2015. The original procurement in 2014/2015 provided us with 24 low-end x86 test boxes. We also built, in the Citrix Cambridge office in the UK, a 4U crate containing 8 ARM32 development boards. This was shipped to Massachusetts and is now also part of the facility. After some initial teething troubles, almost all of that equipment is now properly working and in service. Four of the 8 ARM32 boards have a reliability problem with their network controller, which causes some random failures and annoyance. More about that later.

Since then, the 64-bit ARM architecture has come onto the stage and such hardware has started to be available. We recently purchased two Softiron Overdrive ARM64 servers, and two Cavium ThunderX systems. The Softiron servers are in production now, but we are still struggling slightly with firmware bugs. The Cavium systems were delivered very recently, and I'm told full testing will need to wait for some errata workarounds to be present in official versions of Xen. But we think that there is enough Linux support to use them for builds, which will relieve pressure on the Softiron servers.

We are also talking about getting some x86 systems designed for client use. These are interesting because they'd allow us to test power management features, as well as (of course) having yet more different CPUs and chipsets.

One thing you might notice about this slide is that there are generally two dates for each tranche of machines. This is because, when getting new hardware, it can often take a surprising length of time to get it properly into production. Even very ordinary x86 test boxes will sometimes turn out to have some oddities. For example, we had one pair of fairly normal x86 test hosts whose vendor was unable to get them to reboot reliably. After 12 months of futile efforts by their tech support, they gave us a replacement pair of completely different machines!

The ARM ecosystem has a greater diversity of silicon suppliers, which can present its own challenges. One particular area of difficulty has been ARM32. Xen on 32-bit ARM hardware is important for embedded applications, but much embedded hardware is hard to use for software testing: hardware dev boards often come in interesting form factors, and have other features which make them hard to deploy in a datacentre environment. This is why, when Xen on ARM was very new, we built a custom 4U box containing a number of development boards, support circuitry, and so on. This crate is not easily replicated, so our ARM32 capacity is rather limited at the moment. Also, four of the boards are of a type which has some network reliability problems. I'm glad to say that we have a good lead on improving this situation: EPAM are looking into making a backplane that would allow a number (hopefully, six) of their devboards to be deployed in a fairly standard 2U rackmount case.
I await this development with interest!

Another bottleneck is that much of the hardware acceptance testing and deployment management has to be done by just me. Our technical support partners at Credativ have been doing a lot of the physical work and general system administration, but we can't expect them to be fully up to speed on osstest's requirements and the Xen community's needs. I don't have a co-administrator (nor a co-maintainer for the osstest codebase). Volunteers very welcome!

[ rack layout slide ]

For those interested in the hardware and system administration plumbing, here's a diagram of the physical infrastructure.

We use hardware serial - actual RS232 - for all of the logging. This is because on-board logging (for example, as provided by IPMI) can be less reliable. Likewise, we have network-controlled power distribution units which allow each test box to be powered on and off under software control. With the test boxes all configured to netboot, this means that we can completely wipe and reinstall a test box after a test, so we don't need to worry about recovering from the crashes which might occur while testing development versions of Xen. (There's a rough sketch of that wipe-and-reinstall cycle at the end of this section.)

The two current VM hosts each have a connection to the global internet. They are running Xen (of course). They also host the multiport serial cards, which are passed through (with PCI passthrough) to dedicated serial concentrator VMs. Each of the two VM hosts has its own console connected to the other's serial concentrator. This means that if one of the machines fails and needs recovery action, we can try to recover it from the console; we also get a console log for each one that way. This is all achieved without a serial concentrator appliance (which would be unsuitable for exposing on the global internet anyway).

The hardware for the second rack is being delivered in batches around now. In the second rack we have only one PDU, based on our current usage in the first rack. We are also upgrading the RAM in our VM hosts: we are moving from two 32G servers to three 64G servers. That will give us some room for growth and also make it somewhat easier to work around hardware failures.

You'll see the test box names here on the slide. They appear in test reports too. I decided to name the test boxes after fruit: x86 boxes are wine grape cultivars (red grapes for AMD, white for Intel), and ARM boxes are cultivars of prunus (plums, prunes, and so on).
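To make the wipe-and-reinstall cycle mentioned above a little more concrete, here is a minimal sketch in Python. It is illustrative only, not how osstest actually does it: the PDU host, outlet numbers, SNMP OID, TFTP paths and hostname are all assumptions, and the exact mechanism depends on the PDU model and the DHCP/TFTP setup in the lab.

    #!/usr/bin/env python3
    # Rough sketch of the "wipe and reinstall" cycle described above.
    # All names are illustrative; the real lab drives this through osstest,
    # and the PDU commands, OID and TFTP layout depend on the actual kit.

    import shutil
    import subprocess
    import time

    PDU_HOST = "pdu1.example.test"           # hypothetical PDU address
    SNMP_COMMUNITY = "private"               # SNMP write community for the PDU
    OUTLET_OID = ".1.3.6.1.4.1.XXXX.outlet"  # placeholder: model-specific OID
    TFTP_DIR = "/srv/tftp/pxelinux.cfg"      # per-host PXE config directory


    def pdu_power(outlet: int, on: bool) -> None:
        """Switch one PDU outlet on or off via SNMP (net-snmp's snmpset)."""
        value = "1" if on else "2"           # 1=on, 2=off on e.g. APC rack PDUs
        subprocess.run(
            ["snmpset", "-v1", "-c", SNMP_COMMUNITY, PDU_HOST,
             f"{OUTLET_OID}.{outlet}", "i", value],
            check=True,
        )


    def set_netboot(host: str, target: str) -> None:
        """Point the host's PXE config at the installer or at local boot."""
        shutil.copy(f"{TFTP_DIR}/{target}.cfg", f"{TFTP_DIR}/{host}")


    def reinstall(host: str, outlet: int) -> None:
        """Wipe a test box: power off, netboot the installer, power back on."""
        pdu_power(outlet, on=False)
        time.sleep(10)                       # let the power supply drain
        set_netboot(host, "debian-installer")
        pdu_power(outlet, on=True)
        # The installer flips the box back to local boot when it finishes;
        # a real harness would now watch the serial console for completion.


    if __name__ == "__main__":
        reinstall("merlot0", outlet=3)       # a grape-named box, per the scheme

The point of the design is visible even in this toy version: because power and boot configuration are both under software control, a crashed or wedged test box never needs manual intervention to get back to a known-clean state.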