Results of running 'specjbb2005' on an increasing number (from 1 to 8) of (PV, for now) guests on the same shared host.
Numbers come from the execution of 5 runs of the benchmark for each configuration. Stats for the execution of the benchmark
within VM#1 are shown on this page. This is exactly the same as
the other 'specjbb2005' run, apart from the fact that this one had my patch series about NUMA support
(this one) applied!
Host and guest characteristics were the following:
- HOST: 16 CPUs, 2 NUMA nodes, 12GB RAM (2GB reserved for Dom0);
- GUESTS: 4 VCPUs, 1GB RAM.
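For reference, a minimal xl config file matching the guest characteristics above might look like the sketch below
(the guest name, kernel path and disk are hypothetical, not taken from the actual runs):

    # vm1.cfg -- hypothetical PV guest config matching the setup above
    name   = "vm1"
    kernel = "/boot/vmlinuz-guest"            # hypothetical kernel path
    vcpus  = 4                                # 4 VCPUs, as above
    memory = 1024                             # 1GB RAM
    disk   = [ 'phy:/dev/vg0/vm1,xvda,w' ]    # hypothetical disk backend
    root   = "/dev/xvda ro"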
The various lines in the plots have the following meaning:
- "default" is just the defaul Xen/xl behaviour when creating the various guests with 'xl create' and letting them run free
of any specific constraint wrt VCPU scheduling and memory allocation.
- "pinned" means that _only_ VM#1 was pinned on NODE#0 after being created, i.e., it has his memory striped on both the nodes
but can only be scheduled on the fist one.
- "best case" means that _only VM#1 was created on NODE#0 and stays pinned to it, i.e., it has all its memory allocated from
the node where it actually executes. We call it the best case as all its memory accesses are local wrt NUMA characteristics
of the host.
- "worst case" means that _only_ VM#1 was created on NODE#0 and then moved (by explicit pinning of its VCPUs) on NODE#1, i.e.,
it has all its memory allocated on a remote node. We call it the worst case as none of its memory accesses are local wrt NUMA
characteristics of the host.
- "node affinity" means that _only_ VM#1 was created with NUMA node affinity with NODE#0, i.e., it has all its memory allocated
there, and the scheduler is doing its best to keep it running on that same node, even if it is not pinned to its VCPUs.
In all cases, the other VMs involved in the specific run were created with the default memory allocation policy (striping)
and were scheduled without any constraint on all the host CPUs.
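For concreteness, here is a sketch of how the pinning-based scenarios can be set up with standard xl commands, assuming
NODE#0 hosts physical CPUs 0-7 and NODE#1 hosts CPUs 8-15 (an assumption about this particular box), and reusing the
hypothetical vm1.cfg from above:

    # "pinned": create VM#1 with no constraints (its memory ends up
    # striped across both nodes), then pin its VCPUs to NODE#0's CPUs
    xl create vm1.cfg
    xl vcpu-pin vm1 all 0-7

    # "best case": have the pinning in place at creation time instead
    # (cpus = "0-7" in vm1.cfg), so memory is allocated on NODE#0 too

    # "worst case": starting from the "best case", move the VCPUs to
    # NODE#1, turning every memory access into a remote one
    xl vcpu-pin vm1 all 8-15

The "node affinity" scenario relies on the interface introduced by the patch series mentioned above, so it is not shown here.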