If, when vcpu x wakes up, there are no idle pcpus in x's
soft affinity, we just go ahead and look at its hard
affinity. This basically means that, in __runq_tickle(),
if new_idlers_empty is true and balance_step is equal to
CSCHED_BALANCE_HARD_AFFINITY, then calling
csched_balance_cpumask() for any vcpu would just return
the vcpu's cpu_hard_affinity.
Therefore, don't bother calling it (it's just pure
overhead) and use cpu_hard_affinity directly.
For this very reason, this patch should be just a
(slight) optimization, entailing no functional change.
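
To see why the call is redundant, here is a minimal
sketch of what csched_balance_cpumask() conceptually
does (simplified, and not necessarily identical to the
actual helper in sched_credit.c): at the hard-affinity
step it boils down to a plain copy of cpu_hard_affinity.

  static inline void
  csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
  {
      if ( step == CSCHED_BALANCE_SOFT_AFFINITY )
      {
          /* Soft step: constrain the soft affinity to the hard one... */
          cpumask_and(mask, vc->cpu_soft_affinity, vc->cpu_hard_affinity);

          /* ...and fall back to the hard affinity if nothing is left. */
          if ( unlikely(cpumask_empty(mask)) )
              cpumask_copy(mask, vc->cpu_hard_affinity);
      }
      else /* step == CSCHED_BALANCE_HARD_AFFINITY */
      {
          /* Hard step: just a copy of cpu_hard_affinity. */
          cpumask_copy(mask, vc->cpu_hard_affinity);
      }
  }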
As a side note, it would make sense to do what the
patch does even if we could be inside the
[[ new_idlers_empty && new->pri > cur->pri ]] if
with balance_step equal to CSCHED_BALANCE_SOFT_AFFINITY.
In fact, what is actually happening is:
- vcpu x is waking up and, since there are no suitable
  idlers and it's entitled to it, it is preempting
  vcpu y;
- vcpu y's hard affinity is a superset of its
  soft-affinity mask.
Therefore, it makes sense to use the widest possible
mask (i.e., the hard affinity), as by doing that we
maximize the probability of finding an idle pcpu in
there to which we can send vcpu y, which will then be
able to run.
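
Just to illustrate that point with a made up example
(plain bitmasks standing in for cpumask_t, values purely
hypothetical): any idle pcpu found via the effective
soft-affinity mask is also found via the hard-affinity
mask, but not vice versa.

  #include <stdio.h>

  int main(void)
  {
      unsigned long hard = 0xf0UL; /* y can run on pcpus 4-7      */
      unsigned long soft = 0x30UL; /* y prefers pcpus 4-5         */
      unsigned long idle = 0xc0UL; /* but only pcpus 6-7 are idle */

      /* The effective soft affinity is soft & hard, i.e. a
       * subset of the hard affinity. */
      unsigned long eff_soft = soft & hard;

      printf("idle & soft = %#lx\n", idle & eff_soft); /* 0: nothing found */
      printf("idle & hard = %#lx\n", idle & hard);     /* 0xc0: pcpus 6-7  */

      return 0;
  }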
While there, also fix the comment, which contained
awkwardly nested parentheses.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
/*
* If there are no suitable idlers for new, and it's higher
* priority than cur, check whether we can migrate cur away.
- * (We have to do it indirectly, via _VPF_migrating, instead
+ * We have to do it indirectly, via _VPF_migrating (instead
* of just tickling any idler suitable for cur) because cur
- * is running.)
+ * is running.
*
* If there are suitable idlers for new, no matter priorities,
* leave cur alone (as it is running and is, likely, cache-hot)
*/
if ( new_idlers_empty && new->pri > cur->pri )
{
- csched_balance_cpumask(cur->vcpu, balance_step,
- cpumask_scratch_cpu(cpu));
- if ( cpumask_intersects(cpumask_scratch_cpu(cpu),
+ if ( cpumask_intersects(cur->vcpu->cpu_hard_affinity,
&idle_mask) )
{
SCHED_VCPU_STAT_CRANK(cur, kicked_away);