Load balancing, which happens at the end of a "scheduler epoch", can
trigger vcpu migration, which in turn may call runq_tickle(). If the
cpu where this happens was idle, but we're now going to schedule a vcpu
on it, let's update the runq's idle cpus mask accordingly _before_ doing
load balancing.
Not doing that may cause runq_tickle() to think that the cpu is still
idle, and tickle it to go pick up a vcpu from the runqueue, which might
be wrong or, at least, suboptimal.
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
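
[Illustration, not part of the patch: a minimal, self-contained C
sketch of the ordering problem. struct runq_data, pick_cpu_to_tickle()
and the main() driver are simplified stand-ins for the real credit2
runqueue data and for runq_tickle()'s idle-cpu scan, not the actual
Xen code.]

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

/* Stand-in for the runqueue data; rqd->idle is a cpumask in the
 * real code, a plain bool array here. */
struct runq_data {
    bool idle[NR_CPUS];
};

/* Stand-in for the part of runq_tickle() that looks for an idle cpu
 * to poke: it trusts the idle mask blindly. */
static int pick_cpu_to_tickle(struct runq_data *rqd)
{
    for ( int i = 0; i < NR_CPUS; i++ )
        if ( rqd->idle[i] )
            return i;
    return -1;
}

int main(void)
{
    struct runq_data rqd = { .idle = { [2] = true } };
    int cpu = 2; /* cpu 2 was idle, but is about to run snext */

    /* Old ordering: load balancing (and hence tickling) runs while
     * cpu 2 is still in the idle mask, so cpu 2 gets tickled even
     * though it is about to schedule a vcpu anyway. */
    printf("old ordering would tickle cpu %d\n", pick_cpu_to_tickle(&rqd));

    /* New ordering: clear the idle mask first, then balance/tickle. */
    rqd.idle[cpu] = false;
    printf("new ordering would tickle cpu %d\n", pick_cpu_to_tickle(&rqd));

    return 0;
}

With the old ordering the sketch picks cpu 2 (a wasted tickle); with
the new one it finds no idle cpu, which mirrors what the patch achieves
by moving the mask clearing above balance_load().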
@@ ... @@
             __set_bit(__CSFLAG_scheduled, &snext->flags);
         }
 
+        /* Clear the idle mask if necessary */
+        if ( cpumask_test_cpu(cpu, &rqd->idle) )
+        {
+            __cpumask_clear_cpu(cpu, &rqd->idle);
+            smt_idle_mask_clear(cpu, &rqd->smt_idle);
+        }
+
         /*
          * The reset condition is "has a scheduler epoch come to an end?".
          * The way this is enforced is checking whether the vcpu at the top
@@ ... @@
             balance_load(ops, cpu, now);
         }
 
-        /* Clear the idle mask if necessary */
-        if ( cpumask_test_cpu(cpu, &rqd->idle) )
-        {
-            __cpumask_clear_cpu(cpu, &rqd->idle);
-            smt_idle_mask_clear(cpu, &rqd->smt_idle);
-        }
-
         snext->start_time = now;
         snext->tickled_cpu = -1;