xen: credit1: small optimization in Credit1's tickling logic.
author Dario Faggioli <dario.faggioli@citrix.com>
Thu, 15 Sep 2016 11:35:03 +0000 (12:35 +0100)
committer George Dunlap <george.dunlap@citrix.com>
Thu, 15 Sep 2016 11:39:32 +0000 (12:39 +0100)
If, when vcpu x wakes up, there are no idle pcpus in x's
soft-affinity, we just go ahead and look at its hard
affinity. This basically means that, if new_idlers_empty
is true in __runq_tickle(), balance_step is equal to
CSCHED_BALANCE_HARD_AFFINITY, and calling
csched_balance_cpumask() for any vcpu would just return
that vcpu's cpu_hard_affinity.

Therefore, don't bother calling it (it's just pure
overhead) and use cpu_hard_affinity directly.

For this very reason, this patch should only be
a (slight) optimization, and entail no functional
change.
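
For reference, here is a minimal, standalone sketch of the
reasoning above. It models cpumasks as plain 64-bit words; the
balance_cpumask() helper and the BALANCE_* constants are
illustrative stand-ins for Xen's csched_balance_cpumask() and
CSCHED_BALANCE_*, not the actual hypervisor code:

    /* Standalone model (not Xen code): cpumasks as 64-bit words. */
    #include <stdint.h>
    #include <stdio.h>

    #define BALANCE_SOFT_AFFINITY 0
    #define BALANCE_HARD_AFFINITY 1

    /*
     * Simplified model of the balancing helper: for the hard-affinity
     * step it is nothing more than a copy of the hard-affinity mask.
     */
    static uint64_t balance_cpumask(uint64_t hard, uint64_t soft, int step)
    {
        if ( step == BALANCE_SOFT_AFFINITY )
        {
            uint64_t m = hard & soft;
            return m ? m : hard;   /* fall back to hard-affinity if empty */
        }
        return hard;               /* hard-affinity step: plain copy */
    }

    int main(void)
    {
        uint64_t hard = 0x0f, soft = 0x03, idle = 0x08;

        /* With the hard-affinity step, going through the helper and
         * using the hard mask directly give the same answer, so the
         * extra call (and mask copy) is pure overhead. */
        printf("%d\n", (balance_cpumask(hard, soft,
                                        BALANCE_HARD_AFFINITY) & idle) != 0);
        printf("%d\n", (hard & idle) != 0);
        return 0;
    }

Both prints yield 1 here: since the hard-affinity step is a plain
copy, intersecting the hard-affinity mask with the idle mask
directly is equivalent to going through the helper.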

As a side note, it would make sense to do what the
patch does even if we were inside the
[[ new_idlers_empty && new->pri > cur->pri ]] if
with balance_step equal to CSCHED_BALANCE_SOFT_AFFINITY.
In fact, what is actually happening is:
 - vcpu x is waking up and, since there are no suitable
   idlers and it's entitled to it, it is preempting
   vcpu y;
 - vcpu y's hard-affinity is a superset of its
   soft-affinity mask.

Therefore, it makes sense to use the widest possible
mask, as by doing that we maximize the probability of
finding an idle pcpu in there to which we can send
vcpu y, which will then be able to run.
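
To illustrate that with a toy example (again modelling cpumasks
as 64-bit words, not actual Xen code): since y's soft-affinity is
a subset of its hard-affinity, any idle pcpu reachable through the
soft mask is also reachable through the hard mask, so checking the
wider mask can only increase the chance of finding an idler:

    /* Toy check of the subset argument above (not Xen code). */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t soft = 0x03, hard = 0x0f, idle;

        assert( (soft & ~hard) == 0 );    /* soft is a subset of hard */

        /* For every possible idle mask over these 4 pcpus, a hit
         * through the soft mask implies a hit through the hard mask. */
        for ( idle = 0; idle < 16; idle++ )
            assert( !(soft & idle) || (hard & idle) );

        return 0;
    }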

While there, also fix the comment, which included
awkwardly nested parentheses.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
xen/common/sched_credit.c

index c2b4b24579736e0b7e289f5eb3e78b9d324def75..472fb2893ac5c8cab40c46ff4eea308f62adaa29 100644
@@ -433,9 +433,9 @@ static inline void __runq_tickle(struct csched_vcpu *new)
             /*
              * If there are no suitable idlers for new, and it's higher
              * priority than cur, check whether we can migrate cur away.
-             * (We have to do it indirectly, via _VPF_migrating, instead
+             * We have to do it indirectly, via _VPF_migrating (instead
              * of just tickling any idler suitable for cur) because cur
-             * is running.)
+             * is running.
              *
              * If there are suitable idlers for new, no matter priorities,
              * leave cur alone (as it is running and is, likely, cache-hot)
@@ -444,9 +444,7 @@ static inline void __runq_tickle(struct csched_vcpu *new)
              */
             if ( new_idlers_empty && new->pri > cur->pri )
             {
-                csched_balance_cpumask(cur->vcpu, balance_step,
-                                       cpumask_scratch_cpu(cpu));
-                if ( cpumask_intersects(cpumask_scratch_cpu(cpu),
+                if ( cpumask_intersects(cur->vcpu->cpu_hard_affinity,
                                         &idle_mask) )
                 {
                     SCHED_VCPU_STAT_CRANK(cur, kicked_away);