So, during domain destruction, we do:
  cpupool_rm_domain()     [ in domain_destroy() ]
  sched_destroy_domain()  [ in complete_domain_destroy() ]
Therefore, there's a window during which, from the
scheduler's point of view, a domain still exists outside
of any cpupool.
In fact, cpupool_rm_domain() sets d->cpupool=NULL, and
we don't allow that to be true for anything but the idle
domain (and there are, in fact, ASSERT()s and BUG_ON()s
to that effect).
Currently, we never actually check d->cpupool during that
window, but that does not mean the race is not there. For
instance, Credit2 iterates over the list of domains during
load balancing; if we were to add logic there that needs
to check d->cpupool, and any of those domains has already
had cpupool_rm_domain() called on it... Boom!
(In fact, calling __vcpu_has_soft_affinity() from inside
balance_load() makes `xl shutdown <domid>' reliably crash,
which is how I discovered the problem.)
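To make the failure mode concrete, here is a minimal, standalone model of
the race window; the structures and helper names are invented for the
example and only approximate Xen's real cpupool/scheduler code:

  /* Standalone model of the race window (illustrative, not Xen code). */
  #include <assert.h>
  #include <stdio.h>

  struct cpupool { unsigned long cpu_valid; };
  struct domain  { int domid; struct cpupool *cpupool; };

  /* Models cpupool_rm_domain(): cpupool bookkeeping only. */
  static void model_cpupool_rm_domain(struct domain *d)
  {
      d->cpupool = NULL;
  }

  /* Models a scheduler-side helper that needs the domain's cpupool. */
  static unsigned long model_domain_cpumask(const struct domain *d)
  {
      /* Only the idle domain may legitimately have d->cpupool == NULL. */
      assert(d->cpupool != NULL);   /* fires during the window...        */
      return d->cpupool->cpu_valid; /* ...or we dereference NULL here    */
  }

  int main(void)
  {
      struct cpupool pool = { .cpu_valid = 0xful };
      struct domain dom = { .domid = 1, .cpupool = &pool };

      /* domain_destroy() path: the domain leaves its cpupool... */
      model_cpupool_rm_domain(&dom);

      /*
       * ...but scheduler teardown has not run yet, so a load-balancing
       * style check that looks at d->cpupool now goes "Boom!".
       */
      printf("d%d cpus: %lx\n", dom.domid, model_domain_cpumask(&dom));
      return 0;
  }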
On the other hand, cpupool_rm_domain() "only" does
cpupool-related bookkeeping, and there's no harm in
postponing it a little bit.
Also, considering that, during domain initialization, we do:
  cpupool_add_domain()
  sched_init_domain()
it makes sense for the destruction path to be the mirror
image of that, i.e.:
  sched_destroy_domain()
  cpupool_rm_domain()
And hence that's what this patch does.
Actually, for better robustness, what we really do is
move both cpupool_add_domain() and cpupool_rm_domain()
inside sched_init_domain() and sched_destroy_domain(),
respectively (and also add a couple of ASSERT()s).
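For reference, the resulting pairing can be modelled like this (again a
simplified, standalone sketch with invented names; the real hunks follow
below):

  /* Model of the post-patch ordering (illustrative, not Xen code). */
  #include <assert.h>
  #include <stdio.h>

  struct cpupool { int id; };
  struct domain  { int domid; struct cpupool *cpupool; };

  static struct cpupool pool0 = { .id = 0 };

  static int model_cpupool_add_domain(struct domain *d, int poolid)
  {
      (void)poolid;          /* the model only has one pool */
      d->cpupool = &pool0;
      return 0;
  }

  static void model_cpupool_rm_domain(struct domain *d)
  {
      d->cpupool = NULL;
  }

  /* cpupool_add_domain() now happens first, inside sched_init_domain(). */
  static int model_sched_init_domain(struct domain *d, int poolid)
  {
      int ret;

      assert(d->cpupool == NULL);
      if ( (ret = model_cpupool_add_domain(d, poolid)) )
          return ret;
      printf("scheduler init for d%d\n", d->domid);
      return 0;
  }

  /* cpupool_rm_domain() now happens last, inside sched_destroy_domain(). */
  static void model_sched_destroy_domain(struct domain *d)
  {
      assert(d->cpupool != NULL);
      printf("scheduler teardown for d%d\n", d->domid);
      model_cpupool_rm_domain(d);
  }

  int main(void)
  {
      struct domain dom = { .domid = 1, .cpupool = NULL };

      if ( model_sched_init_domain(&dom, 0) )
          return 1;
      /* The scheduler never sees the domain without a cpupool. */
      model_sched_destroy_domain(&dom);
      return 0;
  }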
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
goto fail;
init_status |= INIT_arch;
- if ( (err = cpupool_add_domain(d, poolid)) != 0 )
- goto fail;
-
- if ( (err = sched_init_domain(d)) != 0 )
+ if ( (err = sched_init_domain(d, poolid)) != 0 )
goto fail;
if ( (err = late_hwdom_init(d)) != 0 )
TRACE_1D(TRC_DOM0_DOM_REM, d->domain_id);
- cpupool_rm_domain(d);
-
/* Delete from task list and task hashtable. */
spin_lock(&domlist_update_lock);
pd = &domain_list;
SCHED_OP(VCPU2OP(v), free_vdata, v->sched_priv);
}
-int sched_init_domain(struct domain *d)
+int sched_init_domain(struct domain *d, int poolid)
{
+ int ret;
+
+ ASSERT(d->cpupool == NULL);
+
+ if ( (ret = cpupool_add_domain(d, poolid)) )
+ return ret;
+
SCHED_STAT_CRANK(dom_init);
TRACE_1D(TRC_SCHED_DOM_ADD, d->domain_id);
return SCHED_OP(DOM2OP(d), init_domain, d);
void sched_destroy_domain(struct domain *d)
{
+ ASSERT(d->cpupool != NULL || is_idle_domain(d));
+
SCHED_STAT_CRANK(dom_destroy);
TRACE_1D(TRC_SCHED_DOM_REM, d->domain_id);
SCHED_OP(DOM2OP(d), destroy_domain, d);
+
+ cpupool_rm_domain(d);
}
void vcpu_sleep_nosync(struct vcpu *v)
void scheduler_init(void);
int sched_init_vcpu(struct vcpu *v, unsigned int processor);
void sched_destroy_vcpu(struct vcpu *v);
-int sched_init_domain(struct domain *d);
+int sched_init_domain(struct domain *d, int poolid);
void sched_destroy_domain(struct domain *d);
int sched_move_domain(struct domain *d, struct cpupool *c);
long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);