sched: avoid races on time values read from NOW()

or (even in cases where there is no race, e.g., outside
of Credit2) avoid using a time sample which may be rather
old, and hence stale.
In fact, we should only sample NOW() from _inside_
the critical region within which the value we read is
used. If we don't, and we end up spinning for a while
before entering the region, then by the time we actually
use the sample:
 1) we will use something that, at the very least, is
    not really "now", because of the spinning;
 2) if someone else sampled NOW() during a critical
    region protected by the lock we are spinning on,
    and we compare the two samples once we get inside
    our own region, ours will look 'earlier', even
    though we actually arrived later, which is a race
    (see the sketch after this list).
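To make the difference concrete, here is a minimal,
stand-alone sketch of the two orderings. It is plain C,
using clock_gettime() and a pthread mutex as stand-ins for
Xen's NOW() and the schedule lock; the function names are
illustrative, not taken from this patch:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for Xen's NOW(): monotonic time in nanoseconds. */
    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Racy pattern: by the time the lock is acquired, 'now' can
     * be arbitrarily stale, and can predate samples taken by
     * whoever held the lock while we were spinning. */
    static void sample_before_lock(void)
    {
        uint64_t now = now_ns();     /* sampled outside the region */
        pthread_mutex_lock(&sched_lock);
        printf("possibly stale now = %llu\n", (unsigned long long)now);
        pthread_mutex_unlock(&sched_lock);
    }

    /* Fixed pattern: the sample is taken inside the critical
     * region, so it is fresh and totally ordered with respect to
     * samples taken by other holders of the same lock. */
    static void sample_inside_lock(void)
    {
        pthread_mutex_lock(&sched_lock);
        uint64_t now = now_ns();     /* sampled inside the region */
        printf("fresh now = %llu\n", (unsigned long long)now);
        pthread_mutex_unlock(&sched_lock);
    }

    int main(void)
    {
        sample_before_lock();
        sample_inside_lock();
        return 0;
    }

The second pattern is the transformation that the hunks at
the end of this message apply to Credit2, RTDS and the
generic scheduler code.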
In Credit2, we see an instance of 2) in runq_tickle():
when called from csched2_context_saved(), it samples
NOW() before acquiring the runq lock. This makes it look
as if time went backwards, and it confuses the algorithm
(there is even a d2printk() about it, which would trigger
all the time, if enabled). A sketch of the symptom
follows.
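To illustrate that symptom, here is a hedged, stand-alone
sketch; the struct, field and function names below are
invented for the example and do not appear in the Xen code:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative per-vcpu record: the previous lock holder
     * stored the time of the last accounting event, sampled
     * while it held the lock. */
    struct vcpu_times {
        int64_t last_event;
    };

    /* If 'now' was sampled before we started spinning on the
     * lock, it may predate 'last_event' even though we entered
     * the critical region later: the delta goes negative, and
     * time appears to have gone backwards. */
    static void account(struct vcpu_times *v, int64_t now)
    {
        int64_t delta = now - v->last_event;
        if (delta < 0)
            printf("time went backwards by %lld ns\n",
                   (long long)-delta);
        v->last_event = now;
    }

    int main(void)
    {
        struct vcpu_times v = { .last_event = 1000 };
        account(&v, 900);   /* stale, pre-lock sample: negative delta */
        account(&v, 1100);  /* fresh, under-lock sample: fine */
        return 0;
    }

Sampling NOW() only after the lock has been acquired, as the
hunks below do, makes such a negative delta impossible.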
In RTDS, something similar happens in repl_timer_handler(),
and there's another instance in schedule() (in generic code),
so fix these cases too.
While there, improve csched2_vcpu_wake() and rt_vcpu_wake()
a little as well (removing a pointless initialization, and
moving the sampling a bit closer to its use). These two hunks
entail no further functional changes.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
RTDS: fix another instance of the 'read NOW()' race

which was overlooked in 779511f4bf5ae ("sched: avoid
races on time values read from NOW()").
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
master commit: 779511f4bf5ae34820a85e4eb20d50c60f69e977
master date:   2016-05-23 14:39:51 +0200
master commit: 4074e4ebe9115ac4986f963a13feada3e0560460
master date:   2016-05-25 14:33:57 +0200
csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
{
struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
- s_time_t now = 0;
+ s_time_t now;
/* Schedule lock should be held at this point. */
csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
{
struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
- s_time_t now = NOW();
spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+ s_time_t now = NOW();
BUG_ON( !is_idle_vcpu(vc) && svc->rqd != RQD(ops, vc->processor));
rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
struct rt_vcpu *svc = rt_vcpu(vc);
- s_time_t now = NOW();
+ s_time_t now;
spinlock_t *lock;
/* do not allocate idle vcpu to dom vcpu list */
return;
lock = vcpu_schedule_lock_irq(vc);
+
+ now = NOW();
if ( now >= svc->cur_deadline )
rt_update_deadline(now, svc);
rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
{
struct rt_vcpu * const svc = rt_vcpu(vc);
- s_time_t now = NOW();
+ s_time_t now;
struct rt_private *prv = rt_priv(ops);
struct rt_vcpu *snext = NULL; /* highest priority on RunQ */
struct rt_dom *sdom = NULL;
return;
}
+ now = NOW();
if ( now >= svc->cur_deadline)
rt_update_deadline(now, svc);
static void schedule(void)
{
struct vcpu *prev = current, *next = NULL;
- s_time_t now = NOW();
+ s_time_t now;
struct scheduler *sched;
unsigned long *tasklet_work = &this_cpu(tasklet_work_to_do);
bool_t tasklet_work_scheduled = 0;
lock = pcpu_schedule_lock_irq(cpu);
+ now = NOW();
+
stop_timer(&sd->s_timer);
/* get policy-specific decision on scheduling... */