Because the location of the lock can change between the time you read
it and the time you grab it, the per-cpu schedule locks need to check,
after acquiring the lock, that its location hasn't changed, and release
and retry if it has. This change was made throughout the source code,
but one very important place was apparently missed: in schedule()
itself. (The check-and-retry idiom is sketched below.)
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
Committed-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 25162:478bec603d3d
xen-unstable date: Tue Apr 10 10:41:30 2012 +0100
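
For reference, the check-and-retry idiom described above looks roughly
like this. This is a sketch modeled on the pcpu_schedule_lock() helper
in xen/include/xen/sched-if.h of that era; exact names and the
irq/irqsave variants differ between trees:

/* Sketch of the per-cpu lock helper (cf. xen/include/xen/sched-if.h).
 * Relies on Xen's per_cpu() and spinlock primitives. */
static inline void pcpu_schedule_lock(int cpu)
{
    spinlock_t *lock;

    for ( ; ; )
    {
        /* Read the current location of this CPU's scheduler lock. */
        lock = per_cpu(schedule_data, cpu).schedule_lock;
        spin_lock(lock);
        /*
         * A cpupool operation may have moved the lock while we were
         * waiting for it.  If so, drop the stale lock and retry.
         */
        if ( likely(lock == per_cpu(schedule_data, cpu).schedule_lock) )
            return;
        spin_unlock(lock);
    }
}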
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ ... @@ static void schedule(void)
     bool_t                tasklet_work_scheduled = 0;
     struct schedule_data *sd;
     struct task_slice     next_slice;
+    int                   cpu = smp_processor_id();
 
     ASSERT(!in_atomic());
@@ ... @@ static void schedule(void)
         BUG();
     }
 
-    spin_lock_irq(sd->schedule_lock);
+    pcpu_schedule_lock_irq(cpu);
 
     stop_timer(&sd->s_timer);
@@ ... @@ static void schedule(void)
     if ( unlikely(prev == next) )
     {
-        spin_unlock_irq(sd->schedule_lock);
+        pcpu_schedule_unlock_irq(cpu);
         trace_continue_running(next);
         return continue_running(prev);
     }
@@ ... @@ static void schedule(void)
     ASSERT(!next->is_running);
     next->is_running = 1;
 
-    spin_unlock_irq(sd->schedule_lock);
+    pcpu_schedule_unlock_irq(cpu);
 
     perfc_incr(sched_ctx);
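
The unlock side, by contrast, needs no retry loop: whatever code
relocates a CPU's scheduler lock (e.g. the cpupool code moving a CPU
between schedulers) must itself hold the old lock while doing so, so
the pointer cannot change under a lock holder. Again a sketch along
the lines of the sched-if.h helpers:

static inline void pcpu_schedule_unlock_irq(int cpu)
{
    /* Safe without a re-check: the lock location cannot move while
     * the lock is held, so this is the same lock we acquired. */
    spin_unlock_irq(per_cpu(schedule_data, cpu).schedule_lock);
}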