c/s 896 (the switch to blk_rq_map_sg()) made the problem worse, but as
far as I can tell the races (on ring and stats updates) existed before
that change. If that observation is wrong, a better solution might
instead be to move the struct scatterlist array out of struct blktap
and make it e.g. an on-stack variable. The difficulty with that is that
blktap_device_process_request() already has a pretty large stack frame;
shrinking it might be possible by moving e.g. the
struct blktap_grant_table and struct blkif_request blkif_req instances
the other way (into struct blktap), provided the locking change here is
the right thing to do.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
 		blkdev_dequeue_request(req);
 		spin_unlock_irq(&dev->lock);
-		down_read(&tap->tap_sem);
+		down_write(&tap->tap_sem);
 		err = blktap_device_process_request(tap, request, req);
 		if (!err)
 			blktap_request_free(tap, request);
 	}
-	up_read(&tap->tap_sem);
+	up_write(&tap->tap_sem);
 	spin_lock_irq(&dev->lock);
 }