Documentation/RCU/rcuref.txt
Reference-count design for elements of lists/arrays protected by RCU.

Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward:
1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            read_lock(&list_lock);
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    write_lock(&list_lock);                 ...
    add_element                             read_unlock(&list_lock);
    ...                                 ...
    write_unlock(&list_lock);           }
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     write_lock(&list_lock);
    atomic_dec(&el->rc);                    ...
    ...                                     delete_element
}                                           write_unlock(&list_lock);
                                            ...
                                            if (atomic_dec_and_test(&el->rc))
                                                kfree(el);
                                            ...
                                        }
If this list/array is made lock free using RCU, as in changing the
write_lock() in add() and delete() to spin_lock() and changing read_lock()
in search_and_reference() to rcu_read_lock(), the atomic_inc() in
search_and_reference() could potentially hold a reference to an element
which has already been deleted from the list/array. Use
atomic_inc_not_zero() in this scenario as follows:
1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 if (!atomic_inc_not_zero(&el->rc)) {
    spin_lock(&list_lock);                      rcu_read_unlock();
                                                return FAIL;
    add_element                             }
    ...                                     ...
    spin_unlock(&list_lock);                rcu_read_unlock();
}                                       }

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       delete_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                call_rcu(&el->head, el_free);
                                            ...
                                        }
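The key primitive here is atomic_inc_not_zero(): it takes a reference only if the count is still non-zero, so a reader racing with the final reference drop sees failure instead of resurrecting a dying element. Its semantics can be sketched in userspace with a C11 compare-exchange loop (the kernel's actual implementation differs; this is only a model of the behavior):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Increment *rc unless it is zero; return true on success.
 * Models the semantics of the kernel's atomic_inc_not_zero(). */
static bool inc_not_zero(atomic_int *rc)
{
    int old = atomic_load(rc);
    do {
        if (old == 0)
            return false;  /* element is already on its way to being freed */
    } while (!atomic_compare_exchange_weak(rc, &old, old + 1));
    return true;
}
```

The compare-exchange loop makes the "check for zero, then increment" step atomic: if another CPU drops the count to zero between our load and our update, the exchange fails, we reload, and we observe the zero.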
Sometimes, a reference to the element needs to be obtained in the
update (write) stream. In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock. One might instead
use atomic_inc() in such cases.
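The reason a plain increment suffices is that delete() also needs the update-side lock, so while we hold it the count cannot be driven to zero underneath us. A minimal userspace sketch of that pattern, again with hypothetical names and a pthread mutex standing in for the update-side spinlock:

```c
#include <pthread.h>
#include <stdatomic.h>

typedef struct el {
    atomic_int rc;
} el_t;

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Updater path: list_lock excludes delete(), so rc cannot reach zero
 * concurrently and a plain increment (kernel: atomic_inc()) is safe;
 * no atomic_inc_not_zero() needed. */
static void update_and_reference(el_t *el)
{
    pthread_mutex_lock(&list_lock);
    atomic_fetch_add(&el->rc, 1);
    pthread_mutex_unlock(&list_lock);
}
```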