			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - The CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) References.

============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).

For example, consider the following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = A;
        B = 4;          y = B;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4
        STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3
        STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4
        STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4
        STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3
        STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4
        STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 1, y == 2
        x == 1, y == 4
        x == 3, y == 2
        x == 3, y == 4

Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
As a further example, consider this sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B;         D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.

DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
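
In a real driver this ordering would normally be obtained by using the
kernel's ordered I/O accessors rather than raw pointer dereferences.  A
minimal sketch, assuming a hypothetical card whose address and data ports
live at illustrative offsets REG_ADDR and REG_DATA in an ioremap()ed window:

        #include <asm/io.h>

        #define REG_ADDR 0x00   /* address port (A) - illustrative offset */
        #define REG_DATA 0x04   /* data port (D) - illustrative offset */

        /* readl() and writel() are ordered with respect to each other on
         * the issuing CPU, so the address port is always written before
         * the data port is read. */
        static u32 card_read_internal(void __iomem *base, u32 index)
        {
                writel(index, base + REG_ADDR);
                return readl(base + REG_DATA);
        }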

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = P; D = *Q;

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = *X; *X = b;

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        *X = c; d = *X;

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).
And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; Y = *A;

     we may get either of:

        STORE *A = X; Y = LOAD *A;
        STORE *A = Y = X;

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance - including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.

VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.

 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.

 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.
And a couple of implicit varieties:

 (5) LOCK operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the LOCK operation will appear to happen after the LOCK
     operation with respect to the other components of the system.

     Memory operations that occur before a LOCK operation may appear to happen
     after it completes.

     A LOCK operation should almost always be paired with an UNLOCK operation.

 (6) UNLOCK operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the UNLOCK operation will appear to happen before
     the UNLOCK operation with respect to the other components of the system.

     Memory operations that occur after an UNLOCK operation may appear to
     happen before it completes.

     LOCK and UNLOCK operations are guaranteed to appear with respect to each
     other strictly in the order specified.

     The use of LOCK and UNLOCK operations generally precludes the need for
     other sorts of memory barrier (but note the exceptions mentioned in the
     subsection "MMIO write barrier").
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.

Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.

WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/pci.txt
            Documentation/DMA-mapping.txt
            Documentation/DMA-API.txt

DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        P = &B;
                        Q = P;
                        D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        P = &B;
                        Q = P;
                        <data dependency barrier>
                        D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).

Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

        CPU 1           CPU 2
        =============== ===============
        { M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
        M[1] = 4;
        <write barrier>
        P = 1;
                        Q = P;
                        <data dependency barrier>
                        D = M[Q];

The data dependency barrier is very important to the RCU system, for example.
See rcu_dereference() in include/linux/rcupdate.h.  This permits the current
target of an RCU'd pointer to be replaced with a new modified target, without
the replacement target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.
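
As a minimal sketch in C of this publication pattern (the structure, the
variable names and the value stored are purely illustrative):

        #include <asm/system.h> /* smp_wmb(), smp_read_barrier_depends() */

        struct foo {
                int a;
        };

        static struct foo *global_ptr;

        /* Publisher, running on CPU 1: fully initialise the new object,
         * then make it visible. */
        void publish(struct foo *new)
        {
                new->a = 42;
                smp_wmb();              /* commit the initialisation before
                                         * publishing the pointer */
                global_ptr = new;
        }

        /* Consumer, running on CPU 2. */
        int consume(void)
        {
                struct foo *p = global_ptr;

                smp_read_barrier_depends();     /* data dependency barrier */
                return p->a;    /* sees 42 if the new pointer was seen */
        }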

CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier to make it work correctly.  Consider the following bit of
code:

        q = &a;
        if (p)
                q = &b;
        <data dependency barrier>
        x = *q;

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit by
attempting to predict the outcome in advance.  In such a case what's actually
required is:

        q = &a;
        if (p)
                q = &b;
        <read barrier>
        x = *q;

SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read
barrier, though a general barrier would also be viable.  Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:

        CPU 1           CPU 2
        =============== ===============
        a = 1;
        <write barrier>
        b = 2;          x = b;
                        <read barrier>
                        y = a;

Or:

        CPU 1           CPU 2
        =============== ===============================
        a = 1;
        <write barrier>
        b = &a;         x = b;
                        <data dependency barrier>
                        y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or data dependency barrier, and vice
versa:

        CPU 1                       CPU 2
        ===============             ===============
        a = 1;           }----   --->{ v = c
        b = 2;           }    \ /    { w = d
        <write barrier>        \       <read barrier>
        c = 3;           }    / \    { x = a;
        d = 4;           }----   --->{ y = b;
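
The same pairing, expressed as a minimal C sketch (the shared variables and
the busy-wait loop are illustrative only; real kernel code would use a proper
waiting mechanism):

        static int data;
        static int flag;

        void writer(void)               /* runs on CPU 1 */
        {
                data = 42;
                smp_wmb();              /* pairs with smp_rmb() in reader() */
                flag = 1;
        }

        void reader(void)               /* runs on CPU 2 */
        {
                while (!flag)
                        barrier();      /* compiler barrier: reload flag */
                smp_rmb();              /* pairs with smp_wmb() in writer() */
                BUG_ON(data != 42);
        }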

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of
{ STORE A, STORE B, STORE C } all occurring before the unordered set of
{ STORE D, STORE E }:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible
        |       |  :    | A=1  |     }        \/       to rest of system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place.
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V

Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :

In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
        Makes sure all effects --->    \   ddddddddddddddddd    |       |
        prior to the store of C          \      +-------+       |       |
        are perceptible to                ----->| B->2  |------>|       |
        subsequent loads                        +-------+       |       |
                                                :       :       +-------+

And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :

If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
        At this point the read ---->    \  rrrrrrrrrrrrrrrrr    |       |
        barrier causes all effects       \      +-------+       |       |
        prior to the storage of B         ----->| A->1  |------>|       |
        to be perceptible to CPU 2              +-------+       |       |
                                                :       :       +-------+

To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
        At this point the read ---->    \  rrrrrrrrrrrrrrrrr    |       |
        barrier causes all effects       \      +-------+       |       |
        prior to the storage of B         ----->| A->1  |------>| 2nd   |
        to be perceptible to CPU 2              +-------+       |       |
                                                :       :       +-------+

But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+

The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.

READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+

Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+

but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->   --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+

========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();

This is a general barrier - lesser varieties of compiler barrier do not exist.

The compiler barrier has no direct effect on the CPU, which may then reorder
things however it wishes.
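
For example, a busy-wait on a flag that is set elsewhere (say, by an
interrupt handler) needs at least a compiler barrier to stop the compiler
caching the flag in a register; a minimal sketch, with an illustrative flag
variable:

        static int flag;

        void wait_for_flag(void)
        {
                while (!flag)
                        barrier();      /* force the compiler to reload flag
                                         * on each pass round the loop */
        }

Note that barrier() does nothing to constrain the CPU; on SMP an appropriate
CPU memory barrier may also be needed, as described in the next subsection.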

CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

        TYPE            MANDATORY               SMP CONDITIONAL
        =============== ======================= ===========================
        GENERAL         mb()                    smp_mb()
        WRITE           wmb()                   smp_wmb()
        READ            rmb()                   smp_rmb()
        DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()

All CPU memory barriers unconditionally imply compiler barriers.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.

There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts at least a write
     barrier after it, depending on the function.  It isn't guaranteed to
     insert anything more than a compiler barrier in a UP compilation.
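
     A typical use is setting a task state and then testing for a wakeup
     condition; a minimal sketch, with an illustrative event_pending flag:

        for (;;) {
                set_mb(current->state, TASK_UNINTERRUPTIBLE);
                if (event_pending)
                        break;
                schedule();
        }
        current->state = TASK_RUNNING;

     The barrier implied by set_mb() makes sure the state change is visible
     to a waker on another CPU before the condition is tested.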

 (*) smp_mb__before_atomic_dec();
 (*) smp_mb__after_atomic_dec();
 (*) smp_mb__before_atomic_inc();
 (*) smp_mb__after_atomic_inc();

     These are for use with atomic add, subtract, increment and decrement
     functions that don't return a value, especially when used for reference
     counting.  These functions do not imply memory barriers.

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.

 (*) smp_mb__before_clear_bit(void);
 (*) smp_mb__after_clear_bit(void);

     These are for use similar to the atomic inc/dec barriers.  These are
     typically used for bitwise unlocking operations, so care must be taken as
     there are no implicit memory barriers here either.

     Consider implementing an unlock operation of some nature by clearing a
     locking bit.  The clear_bit() would then need to be barriered like this:

        smp_mb__before_clear_bit();
        clear_bit( ... );

     This prevents memory operations before the clear leaking to after it.  See
     the subsection on "Locking Functions" with reference to UNLOCK operation
     implications.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.

MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

        mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.

===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.

LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "LOCK" operations and "UNLOCK" operations
for each construct.  These operations all imply certain barriers:

 (1) LOCK operation implication:

     Memory operations issued after the LOCK will be completed after the LOCK
     operation has completed.

     Memory operations issued before the LOCK may be completed after the LOCK
     operation has completed.

 (2) UNLOCK operation implication:

     Memory operations issued before the UNLOCK will be completed before the
     UNLOCK operation has completed.

     Memory operations issued after the UNLOCK may be completed before the
     UNLOCK operation has completed.

 (3) LOCK vs LOCK implication:

     All LOCK operations issued before another LOCK operation will be completed
     before that LOCK operation.

 (4) LOCK vs UNLOCK implication:

     All LOCK operations issued before an UNLOCK operation will be completed
     before the UNLOCK operation.

     All UNLOCK operations issued before a LOCK operation will be completed
     before the LOCK operation.

 (5) Failed conditional LOCK implication:

     Certain variants of the LOCK operation may fail, either due to being
     unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
barriers is that the effects of instructions outside of a critical section may
seep into the inside of the critical section.

A LOCK followed by an UNLOCK may not be assumed to be a full memory barrier
because it is possible for an access preceding the LOCK to happen after the
LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
two accesses can themselves then cross:

        *A = a;
        LOCK
        UNLOCK
        *B = b;

may occur as:

        LOCK, STORE *B, STORE *A, UNLOCK

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".

As an example, consider the following:

        *A = a;
        *B = b;
        LOCK
        *C = c;
        *D = d;
        UNLOCK
        *E = e;
        *F = f;

The following sequence of events is acceptable:

        LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK

        [+] Note that {*F,*A} indicates a combined access.

But none of the following are:

        {*F,*A}, *B,    LOCK, *C, *D,   UNLOCK, *E
        *A, *B, *C,     LOCK, *D,       UNLOCK, *E, *F
        *A, *B,         LOCK, *C,       UNLOCK, *D, *E, *F
        *B,             LOCK, *C, *D,   UNLOCK, {*F,*A}, *E

INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (LOCK equivalent) and enable interrupts
(UNLOCK equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some other
means.

MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.

=================================
INTER-CPU LOCKING BARRIER EFFECTS
=================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.

LOCKS VS MEMORY ACCESSES
------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

        CPU 1                           CPU 2
        =============================== ===============================
        *A = a;                         *E = e;
        LOCK M                          LOCK Q
        *B = b;                         *F = f;
        *C = c;                         *G = g;
        UNLOCK M                        UNLOCK Q
        *D = d;                         *H = h;

Then there is no guarantee as to what order CPU #3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

        *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M

But it won't see any of:

        *B, *C or *D preceding LOCK M
        *A, *B or *C following UNLOCK M
        *F, *G or *H preceding LOCK Q
        *E, *F or *G following UNLOCK Q

However, if the following occurs:

        CPU 1                           CPU 2
        =============================== ===============================
        *A = a;
        LOCK M          [1]
        *B = b;
        *C = c;
        UNLOCK M        [1]
        *D = d;                         *E = e;
                                        LOCK M          [2]
                                        *F = f;
                                        *G = g;
                                        UNLOCK M        [2]
                                        *H = h;

CPU #3 might see:

        *E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
                LOCK M [2], *H, *F, *G, UNLOCK M [2], *D

But assuming CPU #1 gets the lock first, it won't see any of:

        *B, *C, *D, *F, *G or *H preceding LOCK M [1]
        *A, *B or *C following UNLOCK M [1]
        *F, *G or *H preceding LOCK M [2]
        *A, *B, *C, *E, *F or *G following UNLOCK M [2]

LOCKS VS I/O ACCESSES
---------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        writel(1, DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        spin_unlock(Q);

may be seen by the PCI bridge as follows:

        STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.

What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        writel(1, DATA);
        mmiowb();
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        writel(5, DATA);
                                        mmiowb();
                                        spin_unlock(Q);

this will ensure that the two stores issued on CPU #1 appear at the PCI bridge
before either of the stores issued on CPU #2.

Furthermore, following a store by a load to the same device obviates the need
for an mmiowb(), because the load forces the store to complete before the load
is performed:

        CPU 1                           CPU 2
        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
        a = readl(DATA);
        spin_unlock(Q);
                                        spin_lock(Q);
                                        writel(4, ADDR);
                                        b = readl(DATA);
                                        spin_unlock(Q);

See Documentation/DocBook/deviceiobook.tmpl for more information.

=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices (I/O).

 (*) Interrupts.

INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

        struct rw_semaphore {
                ...
                spinlock_t lock;
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know as to where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.
In other words, it has to perform this sequence of events:

        LOAD waiter->list.next;
        LOAD waiter->task;
        STORE waiter->task;
        CALL wakeup
        RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

        CPU 1                           CPU 2
        =============================== ===============================
                                        down_xxx()
                                        Queue waiter
                                        Sleep
        up_yyy()
        LOAD waiter->task;
        STORE waiter->task;
                                        Woken up by other event
        <preempt>
                                        Resume processing
                                        down_xxx() returns
                                        call foo()
                                        foo() clobbers *waiter
        </preempt>
        LOAD waiter->list.next;
        --- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

        LOAD waiter->list.next;
        LOAD waiter->task;
        smp_mb();
        STORE waiter->task;
        CALL wakeup
        RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.

ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation.  These include:

        xchg();
        cmpxchg();
        atomic_cmpxchg();
        atomic_inc_return();
        atomic_dec_return();
        atomic_add_return();
        atomic_sub_return();
        atomic_inc_and_test();
        atomic_dec_and_test();
        atomic_sub_and_test();
        atomic_add_negative();
        atomic_add_unless();
        test_and_set_bit();
        test_and_clear_bit();
        test_and_change_bit();

These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.
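
For instance, a reference-count release can rely on the barrier implied by
atomic_dec_and_test(); a minimal sketch, with an illustrative object type:

        #include <asm/atomic.h>
        #include <linux/slab.h>

        struct obj {
                atomic_t ref_count;
                /* ... payload ... */
        };

        /* atomic_dec_and_test() implies smp_mb() on either side of the
         * decrement, so all of this CPU's prior accesses to the object
         * are ordered before the potential free. */
        void obj_put(struct obj *obj)
        {
                if (atomic_dec_and_test(&obj->ref_count))
                        kfree(obj);
        }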

The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

        atomic_set();
        set_bit();
        clear_bit();
        change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).

The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

        atomic_add();
        atomic_sub();
        atomic_inc();
        atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus the memory barrier,
unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.

ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.

INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

        LOCAL IRQ DISABLE
        writew(3, ADDR);
        writew(y, DATA);
        LOCAL IRQ ENABLE
        <interrupt>
        writew(4, ADDR);
        q = readw(DATA);
        </interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

        STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA

If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt-disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.

A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.

==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept - which on such
     CPUs as i386 and x86_64 readily maps to the CPU's concept of I/O space.
     However, it may also be mapped as a virtual I/O space in the CPU's memory
     map, particularly on those CPUs that don't support alternate I/O spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI (see the sketch after this list).

     [*] NOTE! attempting to load from the same location as was written to may
         cause a malfunction - consider the 16550 Rx/Tx serial registers for
         example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.

 (*) ioreadX(), iowriteX()

     These will perform as appropriate for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
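
As a minimal sketch of flushing a posted write with a read from the same
device, as mentioned under readX()/writeX() above (the CTRL and STATUS
register offsets are purely illustrative):

        writel(val, base + CTRL);       /* may be posted by a PCI bridge */
        (void) readl(base + STATUS);    /* a read from the same device
                                         * forces the posted write to
                                         * reach the hardware first */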
1696 ========================================
1697 ASSUMED MINIMUM EXECUTION ORDERING MODEL
1698 ========================================
1700 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
1701 maintain the appearance of program causality with respect to itself. Some CPUs
1702 (such as i386 or x86_64) are more constrained than others (such as powerpc or
1703 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
1704 of arch-specific code.
1706 This means that it must be considered that the CPU will execute its instruction
1707 stream in any order it feels like - or even in parallel - provided that if an
1708 instruction in the stream depends on the an earlier instruction, then that
1709 earlier instruction must be sufficiently complete[*] before the later
1710 instruction may proceed; in other words: provided that the appearance of
1711 causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.

Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
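
As an example of the compiler side of this, the compiler barrier described
earlier in this document, barrier(), pins down the compiler's ordering while
leaving the CPU free to reorder; the flag/data pair here is purely
illustrative:

        #include <linux/compiler.h>

        static int data;
        static int flag;

        static void publish(int v)
        {
                data = v;
                barrier();      /* the compiler may not hoist the flag store
                                 * above the data store... */
                flag = 1;       /* ...but the CPU still may; smp_wmb() would
                                 * be needed for inter-CPU ordering */
        }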

============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected
to a certain extent by the caches that lie between CPUs and memory, and by
the memory coherence system that maintains the consistency of state in the
system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        | CPU    |    | Memory |  :   | CPU    |    |           |    |        |
        | Core   |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        | CPU    |    | Memory |  :   | CPU    |    |           |--->| Device |
        | Core   |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        |        |    |        |  :   |        |    |           |    +--------+
        +--------+    +--------+  :   +--------+    +-----------+

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned since the cache coherency mechanisms will
migrate the cacheline over to the accessing CPU and propagate the effects
upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the
instructions generate load and store operations which then go into the queue
of memory accesses to be performed.  The core may place these in the queue in
any order it wishes, and continue execution until it is forced to wait for an
instruction to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends
on the properties of the memory window through which devices are accessed
and/or the use of any special device communication instructions the CPU may
have.

CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        | CPU 1  |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        | CPU 2  |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that
     cache to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write
barrier between them to guarantee that they will appear to reach that CPU's
caches in the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                         change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.
But now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually,
there's no guarantee that, without intervention, the order of update will be
the same as that committed on CPU 1.

To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache

This sort of problem can be encountered on DEC Alpha processors as they have
a split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the
various cachelets for normal memory accesses.  The semantics of the Alpha
remove the need for such coordination in the absence of memory barriers.
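
Rendered as C, the sequences tabulated above might look like the following
sketch.  The variable names mirror the tables; the header actually providing
the barrier macros varies by architecture, so the include shown is only
indicative:

        #include <asm/system.h> /* smp_wmb(), smp_read_barrier_depends() */

        static int u, v = 1;
        static int *p = &u;

        static void writer(void)
        {
                v = 2;
                smp_wmb();                      /* commit v before publishing p */
                p = &v;
        }

        static int reader(void)
        {
                int *q = p;
                smp_read_barrier_depends();     /* commit the coherency queue
                                                 * before dereferencing q */
                return *q;                      /* sees v == 2 if q == &v */
        }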

CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.
In such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part
of the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in a CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.
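
In practice, drivers rarely perform these flushes and invalidations by hand;
they are normally delegated to the DMA mapping API.  The following is only a
sketch: 'dev', 'buf' and 'len' are placeholders and error handling is
omitted:

        #include <linux/dma-mapping.h>

        static void send_to_device(struct device *dev, void *buf, size_t len)
        {
                dma_addr_t handle;

                /* Mapping for DMA_TO_DEVICE flushes any dirty cachelines
                 * covering buf, so the device does not read stale RAM;
                 * DMA_FROM_DEVICE would instead invalidate them. */
                handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

                /* ... program the device to DMA from 'handle' ... */

                dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
        }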

CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
to it than those of the window directed at ordinary RAM.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO
accesses may, in effect, overtake accesses to cached memory that were emitted
earlier.  A memory barrier isn't sufficient in such a case; rather, the cache
must be flushed between the cached memory write and the MMIO access if the
two are in any way dependent.

=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if a CPU is, for example,
given the following piece of code to execute:

        a = *A;
        *B = b;
        c = *C;
        d = *D;
        *E = e;

they would then expect that the CPU will complete the memory operation for
each instruction before moving on to the next one, leading to a definite
sequence of operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.

Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it
     prove to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and PCI
     devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the
     cache - there's no guarantee that the coherency management will be
     propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)

However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

        U = *A;
        *A = V;
        *A = W;
        X = *A;
        *A = Y;
        Z = *A;

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without a write barrier, it can be assumed that the effect of the
storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier, be reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.
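
If the reload is actually wanted - because something else may have modified
*A in the meantime - a compiler barrier forces the LOAD to be emitted.  A
minimal sketch, reusing the illustrative variables from above:

        #include <linux/compiler.h>

        extern int *A;

        static int store_then_reload(int Y)
        {
                int Z;

                *A = Y;
                barrier();      /* the compiler may no longer assume that *A
                                 * still holds Y */
                Z = *A;         /* a real LOAD is emitted; note the CPU's
                                 * ordering is still unconstrained */
                return Z;
        }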

AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary, as it
synchronises both caches with the memory coherence system, thus making it
seem like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.

==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                     Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access