		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

	This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

	The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { volatile int counter; } atomic_t;

	The first operations to implement for atomic_t's are the
initializers and plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the current value of the counter.

Now, we move on to the actual atomic operation interfaces.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

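As a simple illustration (the structure, fields, and function below are
hypothetical, not existing kernel interfaces), such counters are commonly
used for statistics that are updated concurrently:

struct example_stats {
	atomic_t rx_packets;
	atomic_t rx_bytes;
};

static void example_account_rx(struct example_stats *stats, int len)
{
	atomic_inc(&stats->rx_packets);		/* one more packet seen */
	atomic_add(len, &stats->rx_bytes);	/* account its length */
}
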
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

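A typical use of the value-returning forms, sketched here with hypothetical
names, is handing out unique sequence numbers:

static atomic_t example_next_id = ATOMIC_INIT(0);

static int example_alloc_id(void)
{
	/* Two cpus can never observe the same return value. */
	return atomic_inc_return(&example_next_id);
}
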
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation,
as described above.

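For example, atomic_dec_and_test() is the natural building block for a
reference count release path; the structure below is hypothetical:

struct example_obj {
	atomic_t refcnt;
	/* ... */
};

static void example_put(struct example_obj *p)
{
	/* Free the object only when the last reference is dropped. */
	if (atomic_dec_and_test(&p->refcnt))
		kfree(p);
}
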
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

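As a small usage sketch (assuming a hypothetical atomic_t "owner" field
that holds 0 while the object is unclaimed):

	/* Claim the object for id "me", but only if nobody else has
	 * claimed it yet; atomic_cmpxchg() returns the value that was
	 * actually found in obj->owner.
	 */
	if (atomic_cmpxchg(&obj->owner, 0, me) == 0)
		/* we won the race and now own the object */;
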
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation.

atomic_inc_not_zero(v) is also provided; it is equivalent to
atomic_add_unless(v, 1, 0).

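To illustrate the semantics, here is one way such an interface could be
built on top of atomic_cmpxchg().  This is only a generic sketch, not any
particular architecture's implementation:

static inline int example_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	while (c != u) {
		/* Try to move the counter from c to c + a; on failure
		 * retry with the value some other cpu stored meanwhile.
		 */
		old = atomic_cmpxchg(v, c, c + a);
		if (old == c)
			break;
		c = old;
	}
	return c != u;	/* non zero if the add was performed */
}
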
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces is
defined which accomplish this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

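For example, smp_mb__after_atomic_inc() could be used when the counter
increment must be globally visible before a subsequent load; the fields
and helper below are purely illustrative:

	atomic_inc(&obj->pending);
	smp_mb__after_atomic_inc();
	if (obj->need_flush)
		example_flush(obj);
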
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj)
{
	obj->active = 1;
	list_add(&obj->list, &global_list);
}

static void obj_list_del(struct obj *obj)
{
	list_del(&obj->list);
	obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
	BUG_ON(obj->active);
	kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
	if (!list_empty(head)) {
		struct obj *obj;

		obj = list_entry(head->next, struct obj, list);
		atomic_inc(&obj->refcnt);
		return obj;
	}
	return NULL;
}

void obj_poke(void)
{
	struct obj *obj;

	spin_lock(&global_list_lock);
	obj = obj_list_peek(&global_list);
	spin_unlock(&global_list_lock);

	if (obj) {
		obj->ops->poke(obj);
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
}

void obj_timeout(struct obj *obj)
{
	spin_lock(&global_list_lock);
	obj_list_del(obj);
	spin_unlock(&global_list_lock);

	if (atomic_dec_and_test(&obj->refcnt))
		obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the
 generic neighbour discovery code of the networking.  Olaf Kirch
 found a bug wrt. memory barriers in kfree_skb() that exposed
 the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
					atomic_dec_and_test()
					... refcnt drops to 1 ...
	atomic_dec_and_test()
	... refcount drops to 0 ...
	obj_destroy()
	BUG() triggers since obj->active
	still seen as one
					obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are a minimum of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

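As a brief illustration (the flag name and the obj->flags field are
hypothetical; the field must be an "unsigned long"):

	#define OBJ_DIRTY	0	/* a bit number, not a bit mask */

	set_bit(OBJ_DIRTY, &obj->flags);	/* mark the object dirty */
	/* ... */
	clear_bit(OBJ_DIRTY, &obj->flags);	/* and clean again */
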
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

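To make the pitfall concrete, here is a sketch of what an implementation's
return statement should and should not look like (the variable names are
illustrative):

	/* WRONG: for nr = 40 on a 64-bit machine, "old & mask" is
	 * 0x10000000000; a caller that truncates the result to an
	 * "int" sees 0 even though the bit was set.
	 */
	return old & mask;

	/* Correct: collapse the result to a strict 0/1 boolean. */
	return (old & mask) != 0;
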
One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

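For example, if every writer of a bitmap already serializes on some
higher-level spinlock (the names below are hypothetical), the cheaper
non-atomic variants are sufficient:

	spin_lock(&map->lock);
	__set_bit(nr, map->bits);	/* no atomic read-modify-write needed */
	spin_unlock(&map->lock);
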
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

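For instance, xchg() atomically stores a new value and returns the
previous one, which makes it convenient for stealing a pointer; the names
here are hypothetical:

	/* Atomically take the pending item, leaving NULL behind. */
	struct example_item *it = xchg(&queue->pending, NULL);

	if (it)
		example_process(it);
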
Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release, as illustrated in the sketch after this list.

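From the point of view of code using the lock, the two rules amount to
the following (the data being protected is illustrative):

	spin_lock(&lock);	/* rule 1: the acquisition is visible
				 * before the update below */
	shared_counter++;
	spin_unlock(&lock);	/* rule 2: the update above is visible
				 * before the release of the lock */
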
Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
	long old, new, ret;

	while (1) {
		old = *counter;
		new = old + 1;

		ret = cas(counter, old, new);
		if (ret == old)
			break;
	}
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	long old, new, ret;
	int went_to_zero;

	went_to_zero = 0;
	while (1) {
		old = atomic_read(atomic);
		new = old - 1;
		if (new == 0) {
			went_to_zero = 1;
			spin_lock(lock);
		}
		ret = cas(atomic, old, new);
		if (ret == old)
			break;
		if (went_to_zero) {
			spin_unlock(lock);
			went_to_zero = 0;
		}
	}

	return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.