What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations. Although RCU is actually quite simple once you understand
it, getting there can sometimes be a challenge. Part of the problem is
that most of the past descriptions of RCU have been written with the
mistaken assumption that there is "one true way" to describe RCU.
Instead, the experience has been that different people must take
different paths to arrive at an understanding of RCU. This document
provides several different paths, as follows:

1.  RCU OVERVIEW
2.  WHAT IS RCU'S CORE API?
3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.  ANALOGY WITH READER-WRITER LOCKING
7.  FULL LIST OF RCU APIs
8.  ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point. People who prefer to start with an API that they can then
experiment with should focus on Section 2. People who prefer to start
with example uses should focus on Sections 3 and 4. People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code. People who reason best by analogy should
focus on Section 6. Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning. If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway. ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases. The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference. The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase. Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish. Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.  Remove pointers to a data structure, so that subsequent
    readers cannot gain a reference to it.

b.  Wait for all previous readers to complete their RCU read-side
    critical sections.

c.  At this point, there cannot be any readers who hold references
    to the data structure, so it now may safely be reclaimed
    (e.g., kfree()d).

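The following sketch illustrates this sequence; gbl_ptr, update_lock,
and struct item are hypothetical names invented for illustration, and
Section 3 walks through a complete example:

	struct item *old_item;

	/* (a) Remove: unpublish the item so new readers cannot find it. */
	spin_lock(&update_lock);
	old_item = gbl_ptr;
	rcu_assign_pointer(gbl_ptr, NULL);
	spin_unlock(&update_lock);

	/* (b) Wait for all pre-existing RCU read-side critical sections. */
	synchronize_rcu();

	/* (c) Reclaim: no reader can still hold a reference. */
	kfree(old_item);
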
Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all. In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers. In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers. Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache). Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.  rcu_read_lock()
b.  rcu_read_unlock()
c.  synchronize_rcu() / call_rcu()
d.  rcu_assign_pointer()
e.  rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

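For example, here is a sketch of how synchronize_rcu() can be built on
top of call_rcu(), blocking on a completion that the callback signals
after the grace period (the kernel's own implementation follows this
general pattern, though details vary):

	struct rcu_synchronize {
		struct rcu_head head;
		struct completion completion;
	};

	/* Runs after a grace period; wakes up the blocked caller. */
	static void wakeme_after_rcu(struct rcu_head *head)
	{
		struct rcu_synchronize *rcu;

		rcu = container_of(head, struct rcu_synchronize, head);
		complete(&rcu->completion);
	}

	void synchronize_rcu(void)
	{
		struct rcu_synchronize rcu;

		init_completion(&rcu.completion);
		/* Callback will run after all current readers finish. */
		call_rcu(&rcu.head, wakeme_after_rcu);
		wait_for_completion(&rcu.completion);
	}
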
The five core RCU APIs are described below; the other 18 will be
enumerated later. See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()

	void rcu_read_lock(void);

	Used by a reader to inform the reclaimer that the reader is
	entering an RCU read-side critical section. It is illegal
	to block while in an RCU read-side critical section, though
	kernels built with CONFIG_PREEMPT_RCU can preempt RCU read-side
	critical sections. Any RCU-protected data structure accessed
	during an RCU read-side critical section is guaranteed to remain
	unreclaimed for the full duration of that critical section.
	Reference counts may be used in conjunction with RCU to maintain
	longer-term references to data structures.

rcu_read_unlock()

	void rcu_read_unlock(void);

	Used by a reader to inform the reclaimer that the reader is
	exiting an RCU read-side critical section. Note that RCU
	read-side critical sections may be nested and/or overlapping.

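	For example, a reader might bracket its accesses as follows.
	This is only a sketch: gbl_foo and do_something_with() are
	hypothetical, and rcu_dereference() is described below.

		struct foo *p;

		rcu_read_lock();
		p = rcu_dereference(gbl_foo);
		if (p != NULL)
			do_something_with(p->a);
		rcu_read_unlock();
		/* p must not be dereferenced here! */
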
synchronize_rcu()

	void synchronize_rcu(void);

	Marks the end of updater code and the beginning of reclaimer
	code. It does this by blocking until all pre-existing RCU
	read-side critical sections on all CPUs have completed.
	Note that synchronize_rcu() will -not- necessarily wait for
	any subsequent RCU read-side critical sections to complete.
	For example, consider the following sequence of events:

	     CPU 0                 CPU 1                  CPU 2
	 -----------------  -------------------------  ---------------
	 1.  rcu_read_lock()
	 2.                 enters synchronize_rcu()
	 3.                                             rcu_read_lock()
	 4.  rcu_read_unlock()
	 5.                 exits synchronize_rcu()
	 6.                                             rcu_read_unlock()

	To reiterate, synchronize_rcu() waits only for ongoing RCU
	read-side critical sections to complete, not necessarily for
	any that begin after synchronize_rcu() is invoked.

	Of course, synchronize_rcu() does not necessarily return
	-immediately- after the last pre-existing RCU read-side critical
	section completes. For one thing, there might well be scheduling
	delays. For another thing, many RCU implementations process
	requests in batches in order to improve efficiencies, which can
	further delay synchronize_rcu().

	Since synchronize_rcu() is the API that must figure out when
	readers are done, its implementation is key to RCU. For RCU
	to be useful in all but the most read-intensive situations,
	synchronize_rcu()'s overhead must also be quite small.

	The call_rcu() API is a callback form of synchronize_rcu(),
	and is described in more detail in a later section. Instead of
	blocking, it registers a function and argument which are invoked
	after all ongoing RCU read-side critical sections have completed.
	This callback variant is particularly useful in situations where
	it is illegal to block or where update-side performance is
	critically important.

	However, the call_rcu() API should not be used lightly, as use
	of the synchronize_rcu() API generally results in simpler code.
	In addition, the synchronize_rcu() API has the nice property
	of automatically limiting update rate should grace periods
	be delayed. This property results in system resilience in the
	face of denial-of-service attacks. Code using call_rcu() should
	limit update rate in order to gain this same sort of resilience.
	See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

	typeof(p) rcu_assign_pointer(p, typeof(p) v);

	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
	would be cool to be able to declare a function in this manner.
	(Compiler experts will no doubt disagree.)

	The updater uses this function to assign a new value to an
	RCU-protected pointer, in order to safely communicate the change
	in value from the updater to the reader. This function returns
	the new value, and also executes any memory-barrier instructions
	required for a given CPU architecture.

	Perhaps just as important, it serves to document (1) which
	pointers are protected by RCU and (2) the point at which a
	given structure becomes accessible to other CPUs. That said,
	rcu_assign_pointer() is most frequently used indirectly, via
	the _rcu list-manipulation primitives such as list_add_rcu().

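	For example, an updater might publish a newly initialized
	structure as follows. This is a sketch: struct foo and gbl_foo
	are defined in the Section 3 example, and the point is that all
	initialization precedes the rcu_assign_pointer().

		struct foo *p;

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		if (p == NULL)
			return;		/* error handling elided */
		p->a = 1;
		p->b = 2;
		p->c = 3;
		/* Publish: readers now see a fully initialized structure. */
		rcu_assign_pointer(gbl_foo, p);
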
rcu_dereference()

	typeof(p) rcu_dereference(p);

	Like rcu_assign_pointer(), rcu_dereference() must be implemented
	as a macro.

	The reader uses rcu_dereference() to fetch an RCU-protected
	pointer, which returns a value that may then be safely
	dereferenced. Note that rcu_dereference() does not actually
	dereference the pointer; instead, it protects the pointer for
	later dereferencing. It also executes any needed memory-barrier
	instructions for a given CPU architecture. Currently, only Alpha
	needs memory barriers within rcu_dereference() -- on other CPUs,
	it compiles to nothing, not even a compiler directive.

	Common coding practice uses rcu_dereference() to copy an
	RCU-protected pointer to a local variable, then dereferences
	this local variable, for example as follows:

		p = rcu_dereference(head.next);
		return p->data;

	However, in this case, one could just as easily combine these
	into one statement:

		return rcu_dereference(head.next)->data;

	If you are going to be fetching multiple fields from the
	RCU-protected structure, using the local variable is of
	course preferred. Repeated rcu_dereference() calls look
	ugly and incur unnecessary overhead on Alpha CPUs.

	Note that the value returned by rcu_dereference() is valid
	only within the enclosing RCU read-side critical section.
	For example, the following is -not- legal:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		rcu_read_unlock();
		x = p->address;
		rcu_read_lock();
		y = p->data;
		rcu_read_unlock();

	Holding a reference from one RCU read-side critical section
	to another is just as illegal as holding a reference from
	one lock-based critical section to another! Similarly,
	using a reference outside of the critical section in which
	it was acquired is just as illegal as doing so with normal
	locking.

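	If a reference must be used beyond the critical section in
	which it was acquired, a reference count can be taken inside
	that critical section, as mentioned under rcu_read_lock()
	above. The following is only a sketch, assuming a hypothetical
	atomic refcnt field, a hypothetical free_p() release function,
	and an update side that frees the structure only after both a
	grace period has elapsed and the count has reached zero:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		atomic_inc(&p->refcnt);	/* legitimizes later use of p */
		rcu_read_unlock();

		x = p->address;		/* safe: we hold a counted reference */
		y = p->data;

		if (atomic_dec_and_test(&p->refcnt))
			free_p(p);	/* hypothetical release function */
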
	As with rcu_assign_pointer(), an important function of
	rcu_dereference() is to document which pointers are protected by
	RCU, in particular, flagging a pointer that is subject to changing
	at any time, including immediately after the rcu_dereference().
	And, again like rcu_assign_pointer(), rcu_dereference() is
	typically used indirectly, via the _rcu list-manipulation
	primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


        rcu_assign_pointer()
                               +--------+
    +------------------------->| reader |---------+
    |                          +--------+         |
    |                              |              |
    |                              |              | Protect:
    |                              |              |   rcu_read_lock()
    |                              |              |   rcu_read_unlock()
    |    rcu_dereference()         |              |
    +---------+                    |              |
    | updater |<-------------------+              |
    +---------+                                   V
        |                                   +-----------+
        +---------------------------------->| reclaimer |
                                            +-----------+
                                              Defer:
                                              synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked. Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.


There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

    Defer                   Protect

a.  synchronize_rcu()       rcu_read_lock() / rcu_read_unlock()
    call_rcu()

b.  call_rcu_bh()           rcu_read_lock_bh() / rcu_read_unlock_bh()

c.  synchronize_sched()     preempt_disable() / preempt_enable()
                            local_irq_save() / local_irq_restore()
                            hardirq enter / hardirq exit
                            NMI enter / NMI exit

These three mechanisms are used as follows:

a.  RCU applied to normal data structures.

b.  RCU applied to networking data structures that may be subjected
    to remote denial-of-service attacks.

c.  RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a). The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.

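For example, a reader using mechanism (b) might look like the following
sketch, where gbl_net_entry and process_entry() are hypothetical names;
the corresponding updater would use call_rcu_bh() to defer reclamation:

	rcu_read_lock_bh();
	p = rcu_dereference(gbl_net_entry);
	if (p != NULL)
		process_entry(p);	/* must not block */
	rcu_read_unlock_bh();
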

3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure. More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

	struct foo {
		int a;
		char b;
		long c;
	};
	DEFINE_SPINLOCK(foo_mutex);

	struct foo *gbl_foo;

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a". Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses synchronize_rcu() to ensure that any readers that might
	 * have references to the old structure complete before freeing
	 * the old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = gbl_foo;
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		synchronize_rcu();
		kfree(old_fp);
	}

	/*
	 * Return the value of field "a" of the current gbl_foo
	 * structure. Use rcu_read_lock() and rcu_read_unlock()
	 * to ensure that the structure does not get deleted out
	 * from under us, and use rcu_dereference() to ensure that
	 * we see the initialized version of the structure (important
	 * for DEC Alpha and for people reading the code).
	 */
	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

So, to sum up:

o   Use rcu_read_lock() and rcu_read_unlock() to guard RCU
    read-side critical sections.

o   Within an RCU read-side critical section, use rcu_dereference()
    to dereference RCU-protected pointers.

o   Use some solid scheme (such as locks or semaphores) to
    keep concurrent updates from interfering with each other.

o   Use rcu_assign_pointer() to update an RCU-protected pointer.
    This primitive protects concurrent readers from the updater,
    -not- concurrent updates from each other! You therefore still
    need to use locking (or something similar) to keep concurrent
    rcu_assign_pointer() primitives from interfering with each other.

o   Use synchronize_rcu() -after- removing a data element from an
    RCU-protected data structure, but -before- reclaiming/freeing
    the data element, in order to wait for the completion of all
    RCU read-side critical sections that might be referencing that
    data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

	void call_rcu(struct rcu_head *head,
		      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block. The foo struct needs to
have an rcu_head structure added, perhaps as follows:

	struct foo {
		int a;
		char b;
		long c;
		struct rcu_head rcu;
	};

The foo_update_a() function might then be written as follows:

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a". Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses call_rcu() to ensure that any readers that might have
	 * references to the old structure complete before freeing the
	 * old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = gbl_foo;
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		call_rcu(&old_fp->rcu, foo_reclaim);
	}

The foo_reclaim() function might appear as follows:

	void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		kfree(fp);
	}

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

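Conceptually, container_of() just subtracts the field's offset from the
field's address. A simplified sketch of its definition (the kernel's
actual macro adds type checking):

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))
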
The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element. It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o   Use call_rcu() -after- removing a data element from an
    RCU-protected data structure in order to register a callback
    function that will be invoked after the completion of all RCU
    read-side critical sections that might be referencing that
    data item.

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel. This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU. Both are way too simple for real-world use,
lacking both functionality and performance. However, they are useful
in getting a feel for how RCU works. See kernel/rcupdate.c for a
production-quality implementation, and see:

	http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation. The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives. Its overhead makes it a non-starter for
real-life use, as does its lack of scalability. It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

	static DEFINE_RWLOCK(rcu_gp_mutex);

	void rcu_read_lock(void)
	{
		read_lock(&rcu_gp_mutex);
	}

	void rcu_read_unlock(void)
	{
		read_unlock(&rcu_gp_mutex);
	}

	void synchronize_rcu(void)
	{
		write_lock(&rcu_gp_mutex);
		write_unlock(&rcu_gp_mutex);
	}

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much. But here they are anyway. And whatever you do, don't
forget about them when submitting patches making use of RCU!]

	#define rcu_assign_pointer(p, v) ({ \
		smp_wmb(); \
		(p) = (v); \
	})

	#define rcu_dereference(p) ({ \
		typeof(p) _________p1 = p; \
		smp_read_barrier_depends(); \
		(_________p1); \
	})


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock. The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it. This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired. Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU). The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU". It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

	void rcu_read_lock(void) { }

	void rcu_read_unlock(void) { }

	void synchronize_rcu(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			run_on(cpu);
	}

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn. The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive. Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

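One way run_on() might be implemented, assuming the cpumask-based
in-kernel sched_setaffinity() API of that era (details vary across
kernel versions):

	void run_on(int cpu)
	{
		cpumask_t mask;

		cpus_clear(mask);
		cpu_set(cpu, mask);
		/* Pin the current task to "cpu", forcing a context
		 * switch onto that CPU. */
		sched_setaffinity(current->pid, mask);
	}
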
So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section. Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking. The following unified
diff shows how closely related RCU and reader-writer locking can be.

	@@ -13,15 +14,15 @@
		struct list_head *lp;
		struct el *p;

	-	read_lock();
	-	list_for_each_entry(p, head, lp) {
	+	rcu_read_lock();
	+	list_for_each_entry_rcu(p, head, lp) {
			if (p->key == key) {
				*result = p->data;
	-			read_unlock();
	+			rcu_read_unlock();
				return 1;
			}
		}
	-	read_unlock();
	+	rcu_read_unlock();
		return 0;
	}

	@@ -29,15 +30,16 @@
	{
		struct el *p;

	-	write_lock(&listmutex);
	+	spin_lock(&listmutex);
		list_for_each_entry(p, head, lp) {
			if (p->key == key) {
	-			list_del(&p->list);
	-			write_unlock(&listmutex);
	+			list_del_rcu(&p->list);
	+			spin_unlock(&listmutex);
	+			synchronize_rcu();
				kfree(p);
				return 1;
			}
		}
	-	write_unlock(&listmutex);
	+	spin_unlock(&listmutex);
		return 0;
	}

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small. Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves
from a reader-writer lock to a simple spinlock, the list_del() becomes
list_del_rcu(), and a synchronize_rcu() precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently. In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block. If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu(), that can be used in
place of synchronize_rcu().

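For example, a non-blocking delete() might look like the following
sketch, assuming that struct el gains a struct rcu_head field named
"rcu" and that el_reclaim() is a hypothetical reclaim callback:

	static void el_reclaim(struct rcu_head *rp)
	{
		struct el *p = container_of(rp, struct el, rcu);

		kfree(p);
	}

	int delete(long key)
	{
		struct el *p;

		spin_lock(&listmutex);
		list_for_each_entry(p, head, lp) {
			if (p->key == key) {
				list_del_rcu(&p->list);
				spin_unlock(&listmutex);
				/* Reclamation is deferred; no blocking. */
				call_rcu(&p->rcu, el_reclaim);
				return 1;
			}
		}
		spin_unlock(&listmutex);
		return 0;
	}
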

7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.

Markers for RCU read-side critical sections:

	rcu_read_lock
	rcu_read_unlock
	rcu_read_lock_bh
	rcu_read_unlock_bh

RCU pointer/list traversal:

	rcu_dereference
	list_for_each_rcu		(to be deprecated in favor of
					 list_for_each_entry_rcu)
	list_for_each_entry_rcu
	list_for_each_continue_rcu	(to be deprecated in favor of new
					 list_for_each_entry_continue_rcu)
	hlist_for_each_entry_rcu

RCU pointer update:

	rcu_assign_pointer
	list_add_rcu
	list_add_tail_rcu
	list_del_rcu
	list_replace_rcu
	hlist_del_rcu
	hlist_add_head_rcu

RCU grace period:

	synchronize_net
	synchronize_sched
	synchronize_rcu
	call_rcu
	call_rcu_bh

See the comment headers in the source code (or the docbook generated
from them) for more information.


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.  CPU 0 acquires some unrelated lock, call it
                    "problematic_lock", disabling irq via
                    spin_lock_irqsave().

                2.  CPU 1 enters synchronize_rcu(), write-acquiring
                    rcu_gp_mutex.

                3.  CPU 0 enters rcu_read_lock(), but must wait
                    because CPU 1 holds rcu_gp_mutex.

                4.  CPU 1 is interrupted, and the irq handler
                    attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks. In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu(). To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex. Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet). The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table. Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach. Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex. But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected. ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections. It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency? Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory). In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted. Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something. And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.