Dynamic DMA mapping Guide
=========================

David S. Miller <davem@redhat.com>
Richard Henderson <rth@cygnus.com>
Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses. This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU. This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space. Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

The following API will work of course even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than
the bus specific DMA API (e.g. pci_dma_*).

First of all, you should make sure

        #include <linux/dma-mapping.h>

is in your driver. This file will obtain for you the definition of the
dma_addr_t (which can hold any valid DMA address for the platform)
type which should be used everywhere you hold a DMA (bus) address
returned from the DMA mapping functions.
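
For example, a driver might keep both addresses for a buffer in its
private state (a minimal sketch; the structure and field names are
illustrative, not part of any API):

        struct my_device_state {
                void *buffer;           /* CPU (virtual) address */
                dma_addr_t buffer_dma;  /* DMA (bus) address for the device */
                size_t buffer_len;
        };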

What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing limitations

Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits. For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has. It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask():

        int dma_set_mask(struct device *dev, u64 mask);

The query for consistent allocations is performed via a call to
dma_set_coherent_mask():

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports. It returns zero if your card can perform DMA properly on
the machine given the address mask you provided. In general, the
device struct of your device is embedded in the bus specific device
struct of your device. For example, a pointer to the device struct of
your PCI device is pdev->dev (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior. You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message when
it ends up performing either #2 or #3. In this manner, if a user of
your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

        if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device. The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail. The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing. Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

        int using_dac, consistent_using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
                dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
                dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

dma_set_coherent_mask() will always be able to set the same mask as,
or a smaller mask than, dma_set_mask(). However, for the rare case
that a device driver only uses consistent allocations, one would have
to check the return value from dma_set_coherent_mask().
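
A minimal sketch of such a check, for a hypothetical driver that only
performs consistent allocations:

        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
                printk(KERN_WARNING
                       "mydev: No suitable consistent DMA available.\n");
                goto ignore_this_device;
        }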

Finally, if your device can only drive the low 24-bits of
address you might do something like:

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                printk(KERN_WARNING
                       "mydev: 24-bit DMA addressing not available.\n");
                goto ignore_this_device;
        }

When dma_set_mask() is successful, and returns zero, the kernel saves
away this mask you have provided. The kernel will use this
information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
                       card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
                       card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space. However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers. The CPU may reorder stores to
             consistent memory just as it may normal memory. Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like:

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

  Also, on some platforms your driver may need to flush CPU write
  buffers in much the same way as it needs to flush write buffers
  found in PCI bridges (such as by reading a register's value
  after writing it; see the sketch at the end of this section).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
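
As an illustration of the write-buffer flushing mentioned above, a
posted MMIO write is commonly flushed by reading the register back.
A sketch, assuming an ioremap'ed register base; the register and
value names are illustrative:

        writel(DESC_KICK, ioaddr + CTRL_REG);
        (void) readl(ioaddr + CTRL_REG);        /* flush the posted write */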


Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable. Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.

dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
This function may not be called in interrupt context.
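
Putting this together, a minimal allocation sketch with error
checking (RING_BYTES and the error labels are illustrative):

        void *cpu_addr;
        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, RING_BYTES, &dma_handle, GFP_KERNEL);
        if (!cpu_addr)
                goto err_no_dma;

        /* ... use cpu_addr from the CPU, hand dma_handle to the device ... */

        dma_free_coherent(dev, RING_BYTES, cpu_addr, dma_handle);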

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for dma_alloc_coherent directly instead).

Allocate memory from a dma pool like this:

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling:

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
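
A minimal end-to-end sketch of the dma_pool interface (the pool name,
sizes and error labels are illustrative):

        struct dma_pool *pool;
        void *vaddr;
        dma_addr_t dma_handle;

        pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
        if (!pool)
                goto err_no_pool;

        vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
        if (!vaddr)
                goto err_no_mem;

        /* ... use vaddr from the CPU, hand dma_handle to the device ... */

        dma_pool_free(pool, vaddr, dma_handle);
        dma_pool_destroy(pool);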

DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

        DMA_BIDIRECTIONAL
        DMA_TO_DEVICE
        DMA_FROM_DEVICE
        DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging. One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (beyond the
potential platform-specific optimizations it enables) is for
debugging. Some platforms actually have a write permission boolean
which DMA mappings can be marked with, much like page protections in
the user program address space. Such platforms can and do report
errors in the kernel logs when the DMA controller hardware detects
violation of the permission setting.

Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.
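
For example, a sketch of mapping a SCSI command's data (assuming the
scsi_cmnd pointer is named cmd):

        int count = dma_map_sg(dev, scsi_sglist(cmd), scsi_sg_count(cmd),
                               cmd->sc_data_direction);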

For Networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
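
A sketch for the transmit side (skb is the socket buffer being sent;
the error label is illustrative):

        dma_addr_t mapping;

        mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, mapping))
                goto drop_packet;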

Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it:

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error. Not all DMA implementations support the
dma_mapping_error() interface, but it is good practice to call it
anyway: it invokes the generic mapping error check, so your code will
work correctly on all DMA implementations without depending on the
specifics of the underlying one. Using the returned address without
checking for errors could result in failures ranging from panics to
silent data corruption. The following examples show incorrect ways to
check for errors that make assumptions about the underlying DMA
implementation; they are applicable to dma_map_page() as well.

Incorrect example 1:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
                goto map_error;
        }

Incorrect example 2:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_handle == DMA_ERROR_CODE) {
                goto map_error;
        }

You should call dma_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

Using cpu pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be fewer than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
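
If you build the scatterlist yourself rather than receiving it from a
subsystem, initialize it before mapping. A sketch, assuming NENTS
equal-sized buffers in an illustrative buffers[] array:

        struct scatterlist sglist[NENTS];
        int i, count;

        sg_init_table(sglist, NENTS);
        for (i = 0; i < NENTS; i++)
                sg_set_buf(&sglist[i], buffers[i], buffer_len);

        count = dma_map_sg(dev, sglist, NENTS, direction);
        if (count == 0)
                goto map_error_handling;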

To unmap a scatterlist, just call:

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
             the _same_ one you passed into the dma_map_sg call,
             it should _NOT_ be the 'count' value _returned_ from the
             dma_map_sg call.

Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per each BUS so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}, and after each DMA
transfer call either:

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data. But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt. Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
calls (dma_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0

- checking the returned dma_addr_t of dma_map_single and dma_map_page
  by using dma_mapping_error():

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmap pages that are already mapped, when a mapping error occurs in
  the middle of a multiple-page mapping attempt. These examples are
  applicable to dma_map_page() as well.

  Example 1:
        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

  Example 2: (if buffers are allocated in a loop, unmap all mapped buffers
  when a mapping error is detected in the middle)

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
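
A sketch of that failure path in the transmit hook (the function name
is illustrative; the DMA device is taken from the net_device's
parent):

        static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                         struct net_device *ndev)
        {
                struct device *dev = ndev->dev.parent;
                dma_addr_t mapping;

                mapping = dma_map_single(dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(dev, mapping)) {
                        dev_kfree_skb(skb);     /* drop the packet */
                        return NETDEV_TX_OK;    /* do not requeue it */
                }

                /* ... hand mapping to the hardware ... */
                return NETDEV_TX_OK;
        }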

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.

Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after:

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set to set these values.
   Example, before:

        ringp->mapping = FOO;
        ringp->len = BAR;

   after:

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len} to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after:

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent the architecture specific struct scatterlist; just use
   <asm-generic/scatterlist.h>. You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
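
   On ARM, for example, the definition boils down to the following
   (a simplified excerpt; see the header itself for the full context):

        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES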

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h. It's a
   library to support the DMA API with multiple types of IOMMUs. Lots
   of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it. Choose one to see how it can be used. If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>