==================================
Cache and TLB Flushing Under Linux
==================================

:Author: David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem. It enumerates each interface, describes
its intended purpose, and the side effects expected after the
interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor. The
SMP cases are a simple extension: just extend the definition such
that the side effect for a particular interface occurs on all
processors in the system. Don't let this scare you into thinking
SMP cache/tlb flushing must be inefficient; this is in fact an area
where many optimizations are possible. For example, if it can be
proven that a user address space has never executed on a cpu (see
mm_cpumask()), one need not perform a flush for this address space
on that cpu.
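
A minimal sketch of the mm_cpumask() optimization just described;
ipi_flush_tlb_mm() and local_flush_tlb_mm() are illustrative names a
port would provide, not interfaces this document defines::

	#include <linux/smp.h>
	#include <linux/mm_types.h>

	/* Runs on each cpu that has ever used 'mm'. */
	static void ipi_flush_tlb_mm(void *info)
	{
		struct mm_struct *mm = info;

		local_flush_tlb_mm(mm);	/* port-provided local flush */
	}

	static void smp_flush_tlb_mm(struct mm_struct *mm)
	{
		/* cpus absent from mm_cpumask(mm) never ran this
		 * address space, so they need no flush at all. */
		on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
	}
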
First, the TLB flushing interfaces, since they are the simplest. The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables. Meaning that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) ``void flush_tlb_all(void)``

	The most severe flush of all. After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) ``void flush_tlb_mm(struct mm_struct *mm)``

	This interface flushes an entire user address space from
	the TLB. After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu. That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork and exec.

3) ``void flush_tlb_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB. After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu. That is, after
	running, there will be no entries in the TLB for 'mm' for
	virtual addresses in the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.

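	One plausible shape for such a method, sketched here as an
	assumption rather than a reference implementation (the
	threshold is an illustrative tuning knob)::

		#define FLUSH_RANGE_PAGE_LIMIT	32

		static void example_flush_tlb_range(struct vm_area_struct *vma,
						    unsigned long start,
						    unsigned long end)
		{
			/* Past some size it is cheaper to drop the whole
			 * address space than to walk the range. */
			if ((end - start) >> PAGE_SHIFT > FLUSH_RANGE_PAGE_LIMIT) {
				flush_tlb_mm(vma->vm_mm);
				return;
			}

			for (; start < end; start += PAGE_SIZE)
				flush_tlb_page(vma, start);
		}
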
4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB. The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm. Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu. That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'addr'.

	This is used primarily during fault processing.

5) ``void update_mmu_cache(struct vm_area_struct *vma,
   unsigned long address, pte_t *ptep)``

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	now exists at virtual address "address" for address space
	"vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.

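	A hedged sketch of that pre-load idea; tlb_preload() is a
	hypothetical port primitive, not an interface this document
	defines::

		static void example_update_mmu_cache(struct vm_area_struct *vma,
						     unsigned long address,
						     pte_t *ptep)
		{
			pte_t pte = *ptep;

			if (!pte_present(pte))
				return;

			/* Install the translation so that returning from
			 * the fault does not immediately take a TLB miss
			 * on 'address'. */
			tlb_preload(vma->vm_mm, address, pte);
		}
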
6) ``void tlb_migrate_finish(struct mm_struct *mm)``

	This interface is called at the end of an explicit
	process migration. This interface provides a hook
	to allow a platform to update TLB or context-specific
	information for the address space.

	The ia64 sn2 platform is one example of a platform
	that uses this interface.

Next, we have the cache flushing interfaces. In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

	1) flush_cache_mm(mm);
	   change_all_page_tables_of(mm);
	   flush_tlb_mm(mm);

	2) flush_cache_range(vma, start, end);
	   change_range_of_page_tables(mm, start, end);
	   flush_tlb_range(vma, start, end);

	3) flush_cache_page(vma, addr, pfn);
	   set_pte(pte_pointer, new_pte_val);
	   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache. The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu. Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed. So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) ``void flush_cache_mm(struct mm_struct *mm)``

	This interface flushes an entire user address space from
	the caches. That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during exit and exec.

2) ``void flush_cache_dup_mm(struct mm_struct *mm)``

	This interface flushes an entire user address space from
	the caches. That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during fork.

	This option is separate from flush_cache_mm to allow some
	optimizations for VIPT caches.

3) ``void flush_cache_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

	Here we are flushing a specific range of (user) virtual
	addresses from the cache. After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``

	This time we need to remove a PAGE_SIZE sized range
	from the cache. The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm. Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to. It is this mapping which should be removed from
	the cache.

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'addr' which translates
	to 'pfn'.

	This is used primarily during fault processing.

5) ``void flush_cache_kmaps(void)``

	This routine need only be implemented if the platform utilizes
	highmem. It will be called right before all of the kmaps
	are invalidated.

	After running, there will be no entries in the cache for
	the kernel virtual address range PKMAP_ADDR(0) to
	PKMAP_ADDR(LAST_PKMAP).

	This routine should be implemented in asm/highmem.h.

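	A minimal sketch of an asm/highmem.h definition, assuming the
	port simply falls back to a full cache flush (a finer-grained
	port could instead flush only the pkmap window)::

		#define flush_cache_kmaps()	flush_cache_all()
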
6) ``void flush_cache_vmap(unsigned long start, unsigned long end)``
   ``void flush_cache_vunmap(unsigned long start, unsigned long end)``

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache. After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after map_vm_area()
	has installed the page table entries. The second is invoked
	before unmap_kernel_range() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

239
240If your D-cache has this problem, first define asm/shmparam.h SHMLBA
241properly, it should essentially be the size of your virtually
242addressed D-cache (or if the size is variable, the largest possible
243size). This setting will force the SYSv IPC layer to only allow user
244processes to mmap shared memory at address which are a multiple of
245this value.
246
fdefdbca
MCC
247.. note::
248
249 This does not fix shared mmaps, check out the sparc64 port for
250 one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
1da177e4
LT
251
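A sketch of the idea, under the assumption of a 16KB virtually
indexed D-cache with 4KB pages; addresses_alias() is illustrative
and takes page-aligned user addresses::

	#define SHMLBA	(4 * PAGE_SIZE)	/* size of the virtual D-cache */

	/* Two page-aligned mappings of the same physical page can
	 * land in different cache lines (i.e. alias) only if they
	 * differ in the index bits below SHMLBA. */
	static inline bool addresses_alias(unsigned long addr1,
					   unsigned long addr2)
	{
		return ((addr1 ^ addr2) & (SHMLBA - 1)) != 0;
	}
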
Next, you have to solve the D-cache aliasing issue for all
other cases. Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET. So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)``
  ``void clear_user_page(void *to, unsigned long addr, struct page *page)``

	These two routines store data in user anonymous or COW
	pages. They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy. The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page. Sparc64,
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.

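	A minimal sketch of that non-aliasing case, using the generic
	clear_page()/copy_page() helpers; the example_ prefix marks
	these as illustrations, not the required names::

		static void example_clear_user_page(void *to, unsigned long addr,
						    struct page *page)
		{
			clear_page(to);
		}

		static void example_copy_user_page(void *to, void *from,
						   unsigned long addr,
						   struct page *page)
		{
			copy_page(to, from);
		}
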
  ``void flush_dcache_page(struct page *page)``

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	.. note::

	  This routine need only be called for page cache pages
	  which can potentially ever be mapped into the address
	  space of a user process. So for example, VFS layer code
	  handling vfs symlinks in the page cache need not call
	  this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page. It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important; if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private". The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently. It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page. See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree, just mark the architecture
	private page flag bit. Later, in update_mmu_cache(), a check is
	made of this flag bit, and if set the flush is done and the flag
	bit is cleared.

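	In code, the deferral might look like this sketch (an
	assumption; PG_dcache_dirty and __flush_dcache_page_impl()
	are hypothetical port-local names for PG_arch_1 and the real
	low-level flush)::

		#define PG_dcache_dirty	PG_arch_1

		void example_flush_dcache_page(struct page *page)
		{
			/* No user mappings yet: just remember that the
			 * D-cache is dirty and defer the actual flush. */
			if (page->mapping && !mapping_mapped(page->mapping)) {
				set_bit(PG_dcache_dirty, &page->flags);
				return;
			}

			__flush_dcache_page_impl(page);
		}

		/* Called from update_mmu_cache() when a user mapping
		 * is finally established. */
		static void example_flush_deferred(struct page *page)
		{
			if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
				__flush_dcache_page_impl(page);
		}
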
	.. important::

	  It is often important, if you defer the flush,
	  that the actual flush occurs on the same CPU
	  that did the stores into the page to make it
	  dirty. Again, see sparc64 for examples of how
	  to deal with this.

  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``
  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``

	When the kernel needs to copy arbitrary data in and out
	of arbitrary user pages (e.g. for ptrace()) it will use
	these two routines.

	Any necessary cache flushing or other coherency operations
	that need to occur should happen here. If the processor's
	instruction cache does not snoop cpu stores, it is very
	likely that you will need to flush the instruction cache
	for copy_to_user_page().

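	A sketch of copy_to_user_page() for a port whose I-cache does
	not snoop stores; the *_impl() helpers are hypothetical port
	primitives::

		static void example_copy_to_user_page(struct vm_area_struct *vma,
						      struct page *page,
						      unsigned long user_vaddr,
						      void *dst, void *src, int len)
		{
			memcpy(dst, src, len);

			/* Push the new data out of the D-cache, then
			 * drop any stale instructions if this mapping
			 * can be executed from. */
			flush_dcache_range_impl((unsigned long)dst, len);
			if (vma->vm_flags & VM_EXEC)
				invalidate_icache_range_impl(user_vaddr, len);
		}
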
  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
  unsigned long vmaddr)``

	When the kernel needs to access the contents of an anonymous
	page, it calls this function (currently only
	get_user_pages()). Note: flush_dcache_page() deliberately
	doesn't work for an anonymous page. The default
	implementation is a nop (and should remain so for all coherent
	architectures). For incoherent architectures, it should flush
	the cache of the page at vmaddr.

  ``void flush_kernel_dcache_page(struct page *page)``

	When the kernel needs to modify a user page it has obtained
	with kmap, it calls this function after all modifications are
	complete (but before kunmapping it) to bring the underlying
	page up to date. It is assumed here that the user has no
	incoherent cached copies (i.e. the original page was obtained
	from a mechanism like get_user_pages()). The default
	implementation is a nop and should remain so on all coherent
	architectures. On incoherent architectures, this should flush
	the kernel cache for the page (using page_address(page)).

  ``void flush_icache_range(unsigned long start, unsigned long end)``

	When the kernel stores into addresses that it will execute
	out of (e.g. when loading modules), this function is called.

	If the icache does not snoop stores then this routine will need
	to flush it.

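	A sketch for such a non-snooping port; both *_impl() helpers
	are hypothetical port primitives::

		static void example_flush_icache_range(unsigned long start,
						       unsigned long end)
		{
			/* Make the new instructions visible to the
			 * I-cache, then discard any stale ones. */
			clean_dcache_range_impl(start, end);
			invalidate_icache_range_impl(start, end);
		}
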
  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache. In the future, the hope
	is to remove this interface completely.

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel. Such aliases are set up by use of the
vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases. This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency. It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  ``void flush_kernel_vmap_range(void *vaddr, int size)``

	Flushes the kernel cache for a given virtual address range in
	the vmap area. This is to make sure that any data the kernel
	modified in the vmap range is made visible to the physical
	page. The design is to make this area safe to perform I/O on.
	Note that this API does *not* also flush the offset map alias
	of the area.

  ``void invalidate_kernel_vmap_range(void *vaddr, int size)``

	Invalidates the cache for a given virtual address range in
	the vmap area which prevents the processor from making the
	cache stale by speculatively reading data while the I/O was
	occurring to the physical pages. This is only necessary for
	data reads into the vmap area.
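
A sketch of the flush-before, invalidate-after discipline described
above; do_io_read() is a hypothetical stand-in for whatever performs
the physical-page I/O::

	static void example_read_into_vmap_area(void *vmap_addr, int size)
	{
		/* Push any dirty vmap-alias lines to the physical
		 * pages before the device overwrites them. */
		flush_kernel_vmap_range(vmap_addr, size);

		do_io_read(vmap_addr, size);

		/* Discard lines the cpu may have speculatively read
		 * through the vmap alias while I/O was in flight. */
		invalidate_kernel_vmap_range(vmap_addr, size);
	}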