Revert "FROMLIST: android: binder: Move buffer out of area shared with user space"
[GitHub/LineageOS/android_kernel_samsung_universal7580.git] / Documentation / vm / unevictable-lru.txt
CommitLineData
c24b7201
DH
==============================
UNEVICTABLE LRU INFRASTRUCTURE
==============================

========
CONTENTS
========

 (*) The Unevictable LRU

     - The unevictable page list.
     - Memory control group interaction.
     - Marking address spaces unevictable.
     - Detecting Unevictable Pages.
     - vmscan's handling of unevictable pages.

 (*) mlock()'d pages.

     - History.
     - Basic management.
     - mlock()/mlockall() system call handling.
     - Filtering special vmas.
     - munlock()/munlockall() system call handling.
     - Migrating mlocked pages.
     - mmap(MAP_LOCKED) system call handling.
     - munmap()/exit()/exec() system call handling.
     - try_to_unmap().
     - try_to_munlock() reverse map scan.
     - Page reclaim in shrink_*_list().


============
INTRODUCTION
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation. The latter design rationale is discussed in the context of an
implementation description. Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code. One hopes that the
descriptions below add value by providing the answer to "why does it do that?".


===================
THE UNEVICTABLE LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan. This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux. The problems have been observed at customer sites on large
memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single zone. When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning the LRU lists looking for the small fraction
of pages that are evictable. This can result in a situation where all CPUs are
spending 100% of their time in vmscan for hours or days on end, with the system
completely unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 (*) Those owned by ramfs.

 (*) Those mapped into SHM_LOCK'd shared memory regions.

 (*) Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.


THE UNEVICTABLE PAGE LIST
-------------------------

The Unevictable LRU infrastructure consists of an additional, per-zone, LRU
list called the "unevictable" list and an associated page flag, PG_unevictable,
to indicate that the page is being managed on the unevictable list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a page resides when
PG_lru is set.

The Unevictable LRU infrastructure maintains unevictable pages on an additional
LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug. The Linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists. If we were to maintain pages elsewhere than on an LRU-like list,
     where they can be found by isolate_lru_page(), we would prevent their
     migration, unless we reworked migration code to find the unevictable pages
     itself.


The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages. This differentiation is only important while the pages are,
in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-zone LRU
lists and statistics originally proposed and posted by Christoph Lameter.

The unevictable list does not use the LRU pagevec mechanism. Rather,
unevictable pages are placed directly on the page's zone's unevictable list
under the zone lru_lock. This allows us to prevent the stranding of pages on
the unevictable list when one task has the page isolated from the LRU and other
tasks are changing the "evictability" state of the page.


MEMORY CONTROL GROUP INTERACTION
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/cgroups/memory.txt] by extending the
lru_list enum.

The memory controller data structure automatically gets a per-zone unevictable
list as a result of the "arrayification" of the per-zone LRU lists (one per
lru_list enum element). The memory controller tracks the movement of pages to
and from the unevictable list.
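
For reference, the per-zone LRU arrays are indexed by this enum; the following
is a sketch of the definition from include/linux/mmzone.h of this era (check
the header in your tree for the authoritative version):

    /* Sketch of the lru_list enum from include/linux/mmzone.h. */
    #define LRU_BASE	0
    #define LRU_ACTIVE	1
    #define LRU_FILE	2

    enum lru_list {
	    LRU_INACTIVE_ANON = LRU_BASE,
	    LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
	    LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
	    LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
	    LRU_UNEVICTABLE,	/* the additional, hidden-from-reclaim list */
	    NR_LRU_LISTS
    };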

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list. This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have a
     chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory. This can cause
     the control group to thrash or to OOM-kill tasks.


MARKING ADDRESS SPACES UNEVICTABLE
----------------------------------

For facilities such as ramfs none of the pages attached to the address space
may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:

 (*) void mapping_set_unevictable(struct address_space *mapping);

     Mark the address space as being completely unevictable.

 (*) void mapping_clear_unevictable(struct address_space *mapping);

     Mark the address space as being evictable.

 (*) int mapping_unevictable(struct address_space *mapping);

     Query the address space, and return true if it is completely
     unevictable.

These are currently used in two places in the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.

     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory.
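
From userspace, the SYSV SHM case looks roughly like this (a minimal sketch
with error handling trimmed); note the explicit touch, since SHM_LOCK alone
does not fault the pages in:

    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
	    size_t size = 1024 * 1024;
	    int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
	    char *p = shmat(id, NULL, 0);

	    shmctl(id, SHM_LOCK, NULL);	/* marks the mapping AS_UNEVICTABLE */

	    /*
	     * SHM_LOCK pages nothing in; touch the pages to make them
	     * resident (and hence, eventually, on the unevictable list).
	     */
	    memset(p, 0, size);

	    shmctl(id, SHM_UNLOCK, NULL);	/* scans and "rescues" the pages */
	    shmdt(p);
	    shmctl(id, IPC_RMID, NULL);
	    return 0;
    }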


DETECTING UNEVICTABLE PAGES
---------------------------

The function page_evictable() in vmscan.c determines whether a page is
evictable or not using the query function outlined above [see section "Marking
address spaces unevictable"] to check the AS_UNEVICTABLE flag.

For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (eg: SHM_LOCK) can be lazy, and need not populate
the page tables for the region as does, for example, mlock(), nor need it make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list. Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (eg: shmctl()) must scan
the pages in the region and "rescue" them from the unevictable list if no other
condition is keeping them unevictable. If an unevictable region is destroyed,
the pages are also "rescued" from the unevictable list in the process of
freeing them.

page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
faulted into a VM_LOCKED vma, or found in a vma that is being VM_LOCKED.
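
Combining the two tests, page_evictable() reduces to roughly the following (a
sketch of the mm/vmscan.c implementation in kernels where the old VMA argument
has been dropped):

    /*
     * Sketch of page_evictable() from mm/vmscan.c: a page is evictable
     * unless its whole address space is marked unevictable (ramfs,
     * SHM_LOCK) or the page itself is mlocked.
     */
    int page_evictable(struct page *page)
    {
	    return !mapping_unevictable(page_mapping(page)) &&
		   !PageMlocked(page);
    }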


VMSCAN'S HANDLING OF UNEVICTABLE PAGES
--------------------------------------

If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been "rescued"
from the unevictable list. However, there may be situations where we decide,
for the sake of expediency, to leave an unevictable page on one of the regular
active/inactive LRU lists for vmscan to deal with. vmscan checks for such
pages in all of the shrink_{active|inactive|page}_list() functions and will
"cull" such pages that it encounters: that is, it diverts those pages to the
unevictable list for the zone being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked. Such pages will make it all the way to
shrink_page_list() where they will be detected when vmscan walks the reverse
map in try_to_unmap(). If try_to_unmap() returns SWAP_MLOCK,
shrink_page_list() will cull the page at that point.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
using putback_lru_page() - the inverse operation to isolate_lru_page() - after
dropping the page lock. Because the condition which makes the page unevictable
may change once the page is unlocked, putback_lru_page() will recheck the
unevictable state of a page that it places on the unevictable list. If the
page has become evictable, putback_lru_page() removes it from the list and
retries, repeating the page_evictable() test. Because such a race is a rare
event and movement of pages onto the unevictable list should be rare, these
extra evictability checks should not occur in the majority of calls to
putback_lru_page().
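
The recheck-and-retry logic looks roughly like this (a simplified sketch of
mm/vmscan.c:putback_lru_page(); reference counting and statistics are
omitted):

    /* Simplified sketch of putback_lru_page(); the page has been
     * isolated from the LRU by the caller. */
    void putback_lru_page(struct page *page)
    {
	    VM_BUG_ON(PageLRU(page));
    redo:
	    ClearPageUnevictable(page);
	    if (page_evictable(page)) {
		    /* back onto the appropriate active/inactive list */
		    lru_cache_add_lru(page, page_lru_base_type(page));
	    } else {
		    /* onto the zone's unevictable list */
		    add_page_to_unevictable_list(page);
	    }
	    /*
	     * The page's state may have changed while it was off the LRU:
	     * if it landed on the unevictable list but is now evictable,
	     * pull it back off and retry, repeating the page_evictable()
	     * test.
	     */
	    if (PageUnevictable(page) && page_evictable(page) &&
		!isolate_lru_page(page))
		    goto redo;
    }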


=============
MLOCKED PAGES
=============

The unevictable page list is also useful for mlock(), in addition to ramfs and
SYSV SHM. Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.
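
For orientation, the user-facing API whose kernel-side handling is described
below is simply (a minimal sketch, error handling trimmed):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
	    size_t len = 1024 * 1024;
	    char *buf = malloc(len);

	    mlock(buf, len);	/* VMA(s) become VM_LOCKED; pages faulted in */
	    memset(buf, 0, len);	/* touching them cannot fault to disk now */
	    munlock(buf, len);	/* pages are munlocked and "rescued" */

	    /* mlockall(MCL_CURRENT | MCL_FUTURE) locks the whole address
	     * space, including future mappings. */
	    free(buf);
	    return 0;
    }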


HISTORY
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used one of the struct page LRU list link fields as a count
of VM_LOCKED VMAs that map the page. This use of the link field for a count
prevented the management of the pages on an LRU list, and thus mlocked pages
were not migratable as isolate_lru_page() could not find them, and the LRU list
link field was not available to the migration subsystem.

Nick resolved this by putting mlocked pages back on the LRU list before
attempting to isolate them, thus abandoning the count of VM_LOCKED VMAs. When
Nick's patch was integrated with the Unevictable LRU work, the count was
replaced by walking the reverse map to determine whether any VM_LOCKED VMAs
mapped the page. More on this below.


BASIC MANAGEMENT
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages. When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag. This can be manipulated using the
PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU. Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the MCL_FUTURE
     flag;

 (4) in the fault path, if mlocked pages are "culled" there, and when a
     VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA via try_to_unmap(),

all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
already have it set.

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped file;
     or

 (4) before a page is COW'd in a VM_LOCKED VMA.


mlock()/mlockall() SYSTEM CALL HANDLING
---------------------------------------

Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call. In the case of mlockall(),
this is the entire active address space of the task. Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory. A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op, and mlock_fixup() simply returns.

If the VMA passes some filtering as described in "Filtering Special Vmas"
below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
off a subset of the VMA if the range does not cover the entire VMA. Once the
VMA has been merged or split or neither, mlock_fixup() will call
__mlock_vma_pages_range() to fault in the pages via get_user_pages() and to
mark the pages as mlocked via mlock_vma_page().

Note that the VMA being mlocked might be mapped with PROT_NONE. In this case,
get_user_pages() will be unable to fault in the pages. That's okay. If pages
do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
fault path or in vmscan.

Also note that a page returned by get_user_pages() could be truncated or
migrated out from under us, while we're trying to mlock it. To detect this,
__mlock_vma_pages_range() checks page_mapping() after acquiring the page lock.
If the page is still associated with its mapping, we'll go ahead and call
mlock_vma_page(). If the mapping is gone, we just unlock the page and move on.
In the worst case, this will result in a page mapped in a VM_LOCKED VMA
remaining on a normal LRU list without being PageMlocked(). Again, vmscan will
detect and cull such pages.

mlock_vma_page() will call TestSetPageMlocked() for each page returned by
get_user_pages(). We use TestSetPageMlocked() because the page might already
be mlocked by another task/VMA and we don't want to do extra work. We
especially do not want to count an mlocked page more than once in the
statistics. If the page was already mlocked, mlock_vma_page() need do nothing
more.

If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the
page from the LRU, as it is likely on the appropriate active or inactive list
at that time. If isolate_lru_page() succeeds, mlock_vma_page() will put back
the page - by calling putback_lru_page() - which will notice that the page
is now mlocked and divert the page to the zone's unevictable list. If
mlock_vma_page() is unable to isolate the page from the LRU, vmscan will handle
it later if and when it attempts to reclaim the page.
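
The above boils down to a few lines; here is a simplified sketch of
mm/mlock.c:mlock_vma_page() (statistics calls abbreviated):

    /* Simplified sketch of mlock_vma_page(). Called with the page
     * locked by the fault path or by get_user_pages(). */
    void mlock_vma_page(struct page *page)
    {
	    BUG_ON(!PageLocked(page));

	    if (!TestSetPageMlocked(page)) {
		    /* first locker: account the page ... */
		    inc_zone_page_state(page, NR_MLOCK);
		    count_vm_event(UNEVICTABLE_PGMLOCKED);
		    /*
		     * ... and try to move it to the unevictable list.
		     * If isolation fails, vmscan will cull it later.
		     */
		    if (!isolate_lru_page(page))
			    putback_lru_page(page);
	    }
    }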


FILTERING SPECIAL VMAS
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked. In any case, most of the pages have no struct page in which to so
   mark the page. Because of this, get_user_pages() will fail for these VMAs,
   so there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory. We
   neither need nor want to mlock() these pages. However, to preserve the
   prior behavior of mlock() - before the unevictable/mlock changes -
   mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
   allocate the huge pages and populate the ptes.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc. These pages are inherently
   unevictable and are not managed on the LRU lists. mlock_fixup() treats
   these VMAs the same as hugetlbfs VMAs. It calls make_pages_present() to
   populate the ptes.

Note that for all of these special VMAs, mlock_fixup() does not set the
VM_LOCKED flag. Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit. Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".
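
The filter itself amounts to a single test near the top of mlock_fixup();
roughly the following sketch (the helper name is invented for illustration -
in the kernel the flags are typically folded into a VM_SPECIAL mask, the test
appears inline, and the gate VMA is excluded as well):

    /* Illustrative sketch of the "special VMA" test in mlock_fixup(). */
    static int example_vma_is_mlock_special(struct vm_area_struct *vma)
    {
	    return (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_DONTEXPAND)) ||
		    is_vm_hugetlb_page(vma);
    }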


munlock()/munlockall() SYSTEM CALL HANDLING
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same functions -
do_mlock[all]() - as the mlock() and mlockall() system calls with the unlock vs
lock operation indicated by an argument. So, these system calls are also
handled by mlock_fixup(). Again, if called for an already munlocked VMA,
mlock_fixup() simply returns. Because of the VMA filtering discussed above,
VM_LOCKED will not be set in any "special" VMAs. So, these VMAs will be
ignored for munlock.

If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
specified range. The range is then munlocked via the function
__mlock_vma_pages_range() - the same function used to mlock a VMA range -
passing a flag to indicate that munlock() is being performed.

Because the VMA access protections could have been changed to PROT_NONE after
faulting in and mlocking pages, get_user_pages() was unreliable for visiting
these pages for munlocking. Because we don't want to leave pages mlocked,
get_user_pages() was enhanced to accept a flag to ignore the permissions when
fetching the pages - all of which should be resident as a result of previous
mlocking.

For munlock(), __mlock_vma_pages_range() unlocks individual pages by calling
munlock_vma_page(). munlock_vma_page() unconditionally clears the PG_mlocked
flag using TestClearPageMlocked(). As with mlock_vma_page(),
munlock_vma_page() uses the Test*PageMlocked() function to handle the case
where the page might have already been unlocked by another task. If the page
was mlocked, munlock_vma_page() updates the zone statistics for the number of
mlocked pages. Note, however, that at this point we haven't checked whether
the page is mapped by other VM_LOCKED VMAs.
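
In sketch form (simplified from mm/mlock.c:munlock_vma_page(); the exact
statistics and event accounting vary by kernel version):

    /* Simplified sketch of munlock_vma_page(). Called with the page
     * locked. Clears PG_mlocked up front, then consults the reverse
     * map to see whether another VM_LOCKED VMA still pins the page. */
    void munlock_vma_page(struct page *page)
    {
	    BUG_ON(!PageLocked(page));

	    if (TestClearPageMlocked(page)) {
		    dec_zone_page_state(page, NR_MLOCK);
		    if (!isolate_lru_page(page)) {
			    /*
			     * try_to_munlock() walks the reverse map; if it
			     * finds another VM_LOCKED VMA it re-sets
			     * PG_mlocked (returning SWAP_MLOCK), and
			     * putback_lru_page() then re-diverts the page
			     * to the unevictable list.
			     */
			    try_to_munlock(page);
			    putback_lru_page(page);
		    }
		    /*
		     * If isolation failed, a potentially mlocked page stays
		     * on the LRU; vmscan will catch and cull it later.
		     */
	    }
    }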

We can't call try_to_munlock(), the function that walks the reverse map to
check for other VM_LOCKED VMAs, without first isolating the page from the LRU.
try_to_munlock() is a variant of try_to_unmap() and thus requires that the page
not be on an LRU list [more on these below]. However, the call to
isolate_lru_page() could fail, in which case we couldn't try_to_munlock(). So,
we go ahead and clear PG_mlocked up front, as this might be the only chance we
have. If we can successfully isolate the page, we go ahead and
try_to_munlock(), which will restore the PG_mlocked flag and update the zone
page statistics if it finds another VMA holding the page mlocked. If we fail
to isolate the page, we'll have left a potentially mlocked page on the LRU.
This is fine, because we'll catch it later if and when vmscan tries to reclaim
the page. This should be relatively rare.


MIGRATING MLOCKED PAGES
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page. Linux supports migration
of mlocked pages and other unevictable pages. This involves simply moving the
PG_mlocked and PG_unevictable states from the old page to the new page.

Note that page migration can race with mlocking or munlocking of the same page.
This has been discussed from the mlock/munlock perspective in the respective
sections above. Both processes (migration and m[un]locking) hold the page
locked. This provides the first level of synchronization. Page migration
zeros out the page_mapping of the old page before unlocking it, so m[un]lock
can skip these pages by testing the page mapping under page lock.

To complete page migration, we place the new and old pages back onto the LRU
after dropping the page lock. The "unneeded" page - old page on success, new
page on failure - will be freed when the reference count held by the migration
process is released. To ensure that we don't strand pages on the unevictable
list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.


mmap(MAP_LOCKED) SYSTEM CALL HANDLING
-------------------------------------

In addition to the mlock()/mlockall() system calls, an application can request
that a region of memory be mlocked by supplying the MAP_LOCKED flag to the
mmap() call. Furthermore, any mmap() call or brk() call that expands the heap
by a task that has previously called mlockall() with the MCL_FUTURE flag will
result in the newly mapped memory being mlocked. Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages and
populate the page table.
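
From userspace this looks like (a minimal sketch, error handling trimmed):

    #include <sys/mman.h>

    int main(void)
    {
	    size_t len = 1024 * 1024;

	    /* mapped and mlocked in one step; pages populated up front */
	    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

	    munmap(p, len);	/* also munlocks - see the next section */
	    return 0;
    }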

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
mlock_vma_pages_range() specifying the VMA and the address range to mlock.
mlock_vma_pages_range() filters VMAs like mlock_fixup(), as described above in
"Filtering Special VMAs". It will clear the VM_LOCKED flag, which will have
already been set by the caller, in filtered VMAs. Thus these VMAs need not be
visited for munlock when the region is unmapped.

For "normal" VMAs, mlock_vma_pages_range() calls __mlock_vma_pages_range() to
fault/allocate the pages and mlock them. Again, like mlock_fixup(),
mlock_vma_pages_range() downgrades the mmap semaphore to read mode before
attempting to fault/allocate and mlock the pages and "upgrades" the semaphore
back to write mode before returning.

The callers of mlock_vma_pages_range() will have already added the memory range
to be mlocked to the task's "locked_vm". To account for filtered VMAs,
mlock_vma_pages_range() returns the number of pages NOT mlocked. All of the
callers then subtract a non-negative return value from the task's locked_vm. A
negative return value represents an error - for example, from get_user_pages()
attempting to fault in a VMA with PROT_NONE access. In this case, we leave the
memory range accounted as locked_vm, as the protections could be changed later
and pages allocated into that region.


munmap()/exit()/exec() SYSTEM CALL HANDLING
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
Before the unevictable/mlock changes, mlocking did not mark the pages in any
way, so unmapping them required no processing.

To munlock a range of memory under the unevictable/mlock infrastructure, the
munmap() handler and the task address space tear-down function call
munlock_vma_pages_all(). The name reflects the observation that one always
specifies the entire VMA range when munlock()ing during unmap of a region.
Because of the VMA filtering when mlocking regions, only "normal" VMAs that
actually contain mlocked pages will be passed to munlock_vma_pages_all().

munlock_vma_pages_all() clears the VM_LOCKED VMA flag and, like mlock_fixup()
for the munlock case, calls __munlock_vma_pages_range() to walk the page table
for the VMA's memory range and munlock_vma_page() each resident page mapped by
the VMA. This effectively munlocks the page, but only if this is the last
VM_LOCKED VMA that maps the page.


try_to_unmap()
--------------

Pages can, of course, be mapped into multiple VMAs. Some of these VMAs may
have the VM_LOCKED flag set. It is possible for a page mapped into one or more
VM_LOCKED VMAs not to have the PG_mlocked flag set and therefore reside on one
of the active or inactive LRU lists. This could happen if, for example, a task
in the process of munlocking the page could not isolate the page from the LRU.
As a result, vmscan/shrink_page_list() might encounter such a page as described
in section "vmscan's handling of unevictable pages". To handle this situation,
try_to_unmap() checks for VM_LOCKED VMAs while it is walking a page's reverse
map.

try_to_unmap() is always called, by either vmscan for reclaim or for page
migration, with the argument page locked and isolated from the LRU. Separate
functions handle anonymous and mapped file pages, as these types of pages have
different reverse map mechanisms.

 (*) try_to_unmap_anon()

     To unmap anonymous pages, each VMA in the list anchored in the anon_vma
     must be visited - at least until a VM_LOCKED VMA is encountered. If the
     page is being unmapped for migration, VM_LOCKED VMAs do not stop the
     process because mlocked pages are migratable. However, for reclaim, if
     the page is mapped into a VM_LOCKED VMA, the scan stops.

     try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore of
     the mm_struct to which the VMA belongs. If this is successful, it will
     mlock the page via mlock_vma_page() - we wouldn't have gotten to
     try_to_unmap_anon() if the page were already mlocked - and will return
     SWAP_MLOCK, indicating that the page is unevictable.

     If the mmap semaphore cannot be acquired, we are not sure whether the page
     is really unevictable or not. In this case, try_to_unmap_anon() will
     return SWAP_AGAIN.

 (*) try_to_unmap_file() - linear mappings

     Unmapping of a mapped file page works the same as for anonymous mappings,
     except that the scan visits all VMAs that map the page's index/page offset
     in the page's mapping's reverse map priority search tree. It also visits
     each VMA in the page's mapping's non-linear list, if the list is
     non-empty.

     As for anonymous pages, on encountering a VM_LOCKED VMA for a mapped file
     page, try_to_unmap_file() will attempt to acquire the associated
     mm_struct's mmap semaphore to mlock the page, returning SWAP_MLOCK if this
     is successful, and SWAP_AGAIN, if not.

 (*) try_to_unmap_file() - non-linear mappings

     If a page's mapping contains a non-empty non-linear mapping VMA list, then
     try_to_un{map|lock}() must also visit each VMA in that list to determine
     whether the page is mapped in a VM_LOCKED VMA. Again, the scan must visit
     all VMAs in the non-linear list to ensure that the page is not/should not
     be mlocked.

     If a VM_LOCKED VMA is found in the list, the scan could terminate.
     However, there is no easy way to determine whether the page is actually
     mapped in a given VMA - either for unmapping or testing whether the
     VM_LOCKED VMA actually pins the page.

     try_to_unmap_file() handles non-linear mappings by scanning a certain
     number of pages - a "cluster" - in each non-linear VMA associated with the
     page's mapping, for each file mapped page that vmscan tries to unmap. If
     this happens to unmap the page we're trying to unmap, try_to_unmap() will
     notice this on return (page_mapcount(page) will be 0) and return
     SWAP_SUCCESS. Otherwise, it will return SWAP_AGAIN, causing vmscan to
     recirculate this page. We take advantage of the cluster scan in
     try_to_unmap_cluster() as follows:

     For each non-linear VMA, try_to_unmap_cluster() attempts to acquire the
     mmap semaphore of the associated mm_struct for read without blocking.

     If this attempt is successful and the VMA is VM_LOCKED,
     try_to_unmap_cluster() will retain the mmap semaphore for the scan;
     otherwise it drops it here.

     Then, for each page in the cluster, if we're holding the mmap semaphore
     for a locked VMA, try_to_unmap_cluster() calls mlock_vma_page() to
     mlock the page. This call is a no-op if the page is already locked,
     but will mlock any pages in the non-linear mapping that happen to be
     unlocked.

     If one of the pages so mlocked is the page passed in to try_to_unmap(),
     try_to_unmap_cluster() will return SWAP_MLOCK, rather than the default
     SWAP_AGAIN. This will allow vmscan to cull the page, rather than
     recirculating it on the inactive list.

     Again, if try_to_unmap_cluster() cannot acquire the VMA's mmap sem, it
     returns SWAP_AGAIN, indicating that the page is mapped by a VM_LOCKED
     VMA, but couldn't be mlocked.


try_to_munlock() REVERSE MAP SCAN
---------------------------------

 [!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
     page_referenced() reverse map walker.

When munlock_vma_page() [see section "munlock()/munlockall() System Call
Handling" above] tries to munlock a page, it needs to determine whether or not
the page is mapped by any VM_LOCKED VMA without actually attempting to unmap
all PTEs from the page. For this purpose, the unevictable/mlock infrastructure
introduced a variant of try_to_unmap() called try_to_munlock().

try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
mapped file pages with an additional argument specifying unlock versus unmap
processing. Again, these functions walk the respective reverse maps looking
for VM_LOCKED VMAs. When such a VMA is found for anonymous pages and file
pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
attempt to acquire the associated mmap semaphore, mlock the page via
mlock_vma_page() and return SWAP_MLOCK. This effectively undoes the
pre-clearing of the page's PG_mlocked done by munlock_vma_page().
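
The dispatch itself is tiny; roughly the following sketch of
mm/rmap.c:try_to_munlock(), with a TTU_MUNLOCK flag selecting the munlock
behaviour in the shared walkers (details vary by kernel version):

    /* Sketch of try_to_munlock(): reuse the try_to_unmap() walkers, but
     * with TTU_MUNLOCK so they only test for VM_LOCKED VMAs (re-mlocking
     * the page if one is found) instead of unmapping PTEs. */
    int try_to_munlock(struct page *page)
    {
	    VM_BUG_ON(!PageLocked(page) || PageLRU(page));

	    if (PageAnon(page))
		    return try_to_unmap_anon(page, TTU_MUNLOCK);
	    else
		    return try_to_unmap_file(page, TTU_MUNLOCK);
    }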

If try_to_unmap() is unable to acquire a VM_LOCKED VMA's associated mmap
semaphore, it will return SWAP_AGAIN. This will allow shrink_page_list() to
recycle the page on the inactive list and hope that it has better luck with the
page next time.

For file pages mapped into non-linear VMAs, the try_to_munlock() logic works
slightly differently. On encountering a VM_LOCKED non-linear VMA that might
map the page, try_to_munlock() returns SWAP_AGAIN without actually mlocking the
page. munlock_vma_page() will just leave the page unlocked and let vmscan deal
with it - the usual fallback position.

Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
However, the scan can terminate when it encounters a VM_LOCKED VMA and can
successfully acquire the VMA's mmap semaphore for read and mlock the page.
Although try_to_munlock() might be called a great many times when munlocking a
large region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.


PAGE RECLAIM IN shrink_*_list()
-------------------------------

shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page) - diverting these to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive LRU lists. Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region. This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

 (4) Pages mapped into multiple VM_LOCKED VMAs, but try_to_munlock() couldn't
     acquire the VMA's mmap semaphore to test the flags and set PageMlocked.
     munlock_vma_page() was forced to let the page back on to the normal LRU
     list for vmscan to handle.

shrink_inactive_list() also diverts any unevictable pages that it finds on the
inactive lists to the appropriate zone's unevictable list.

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages mapped
into VM_LOCKED VMAs that munlock_vma_page() couldn't isolate from the LRU to
recheck via try_to_munlock(). shrink_inactive_list() won't notice the latter,
but will pass them on to shrink_page_list().

shrink_page_list() again culls obviously unevictable pages that it could
encounter for similar reasons to shrink_inactive_list(). Pages mapped into
VM_LOCKED VMAs but without PG_mlocked set will make it all the way to
try_to_unmap(). shrink_page_list() will divert them to the unevictable list
when try_to_unmap() returns SWAP_MLOCK, as discussed above.
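
The cull point in mm/vmscan.c:shrink_page_list() is roughly the following (a
condensed, excerpt-style sketch; the surrounding reclaim logic is omitted):

    /*
     * Condensed sketch of the unmap step in shrink_page_list().
     * SWAP_MLOCK from try_to_unmap() means a VM_LOCKED VMA was found:
     * the page is sent to the cull_mlocked label, where it is put back
     * via putback_lru_page() and so diverted to the unevictable list,
     * rather than being reclaimed or recirculated.
     */
    if (page_mapped(page) && mapping) {
	    switch (try_to_unmap(page, TTU_UNMAP)) {
	    case SWAP_FAIL:
		    goto activate_locked;
	    case SWAP_AGAIN:
		    goto keep_locked;
	    case SWAP_MLOCK:
		    goto cull_mlocked;
	    case SWAP_SUCCESS:
		    ; /* try to free the page below */
	    }
    }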