Documentation for /proc/sys/vm/* kernel version 2.2.10
(c) 1998, 1999, Rik van Riel <riel@nl.linux.org>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- overcommit_memory
- page-cluster
- dirty_ratio
- dirty_background_ratio
- dirty_expire_centisecs
- dirty_writeback_centisecs
- highmem_is_dirtyable (only if CONFIG_HIGHMEM set)
- max_map_count
- min_free_kbytes
- laptop_mode
- block_dump
- drop-caches
- zone_reclaim_mode
- min_unmapped_ratio
- min_slab_ratio
- panic_on_oom
- oom_dump_tasks
- oom_kill_allocating_task
- mmap_min_addr
- numa_zonelist_order
- nr_hugepages
- nr_overcommit_hugepages

==============================================================

dirty_bytes, dirty_ratio, dirty_background_bytes,
dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, highmem_is_dirtyable,
vfs_cache_pressure, laptop_mode, block_dump, swap_token_timeout,
drop-caches, hugepages_treat_as_movable:

See Documentation/filesystems/proc.txt

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

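As an illustration (not part of the original document), the effect of
this flag can be seen with a minimal C sketch that requests a very
large allocation without touching it. With overcommit_memory set to 1
the request will typically succeed, while the "never overcommit"
policy (2) refuses it once the commit limit would be exceeded:

    /* Minimal sketch: ask for a huge region of address space without
       touching any pages. The 1 TiB figure is an arbitrary example. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t huge = (size_t)1 << 40;  /* 1 TiB of address space */
        void *p = malloc(huge);         /* reserves, touches nothing */

        printf("malloc(1 TiB) %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }
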
==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.

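For example (hypothetical numbers): with 2 GB of swap, 4 GB of
physical RAM and overcommit_ratio set to 50, the committed address
space may not exceed

    2 GB + (50% of 4 GB) = 4 GB
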
==============================================================

page-cluster:

The Linux VM subsystem avoids excessive disk seeks by reading
multiple pages on a page fault. The number of pages it reads
is dependent on the amount of memory in your machine.

The number of pages the kernel reads in at once is equal to
2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
for swap because we only cluster swap data in 32-page groups.

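For example (hypothetical value): with page-cluster set to 3, each
swap read-in brings in 2 ^ 3 = 8 pages at once. The largest useful
value is 5, since 2 ^ 5 = 32 matches the swap clustering group size.
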
==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.

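As a rough illustration (not part of the original document), the
following C sketch exhausts the limit: each one-page anonymous
mapping consumes one map area, so creating mappings in a loop
eventually makes mmap() fail with ENOMEM once max_map_count is
reached:

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        long n = 0;

        for (;;) {
            /* Alternate protections so neighbouring mappings cannot
               be merged into a single map area. */
            int prot = (n & 1) ? PROT_READ : PROT_READ | PROT_WRITE;

            if (mmap(NULL, 4096, prot,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) == MAP_FAILED) {
                printf("mmap failed after %ld maps: %s\n",
                       n, strerror(errno));
                return 0;
            }
            n++;
        }
    }
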
==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages in proportion to its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

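For example (hypothetical numbers): on a machine with 4KB pages,
min_free_kbytes set to 1024 reserves 1024 / 4 = 256 pages in total,
divided among the lowmem zones in proportion to their sizes.
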
==============================================================

percpu_pagelist_fraction

This is the maximum fraction of pages in each zone (the high mark,
pcp->high) that may be allocated to each per-cpu page list. The
minimum value for this is 8, which means that we don't allow more
than 1/8th of the pages in each zone to be allocated in any single
per_cpu_pagelist. This entry only changes the value of hot per-cpu
pagelists. A user can specify a number like 100 to allocate 1/100th
of each zone to each per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.
It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero. The kernel does not use this value at boot
time to set the high water marks for each per-cpu page list.
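
For example (hypothetical numbers): in a zone of 1,000,000 pages,
writing 100 to this file sets pcp->high to 1,000,000 / 100 = 10,000
pages; the resulting batch value of 10,000 / 4 = 2,500 is then capped
at PAGE_SHIFT * 8 = 96 on architectures with 4KB pages (PAGE_SHIFT =
12).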

===============================================================

zone_reclaim_mode:

zone_reclaim_mode allows an administrator to set more or less
aggressive approaches to reclaiming memory when a zone runs out of
memory. If it is set to zero, then no zone reclaim occurs and
allocations will be satisfied from other zones / nodes in the system.

This value is an OR of the following flags:

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages

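For example, writing 3 (= 1 | 2) enables zone reclaim and allows it
to write out dirty pages, while writing 7 additionally allows it to
swap pages.
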
zone_reclaim_mode is set during bootup to 1 if it is determined that
pages from remote zones will cause a measurable performance reduction.
The page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off-node
pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching
files from disk. In that case the caching effect is more important
than data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so
effectively throttles the process. This may decrease the performance
of a single process, since it can no longer use all of system memory
to buffer outgoing writes, but it preserves the memory on other nodes
so that the performance of other processes running on those nodes is
not affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and
unmapped. This is to ensure that a minimal amount of local pages is
still available for file I/O even if the node is overallocated.

The default is 1 percent.

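For example (hypothetical numbers): in a zone of 1,000,000 pages with
the default of 1 percent, zone reclaim is only attempted while more
than 10,000 of those pages are file backed and unmapped.
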
=============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone. On zone reclaim (i.e.,
when fallback from the local zone occurs) slabs will be reclaimed if
more than this percentage of pages in a zone are reclaimable slab
pages. This ensures that slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

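Analogously (hypothetical numbers): in a zone of 1,000,000 pages with
the default of 5 percent, slabs are reclaimed during zone reclaim
whenever more than 50,000 of those pages are reclaimable slab pages.
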
Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

=============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via a
mechanism called the oom_killer. Usually, the oom_killer can kill a
rogue process and the system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy/cpusets, and those nodes reach memory exhaustion, one
process may be killed by the oom_killer and no panic occurs, because
other nodes' memory may still be free and the system as a whole may
not yet be in a fatal state.

If this is set to 2, the kernel panics unconditionally even in the
above-mentioned situation.

The default value is 0.
Values 1 and 2 are intended for failover in clustered systems; select
one according to your failover policy.

=============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and
name. This is helpful to determine why the OOM killer was invoked
and to identify the rogue task that caused it.

If this is set to zero, this information is suppressed. On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one. Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 0.

=============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process
will be restricted from mmapping. Since kernel null dereference bugs
could accidentally operate based on the information in the first
couple of pages of memory, userspace processes should not be allowed
to write to them. By default this value is set to 0 and no
protections will be enforced by the security module. Setting this
value to something like 64k will allow the vast majority of
applications to work correctly and provide defense in depth against
future potential kernel bugs.

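A minimal C sketch (not part of the original document; it assumes a
security module enforcing mmap_min_addr and a process without the
capability to override it) shows the protection in action by trying
to map page zero:

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Try to place a mapping at virtual address 0. With
           mmap_min_addr set to e.g. 64k this is expected to be
           refused by the security module. */
        void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        if (p == MAP_FAILED)
            printf("mmap at address 0 refused: %s\n", strerror(errno));
        else
            printf("mmap at address 0 succeeded\n");
        return 0;
    }
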
==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'Where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for the sake of a
simple explanation; you may read ZONE_DMA as ZONE_DMA32 where
applicable.)

In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as
follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below are zonelists for Node(0)'s
GFP_KERNEL:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for processes on Node(0), but
ZONE_DMA will be used before ZONE_NORMAL is exhausted. This increases
the possibility of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA
tends to be small.

Type (B) cannot offer the best locality but is more robust against
OOM of the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

319 | "Node order" orders the zonelists by node, then by zone within each node. | |
320 | Specify "[Nn]ode" for zone order | |
321 | ||
322 | "Zone Order" orders the zonelists by zone type, then by node within each | |
323 | zone. Specify "[Zz]one"for zode order. | |
324 | ||
325 | Specify "[Dd]efault" to request automatic configuration. Autoconfiguration | |
326 | will select "node" order in following case. | |
327 | (1) if the DMA zone does not exist or | |
328 | (2) if the DMA zone comprises greater than 50% of the available memory or | |
329 | (3) if any node's DMA zone comprises greater than 60% of its local memory and | |
330 | the amount of local memory is big enough. | |
331 | ||
332 | Otherwise, "zone" order will be selected. Default order is recommended unless | |
333 | this is causing problems for your system/application. | |
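
For example, writing the string "zone" (or "Zone") to
/proc/sys/vm/numa_zonelist_order selects zone order, and writing
"default" returns to automatic configuration.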

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt