Mel Gorman [Fri, 20 May 2016 00:13:21 +0000 (17:13 -0700)]
mm, page_alloc: use new PageAnonHead helper in the free page fast path
The PageAnon check always checks for compound_head but this is a
relatively expensive check if the caller already knows the page is a
head page. This patch creates a helper and uses it in the page free
path which only operates on head pages.
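As a rough sketch of the shape of the change (illustrative, not the exact
diff), the head-only helper can test page->mapping directly, while
PageAnon() keeps the compound_head() lookup for callers that may pass
tail pages:

    static __always_inline int PageAnonHead(struct page *page)
    {
            /* no compound_head() lookup: the caller guarantees a head page */
            return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
    }

    static __always_inline int PageAnon(struct page *page)
    {
            page = compound_head(page);
            return PageAnonHead(page);
    }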
With this patch and "Only check PageCompound for high-order pages", the
performance difference on a page allocator microbenchmark is:
4.6.0-rc2 4.6.0-rc2
vanilla nocompound-v1r20
Min alloc-odr0-1 425.00 ( 0.00%) 417.00 ( 1.88%)
Min alloc-odr0-2 313.00 ( 0.00%) 308.00 ( 1.60%)
Min alloc-odr0-4 257.00 ( 0.00%) 253.00 ( 1.56%)
Min alloc-odr0-8 224.00 ( 0.00%) 221.00 ( 1.34%)
Min alloc-odr0-16 208.00 ( 0.00%) 205.00 ( 1.44%)
Min alloc-odr0-32 199.00 ( 0.00%) 199.00 ( 0.00%)
Min alloc-odr0-64 195.00 ( 0.00%) 193.00 ( 1.03%)
Min alloc-odr0-128 192.00 ( 0.00%) 191.00 ( 0.52%)
Min alloc-odr0-256 204.00 ( 0.00%) 200.00 ( 1.96%)
Min alloc-odr0-512 213.00 ( 0.00%) 212.00 ( 0.47%)
Min alloc-odr0-1024 219.00 ( 0.00%) 219.00 ( 0.00%)
Min alloc-odr0-2048 225.00 ( 0.00%) 225.00 ( 0.00%)
Min alloc-odr0-4096 230.00 ( 0.00%) 231.00 ( -0.43%)
Min alloc-odr0-8192 235.00 ( 0.00%) 234.00 ( 0.43%)
Min alloc-odr0-16384 235.00 ( 0.00%) 234.00 ( 0.43%)
Min free-odr0-1 215.00 ( 0.00%) 191.00 ( 11.16%)
Min free-odr0-2 152.00 ( 0.00%) 136.00 ( 10.53%)
Min free-odr0-4 119.00 ( 0.00%) 107.00 ( 10.08%)
Min free-odr0-8 106.00 ( 0.00%) 96.00 ( 9.43%)
Min free-odr0-16 97.00 ( 0.00%) 87.00 ( 10.31%)
Min free-odr0-32 91.00 ( 0.00%) 83.00 ( 8.79%)
Min free-odr0-64 89.00 ( 0.00%) 81.00 ( 8.99%)
Min free-odr0-128 88.00 ( 0.00%) 80.00 ( 9.09%)
Min free-odr0-256 106.00 ( 0.00%) 95.00 ( 10.38%)
Min free-odr0-512 116.00 ( 0.00%) 111.00 ( 4.31%)
Min free-odr0-1024 125.00 ( 0.00%) 118.00 ( 5.60%)
Min free-odr0-2048 133.00 ( 0.00%) 126.00 ( 5.26%)
Min free-odr0-4096 136.00 ( 0.00%) 130.00 ( 4.41%)
Min free-odr0-8192 138.00 ( 0.00%) 130.00 ( 5.80%)
Min free-odr0-16384 137.00 ( 0.00%) 130.00 ( 5.11%)
There is a sizable boost to the free allocator performance. While there
is an apparent boost on the allocation side, it's likely a coincidence
or due to the patches slightly reducing cache footprint.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Fri, 20 May 2016 00:13:18 +0000 (17:13 -0700)]
mm, page_alloc: only check PageCompound for high-order pages
Another year, another round of page allocator optimisations focusing
this time on the alloc and free fast paths. This should be of help to
workloads that are allocator-intensive from kernel space where the cost
of zeroing is not necessarily incurred.
The series is motivated by the observation that page alloc
microbenchmarks on multiple machines regressed between 3.12.44 and 4.4.
Second, there were discussions before LSF/MM about the possibility of
adding another page allocator, which is potentially hazardous, but a
patch series improving performance is better than whining.
After the series is applied, there are still hazards. In the free
paths, the debugging checks and page zone/pageblock lookups dominate
but there was not an obvious solution to that. In the alloc path, the
major contributors are dealing with zonelists, new page preparation, the
fair zone allocation and numerous statistic updates. The fair zone
allocator is removed by the per-node LRU series if that gets merged so
it's not a major concern at the moment.
On normal userspace benchmarks, there is little impact as the zeroing
cost is significant, but it is visible:
aim9
4.6.0-rc3 4.6.0-rc3
vanilla deferalloc-v3
Min page_test 828693.33 ( 0.00%) 887060.00 ( 7.04%)
Min brk_test 4847266.67 ( 0.00%) 4966266.67 ( 2.45%)
Min exec_test 1271.00 ( 0.00%) 1275.67 ( 0.37%)
Min fork_test 12371.75 ( 0.00%) 12380.00 ( 0.07%)
The overall impact on a page allocator microbenchmark for a range of orders
and number of pages allocated in a batch is
4.6.0-rc3 4.6.0-rc3
vanilla deferalloc-v3r7
Min alloc-odr0-1 428.00 ( 0.00%) 316.00 ( 26.17%)
Min alloc-odr0-2 314.00 ( 0.00%) 231.00 ( 26.43%)
Min alloc-odr0-4 256.00 ( 0.00%) 192.00 ( 25.00%)
Min alloc-odr0-8 222.00 ( 0.00%) 166.00 ( 25.23%)
Min alloc-odr0-16 207.00 ( 0.00%) 154.00 ( 25.60%)
Min alloc-odr0-32 197.00 ( 0.00%) 148.00 ( 24.87%)
Min alloc-odr0-64 193.00 ( 0.00%) 144.00 ( 25.39%)
Min alloc-odr0-128 191.00 ( 0.00%) 143.00 ( 25.13%)
Min alloc-odr0-256 203.00 ( 0.00%) 153.00 ( 24.63%)
Min alloc-odr0-512 212.00 ( 0.00%) 165.00 ( 22.17%)
Min alloc-odr0-1024 221.00 ( 0.00%) 172.00 ( 22.17%)
Min alloc-odr0-2048 225.00 ( 0.00%) 179.00 ( 20.44%)
Min alloc-odr0-4096 232.00 ( 0.00%) 185.00 ( 20.26%)
Min alloc-odr0-8192 235.00 ( 0.00%) 187.00 ( 20.43%)
Min alloc-odr0-16384 236.00 ( 0.00%) 188.00 ( 20.34%)
Min alloc-odr1-1 519.00 ( 0.00%) 450.00 ( 13.29%)
Min alloc-odr1-2 391.00 ( 0.00%) 336.00 ( 14.07%)
Min alloc-odr1-4 313.00 ( 0.00%) 268.00 ( 14.38%)
Min alloc-odr1-8 277.00 ( 0.00%) 235.00 ( 15.16%)
Min alloc-odr1-16 256.00 ( 0.00%) 218.00 ( 14.84%)
Min alloc-odr1-32 252.00 ( 0.00%) 212.00 ( 15.87%)
Min alloc-odr1-64 244.00 ( 0.00%) 206.00 ( 15.57%)
Min alloc-odr1-128 244.00 ( 0.00%) 207.00 ( 15.16%)
Min alloc-odr1-256 243.00 ( 0.00%) 207.00 ( 14.81%)
Min alloc-odr1-512 245.00 ( 0.00%) 209.00 ( 14.69%)
Min alloc-odr1-1024 248.00 ( 0.00%) 214.00 ( 13.71%)
Min alloc-odr1-2048 253.00 ( 0.00%) 220.00 ( 13.04%)
Min alloc-odr1-4096 258.00 ( 0.00%) 224.00 ( 13.18%)
Min alloc-odr1-8192 261.00 ( 0.00%) 229.00 ( 12.26%)
Min alloc-odr2-1 560.00 ( 0.00%) 753.00 (-34.46%)
Min alloc-odr2-2 424.00 ( 0.00%) 351.00 ( 17.22%)
Min alloc-odr2-4 339.00 ( 0.00%) 393.00 (-15.93%)
Min alloc-odr2-8 298.00 ( 0.00%) 246.00 ( 17.45%)
Min alloc-odr2-16 276.00 ( 0.00%) 227.00 ( 17.75%)
Min alloc-odr2-32 271.00 ( 0.00%) 221.00 ( 18.45%)
Min alloc-odr2-64 264.00 ( 0.00%) 217.00 ( 17.80%)
Min alloc-odr2-128 264.00 ( 0.00%) 217.00 ( 17.80%)
Min alloc-odr2-256 264.00 ( 0.00%) 218.00 ( 17.42%)
Min alloc-odr2-512 269.00 ( 0.00%) 223.00 ( 17.10%)
Min alloc-odr2-1024 279.00 ( 0.00%) 230.00 ( 17.56%)
Min alloc-odr2-2048 283.00 ( 0.00%) 235.00 ( 16.96%)
Min alloc-odr2-4096 285.00 ( 0.00%) 239.00 ( 16.14%)
Min alloc-odr3-1 629.00 ( 0.00%) 505.00 ( 19.71%)
Min alloc-odr3-2 472.00 ( 0.00%) 374.00 ( 20.76%)
Min alloc-odr3-4 383.00 ( 0.00%) 301.00 ( 21.41%)
Min alloc-odr3-8 341.00 ( 0.00%) 266.00 ( 21.99%)
Min alloc-odr3-16 316.00 ( 0.00%) 248.00 ( 21.52%)
Min alloc-odr3-32 308.00 ( 0.00%) 241.00 ( 21.75%)
Min alloc-odr3-64 305.00 ( 0.00%) 241.00 ( 20.98%)
Min alloc-odr3-128 308.00 ( 0.00%) 244.00 ( 20.78%)
Min alloc-odr3-256 317.00 ( 0.00%) 249.00 ( 21.45%)
Min alloc-odr3-512 327.00 ( 0.00%) 256.00 ( 21.71%)
Min alloc-odr3-1024 331.00 ( 0.00%) 261.00 ( 21.15%)
Min alloc-odr3-2048 333.00 ( 0.00%) 266.00 ( 20.12%)
Min alloc-odr4-1 767.00 ( 0.00%) 572.00 ( 25.42%)
Min alloc-odr4-2 578.00 ( 0.00%) 429.00 ( 25.78%)
Min alloc-odr4-4 474.00 ( 0.00%) 346.00 ( 27.00%)
Min alloc-odr4-8 422.00 ( 0.00%) 310.00 ( 26.54%)
Min alloc-odr4-16 399.00 ( 0.00%) 295.00 ( 26.07%)
Min alloc-odr4-32 392.00 ( 0.00%) 293.00 ( 25.26%)
Min alloc-odr4-64 394.00 ( 0.00%) 293.00 ( 25.63%)
Min alloc-odr4-128 405.00 ( 0.00%) 305.00 ( 24.69%)
Min alloc-odr4-256 417.00 ( 0.00%) 319.00 ( 23.50%)
Min alloc-odr4-512 425.00 ( 0.00%) 326.00 ( 23.29%)
Min alloc-odr4-1024 426.00 ( 0.00%) 329.00 ( 22.77%)
Min free-odr0-1 216.00 ( 0.00%) 178.00 ( 17.59%)
Min free-odr0-2 152.00 ( 0.00%) 125.00 ( 17.76%)
Min free-odr0-4 120.00 ( 0.00%) 99.00 ( 17.50%)
Min free-odr0-8 106.00 ( 0.00%) 85.00 ( 19.81%)
Min free-odr0-16 97.00 ( 0.00%) 80.00 ( 17.53%)
Min free-odr0-32 92.00 ( 0.00%) 76.00 ( 17.39%)
Min free-odr0-64 89.00 ( 0.00%) 74.00 ( 16.85%)
Min free-odr0-128 89.00 ( 0.00%) 73.00 ( 17.98%)
Min free-odr0-256 107.00 ( 0.00%) 90.00 ( 15.89%)
Min free-odr0-512 117.00 ( 0.00%) 108.00 ( 7.69%)
Min free-odr0-1024 125.00 ( 0.00%) 118.00 ( 5.60%)
Min free-odr0-2048 132.00 ( 0.00%) 125.00 ( 5.30%)
Min free-odr0-4096 135.00 ( 0.00%) 130.00 ( 3.70%)
Min free-odr0-8192 137.00 ( 0.00%) 130.00 ( 5.11%)
Min free-odr0-16384 137.00 ( 0.00%) 131.00 ( 4.38%)
Min free-odr1-1 318.00 ( 0.00%) 289.00 ( 9.12%)
Min free-odr1-2 228.00 ( 0.00%) 207.00 ( 9.21%)
Min free-odr1-4 182.00 ( 0.00%) 165.00 ( 9.34%)
Min free-odr1-8 163.00 ( 0.00%) 146.00 ( 10.43%)
Min free-odr1-16 151.00 ( 0.00%) 135.00 ( 10.60%)
Min free-odr1-32 146.00 ( 0.00%) 129.00 ( 11.64%)
Min free-odr1-64 145.00 ( 0.00%) 130.00 ( 10.34%)
Min free-odr1-128 148.00 ( 0.00%) 134.00 ( 9.46%)
Min free-odr1-256 148.00 ( 0.00%) 137.00 ( 7.43%)
Min free-odr1-512 151.00 ( 0.00%) 140.00 ( 7.28%)
Min free-odr1-1024 154.00 ( 0.00%) 143.00 ( 7.14%)
Min free-odr1-2048 156.00 ( 0.00%) 144.00 ( 7.69%)
Min free-odr1-4096 156.00 ( 0.00%) 142.00 ( 8.97%)
Min free-odr1-8192 156.00 ( 0.00%) 140.00 ( 10.26%)
Min free-odr2-1 361.00 ( 0.00%) 457.00 (-26.59%)
Min free-odr2-2 258.00 ( 0.00%) 224.00 ( 13.18%)
Min free-odr2-4 208.00 ( 0.00%) 223.00 ( -7.21%)
Min free-odr2-8 185.00 ( 0.00%) 160.00 ( 13.51%)
Min free-odr2-16 173.00 ( 0.00%) 149.00 ( 13.87%)
Min free-odr2-32 166.00 ( 0.00%) 145.00 ( 12.65%)
Min free-odr2-64 166.00 ( 0.00%) 146.00 ( 12.05%)
Min free-odr2-128 169.00 ( 0.00%) 148.00 ( 12.43%)
Min free-odr2-256 170.00 ( 0.00%) 152.00 ( 10.59%)
Min free-odr2-512 177.00 ( 0.00%) 156.00 ( 11.86%)
Min free-odr2-1024 182.00 ( 0.00%) 162.00 ( 10.99%)
Min free-odr2-2048 181.00 ( 0.00%) 160.00 ( 11.60%)
Min free-odr2-4096 180.00 ( 0.00%) 159.00 ( 11.67%)
Min free-odr3-1 431.00 ( 0.00%) 367.00 ( 14.85%)
Min free-odr3-2 306.00 ( 0.00%) 259.00 ( 15.36%)
Min free-odr3-4 249.00 ( 0.00%) 208.00 ( 16.47%)
Min free-odr3-8 224.00 ( 0.00%) 186.00 ( 16.96%)
Min free-odr3-16 208.00 ( 0.00%) 176.00 ( 15.38%)
Min free-odr3-32 206.00 ( 0.00%) 174.00 ( 15.53%)
Min free-odr3-64 210.00 ( 0.00%) 178.00 ( 15.24%)
Min free-odr3-128 215.00 ( 0.00%) 182.00 ( 15.35%)
Min free-odr3-256 224.00 ( 0.00%) 189.00 ( 15.62%)
Min free-odr3-512 232.00 ( 0.00%) 195.00 ( 15.95%)
Min free-odr3-1024 230.00 ( 0.00%) 195.00 ( 15.22%)
Min free-odr3-2048 229.00 ( 0.00%) 193.00 ( 15.72%)
Min free-odr4-1 561.00 ( 0.00%) 439.00 ( 21.75%)
Min free-odr4-2 418.00 ( 0.00%) 318.00 ( 23.92%)
Min free-odr4-4 339.00 ( 0.00%) 269.00 ( 20.65%)
Min free-odr4-8 299.00 ( 0.00%) 239.00 ( 20.07%)
Min free-odr4-16 289.00 ( 0.00%) 234.00 ( 19.03%)
Min free-odr4-32 291.00 ( 0.00%) 235.00 ( 19.24%)
Min free-odr4-64 298.00 ( 0.00%) 238.00 ( 20.13%)
Min free-odr4-128 308.00 ( 0.00%) 251.00 ( 18.51%)
Min free-odr4-256 321.00 ( 0.00%) 267.00 ( 16.82%)
Min free-odr4-512 327.00 ( 0.00%) 269.00 ( 17.74%)
Min free-odr4-1024 326.00 ( 0.00%) 271.00 ( 16.87%)
Min total-odr0-1 644.00 ( 0.00%) 494.00 ( 23.29%)
Min total-odr0-2 466.00 ( 0.00%) 356.00 ( 23.61%)
Min total-odr0-4 376.00 ( 0.00%) 291.00 ( 22.61%)
Min total-odr0-8 328.00 ( 0.00%) 251.00 ( 23.48%)
Min total-odr0-16 304.00 ( 0.00%) 234.00 ( 23.03%)
Min total-odr0-32 289.00 ( 0.00%) 224.00 ( 22.49%)
Min total-odr0-64 282.00 ( 0.00%) 218.00 ( 22.70%)
Min total-odr0-128 280.00 ( 0.00%) 216.00 ( 22.86%)
Min total-odr0-256 310.00 ( 0.00%) 243.00 ( 21.61%)
Min total-odr0-512 329.00 ( 0.00%) 273.00 ( 17.02%)
Min total-odr0-1024 346.00 ( 0.00%) 290.00 ( 16.18%)
Min total-odr0-2048 357.00 ( 0.00%) 304.00 ( 14.85%)
Min total-odr0-4096 367.00 ( 0.00%) 315.00 ( 14.17%)
Min total-odr0-8192 372.00 ( 0.00%) 317.00 ( 14.78%)
Min total-odr0-16384 373.00 ( 0.00%) 319.00 ( 14.48%)
Min total-odr1-1 838.00 ( 0.00%) 739.00 ( 11.81%)
Min total-odr1-2 619.00 ( 0.00%) 543.00 ( 12.28%)
Min total-odr1-4 495.00 ( 0.00%) 433.00 ( 12.53%)
Min total-odr1-8 440.00 ( 0.00%) 382.00 ( 13.18%)
Min total-odr1-16 407.00 ( 0.00%) 353.00 ( 13.27%)
Min total-odr1-32 398.00 ( 0.00%) 341.00 ( 14.32%)
Min total-odr1-64 389.00 ( 0.00%) 336.00 ( 13.62%)
Min total-odr1-128 392.00 ( 0.00%) 341.00 ( 13.01%)
Min total-odr1-256 391.00 ( 0.00%) 344.00 ( 12.02%)
Min total-odr1-512 396.00 ( 0.00%) 349.00 ( 11.87%)
Min total-odr1-1024 402.00 ( 0.00%) 357.00 ( 11.19%)
Min total-odr1-2048 409.00 ( 0.00%) 364.00 ( 11.00%)
Min total-odr1-4096 414.00 ( 0.00%) 366.00 ( 11.59%)
Min total-odr1-8192 417.00 ( 0.00%) 369.00 ( 11.51%)
Min total-odr2-1 921.00 ( 0.00%) 1210.00 (-31.38%)
Min total-odr2-2 682.00 ( 0.00%) 576.00 ( 15.54%)
Min total-odr2-4 547.00 ( 0.00%) 616.00 (-12.61%)
Min total-odr2-8 483.00 ( 0.00%) 406.00 ( 15.94%)
Min total-odr2-16 449.00 ( 0.00%) 376.00 ( 16.26%)
Min total-odr2-32 437.00 ( 0.00%) 366.00 ( 16.25%)
Min total-odr2-64 431.00 ( 0.00%) 363.00 ( 15.78%)
Min total-odr2-128 433.00 ( 0.00%) 365.00 ( 15.70%)
Min total-odr2-256 434.00 ( 0.00%) 371.00 ( 14.52%)
Min total-odr2-512 446.00 ( 0.00%) 379.00 ( 15.02%)
Min total-odr2-1024 461.00 ( 0.00%) 392.00 ( 14.97%)
Min total-odr2-2048 464.00 ( 0.00%) 395.00 ( 14.87%)
Min total-odr2-4096 465.00 ( 0.00%) 398.00 ( 14.41%)
Min total-odr3-1 1060.00 ( 0.00%) 872.00 ( 17.74%)
Min total-odr3-2 778.00 ( 0.00%) 633.00 ( 18.64%)
Min total-odr3-4 632.00 ( 0.00%) 510.00 ( 19.30%)
Min total-odr3-8 565.00 ( 0.00%) 452.00 ( 20.00%)
Min total-odr3-16 524.00 ( 0.00%) 424.00 ( 19.08%)
Min total-odr3-32 514.00 ( 0.00%) 415.00 ( 19.26%)
Min total-odr3-64 515.00 ( 0.00%) 419.00 ( 18.64%)
Min total-odr3-128 523.00 ( 0.00%) 426.00 ( 18.55%)
Min total-odr3-256 541.00 ( 0.00%) 438.00 ( 19.04%)
Min total-odr3-512 559.00 ( 0.00%) 451.00 ( 19.32%)
Min total-odr3-1024 561.00 ( 0.00%) 456.00 ( 18.72%)
Min total-odr3-2048 562.00 ( 0.00%) 459.00 ( 18.33%)
Min total-odr4-1 1328.00 ( 0.00%) 1011.00 ( 23.87%)
Min total-odr4-2 997.00 ( 0.00%) 747.00 ( 25.08%)
Min total-odr4-4 813.00 ( 0.00%) 615.00 ( 24.35%)
Min total-odr4-8 721.00 ( 0.00%) 550.00 ( 23.72%)
Min total-odr4-16 689.00 ( 0.00%) 529.00 ( 23.22%)
Min total-odr4-32 683.00 ( 0.00%) 528.00 ( 22.69%)
Min total-odr4-64 692.00 ( 0.00%) 531.00 ( 23.27%)
Min total-odr4-128 713.00 ( 0.00%) 556.00 ( 22.02%)
Min total-odr4-256 738.00 ( 0.00%) 586.00 ( 20.60%)
Min total-odr4-512 753.00 ( 0.00%) 595.00 ( 20.98%)
Min total-odr4-1024 752.00 ( 0.00%) 600.00 ( 20.21%)
This patch (of 27):
order-0 pages by definition cannot be compound so avoid the check in the
fast path for those pages.
[akpm@linux-foundation.org: use unlikely(order) in free_pages_prepare(), per Vlastimil]
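A minimal sketch of the idea in free_pages_prepare() (illustrative, not
the exact diff): an order-0 page cannot be compound, so the
PageCompound() work is only done when order is non-zero:

    if (unlikely(order)) {
            bool compound = PageCompound(page);

            VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
            /* compound/tail page teardown only needed for high orders */
    }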
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Fri, 20 May 2016 00:13:15 +0000 (17:13 -0700)]
mm, oom_reaper: clear TIF_MEMDIE for all tasks queued for oom_reaper
Right now the oom reaper will clear TIF_MEMDIE only for tasks which were
successfully reaped. This is the safest option because we know that
such an oom victim would only block forward progress of the oom killer
without a good reason because it is highly unlikely it would release
much more memory. Basically most of its memory has been already torn
down.
We can relax this assumption to catch more corner cases though.
The first obvious one is when the oom victim clears its mm and gets
stuck later on. oom_reaper would back off on find_lock_task_mm returning
NULL. We can safely try to clear TIF_MEMDIE in this case because such a
task would be ignored by the oom killer anyway. Most of the time the
flag would already have been cleared by that point anyway.
The less obvious one is when the oom reaper fails due to mmap_sem
contention. Even if we clear TIF_MEMDIE for this task then it is not
very likely that we would select another task too easily, because we
haven't reaped the last victim and so it would still be the #1
candidate. There is a rare race condition possible when the current
victim terminates before the next select_bad_process, but considering
that oom_reap_task had retried several times before giving up, this
sounds like a borderline case.
After this patch we should have a guarantee that the OOM killer will not
be blocked for an unbounded amount of time in most cases.
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Fri, 20 May 2016 00:13:12 +0000 (17:13 -0700)]
oom, oom_reaper: try to reap tasks which skip regular OOM killer path
If either the current task is already killed or PF_EXITING, or a selected
task is PF_EXITING, then the oom killer is suppressed and so is the oom
reaper. This patch adds try_oom_reaper which checks the given task and
queues it for the oom reaper if that is safe to do, meaning that the
task doesn't share the mm with an alive process.
This might help to release the memory pressure while the task tries to
exit.
[akpm@linux-foundation.org: fix nommu build]
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Fri, 20 May 2016 00:13:09 +0000 (17:13 -0700)]
mm, oom: move GFP_NOFS check to out_of_memory
__alloc_pages_may_oom is the central place to decide when the
out_of_memory should be invoked. This is a good approach for most
checks there because they are page allocator specific and the allocation
fails right after for all of them.
The notable exception is the GFP_NOFS context, which fakes
did_some_progress and keeps the page allocator looping even though there
couldn't have been any progress from the OOM killer. This patch doesn't
change this behavior because we are not ready to allow those allocation
requests to fail yet (and maybe we will face the reality that we will
never manage to safely fail these requests). Instead, the __GFP_FS check
is moved down to out_of_memory and prevents OOM victim selection there.
There are two reasons for that:
- OOM notifiers might release some memory even from this context
as none of the registered notifier seems to be FS related
- this might help a dying thread to get an access to memory
reserves and move on which will make the behavior more
consistent with the case when the task gets killed from a
different context.
Keep a comment in __alloc_pages_may_oom to make sure we do not forget
how GFP_NOFS is special and that we really want to do something about
it.
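Conceptually the bail-out becomes something along these lines early in
out_of_memory(), after the OOM notifiers have had a chance to run (a
sketch of the idea, not the exact diff):

    /*
     * The OOM killer does not compensate for IO-less reclaim, so do not
     * select a victim if the context cannot do any fs-based reclaim.
     */
    if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
            return true;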
Note to the current oom_notifier users:
The observable difference for you is that oom notifiers cannot depend on
any fs locks because we could deadlock. Not that this would be allowed
today, because that would just lock up the machine in most cases and rule
out the OOM killer along the way. Another difference is that callbacks
might be invoked sooner now because GFP_NOFS is a weaker reclaim context
and so there could be reclaimable memory which is just not reachable now.
That would require GFP_NOFS-only loads, which are really rare; more
importantly, the observable result would be dropping of reconstructible
objects and a potential performance drop, which is not such a big deal
when we are struggling to fulfill other important allocation requests.
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Raushaniya Maksudova <rmaksudova@parallels.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vitaly Kuznetsov [Fri, 20 May 2016 00:13:06 +0000 (17:13 -0700)]
memory_hotplug: introduce memhp_default_state= command line parameter
CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE specifies the default value for the
memory hotplug onlining policy. Add a command line parameter to make it
possible to override the default. It may come handy for debug and
testing purposes.
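A sketch of what the parameter parsing can look like (the helper name is
an illustrative assumption; memhp_auto_online is the existing policy
flag):

    static int __init cmdline_parse_memhp_default_state(char *str)
    {
            if (!strcmp(str, "online"))
                    memhp_auto_online = true;
            else if (!strcmp(str, "offline"))
                    memhp_auto_online = false;
            return 1;
    }
    __setup("memhp_default_state=", cmdline_parse_memhp_default_state);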
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Lennart Poettering <lennart@poettering.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vitaly Kuznetsov [Fri, 20 May 2016 00:13:03 +0000 (17:13 -0700)]
memory_hotplug: introduce CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE
This patchset continues the work I started with commit 31bc3858ea3e
("memory-hotplug: add automatic onlining policy for the newly added
memory").
Initially I was going to stop there and bring the policy setting logic
to userspace. I met two issues on this way:
1) It is possible to have memory hotplugged at boot (e.g. with QEMU).
These blocks stay offlined if we turn the onlining policy on by
userspace.
2) My attempt to bring this policy setting to systemd failed, systemd
maintainers suggested changing the default in the kernel or ... using
tmpfiles.d to alter the policy (which looks like a hack to me):
https://github.com/systemd/systemd/pull/2938
Here I suggest adding a config option to set the default value for the
policy and a kernel command line parameter to override it.
This patch (of 2):
Introduce a config option to set the default value for the memory hotplug
onlining policy (/sys/devices/system/memory/auto_online_blocks). The
reasons one would want to turn this option on are to have early onlining
for hotpluggable memory available at boot and to not require any
userspace action to make memory hotplug work.
[akpm@linux-foundation.org: tweak Kconfig text]
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Lennart Poettering <lennart@poettering.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:13:00 +0000 (17:13 -0700)]
arch: fix has_transparent_hugepage()
I've just discovered that the useful-sounding has_transparent_hugepage()
is actually an architecture-dependent minefield: on some arches it only
builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
not, but on some of those (arm and arm64) it then gives the wrong
answer; and on mips alone it's marked __init, which would crash if
called later (but so far it has not been called later).
Straighten this out: make it available to all configs, with a sensible
default in asm-generic/pgtable.h, removing its definitions from those
arches (arc, arm, arm64, sparc, tile) which are served by the default,
adding #define has_transparent_hugepage has_transparent_hugepage to
those (mips, powerpc, s390, x86) which need to override the default at
runtime, and removing the __init from mips (but maybe that kind of code
should be avoided after init: set a static variable the first time it's
called).
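A sketch of the resulting arrangement: asm-generic/pgtable.h provides the
compile-time default, and an architecture that needs a runtime answer
supplies its own version and marks the override:

    /* asm-generic/pgtable.h: sensible compile-time default */
    #ifndef has_transparent_hugepage
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    #define has_transparent_hugepage() 1
    #else
    #define has_transparent_hugepage() 0
    #endif
    #endif

    /* an arch with a runtime check defines its own version and then: */
    #define has_transparent_hugepage has_transparent_hugepage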
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [arch/s390]
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:57 +0000 (17:12 -0700)]
huge pagecache: extend mremap pmd rmap lockout to files
Whatever huge pagecache implementation we go with, file rmap locking
must be added to anon rmap locking, when mremap's move_page_tables()
finds a pmd_trans_huge pmd entry: a simple change, let's do it now.
Factor out take_rmap_locks() and drop_rmap_locks() to handle the locking
for move_ptes() and move_page_tables(), and delete the VM_BUG_ON_VMA
which rejected vm_file and required anon_vma.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:54 +0000 (17:12 -0700)]
huge mm: move_huge_pmd does not need new_vma
Remove move_huge_pmd()'s redundant new_vma arg: all it was used for was
a VM_NOHUGEPAGE check on new_vma flags, but the new_vma is cloned from
the old vma, so a trans_huge_pmd in the new_vma will be as acceptable as
it was in the old vma, alignment and size permitting.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:50 +0000 (17:12 -0700)]
mm: /proc/sys/vm/stat_refresh to force vmstat update
Provide /proc/sys/vm/stat_refresh to force an immediate update of
per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
before checking counts when testing. Originally added to work around a
bug which left counts stranded indefinitely on a cpu going idle (an
inaccuracy magnified when small below-batch numbers represent "huge"
amounts of memory), but I believe that bug is now fixed: nonetheless,
this is still a useful knob.
Its schedule_on_each_cpu() is probably too expensive just to fold into
reading /proc/meminfo itself: give this mode 0600 to prevent abuse.
Allow a write or a read to do the same: nothing to read, but "grep -h
Shmem /proc/sys/vm/stat_refresh /proc/meminfo" is convenient. Oh, and
since global_page_state() itself is careful to disguise any underflow as
0, hack in an "Invalid argument" and pr_warn() if a counter is negative
after the refresh - this helped to fix a misaccounting of
NR_ISOLATED_FILE in my migration code.
But on recent kernels, I find that NR_ALLOC_BATCH and NR_PAGES_SCANNED
often go negative some of the time. I have not yet worked out why, but
have no evidence that it's actually harmful. Punt for the moment by
just ignoring the anomaly on those.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andres Lagar-Cavilla [Fri, 20 May 2016 00:12:47 +0000 (17:12 -0700)]
tmpfs: mem_cgroup charge fault to vm_mm not current mm
Although shmem_fault() has been careful to count a major fault to vm_mm,
shmem_getpage_gfp() has been careless in charging a remote access fault
to current->mm owner's memcg instead of to vma->vm_mm owner's memcg:
that is inconsistent with all the mem_cgroup charging on remote access
faults in mm/memory.c.
Fix it by passing fault_mm along with fault_type to
shmem_getpage_gfp(); but in that case, now knowing the right mm, it's
better for it to handle the PGMAJFAULT updates itself.
And let's keep this clutter out of most callers' way: change the common
shmem_getpage() wrapper to hide fault_mm and fault_type as well as gfp.
Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:44 +0000 (17:12 -0700)]
tmpfs: preliminary minor tidyups
Make a few cleanups in mm/shmem.c, before going on to complicate it.
shmem_alloc_page() will become more complicated: we can't afford to
have that complication duplicated between a CONFIG_NUMA version and a
!CONFIG_NUMA version, so rearrange the #ifdef'ery there to yield a
single shmem_swapin() and a single shmem_alloc_page().
Yes, it's a shame to inflict the horrid pseudo-vma on non-NUMA
configurations, but eliminating it is a larger cleanup: I have an
alloc_pages_mpol() patchset not yet ready - mpol handling is subtle and
bug-prone, and changed yet again since my last version.
Move __SetPageLocked, __SetPageSwapBacked from shmem_getpage_gfp() to
shmem_alloc_page(): that SwapBacked flag will be useful in future, to
help to distinguish different cases appropriately.
And the SGP_DIRTY variant of SGP_CACHE is hard to understand and of
little use (IIRC it dates back to when shmem_getpage() returned the page
unlocked): kill it and do the necessary in shmem_file_read_iter().
But an arm64 build then complained that info may be uninitialized (where
shmem_getpage_gfp() deletes a freshly alloced page beyond eof), and
advancing to an "sgp <= SGP_CACHE" test jogged it back to reality.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:41 +0000 (17:12 -0700)]
mm: use __SetPageSwapBacked and dont ClearPageSwapBacked
v3.16 commit 07a427884348 ("mm: shmem: avoid atomic operation during
shmem_getpage_gfp") rightly replaced one instance of SetPageSwapBacked
by __SetPageSwapBacked, pointing out that the newly allocated page is
not yet visible to other users (except speculative get_page_unless_zero-
ers, who may not update page flags before their further checks).
That was part of a series in which Mel was focused on tmpfs profiles:
but almost all SetPageSwapBacked uses can be so optimized, with the same
justification.
Remove ClearPageSwapBacked from __read_swap_cache_async() error path:
it's not an error to free a page with PG_swapbacked set.
Follow a convention of __SetPageLocked, __SetPageSwapBacked instead of
doing it differently in different places; but that's for tidiness - if
the ordering actually mattered, we should not be using the __variants.
There's probably scope for further __SetPageFlags in other places, but
SwapBacked is the one I'm interested in at the moment.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:38 +0000 (17:12 -0700)]
mm: update_lru_size do the __mod_zone_page_state
Konstantin Khlebnikov pointed out (nearly four years ago, when lumpy
reclaim was removed) that lru_size can be updated by -nr_taken once per
call to isolate_lru_pages(), instead of page by page.
Update it inside isolate_lru_pages(), or at its two callsites? I chose
to update it at the callsites, rearranging and grouping the updates by
nr_taken and nr_scanned together in both.
With one exception, mem_cgroup_update_lru_size(,lru,) is then used where
__mod_zone_page_state(,NR_LRU_BASE+lru,) is used; and we shall be adding
some more calls in a future commit. Make the code a little smaller and
simpler by incorporating stat update in lru_size update.
The exception was move_active_pages_to_lru(), which aggregated the
pgmoved stat update separately from the individual lru_size updates; but
I still think this a simplification worth making.
However, the __mod_zone_page_state is not peculiar to mem_cgroups: so
better use the name update_lru_size, which calls
mem_cgroup_update_lru_size when CONFIG_MEMCG is enabled.
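A sketch of the combined helper (illustrative of the intent rather than
the exact code):

    static __always_inline void update_lru_size(struct lruvec *lruvec,
                                    enum lru_list lru, int nr_pages)
    {
            __mod_zone_page_state(lruvec_zone(lruvec),
                                  NR_LRU_BASE + lru, nr_pages);
    #ifdef CONFIG_MEMCG
            mem_cgroup_update_lru_size(lruvec, lru, nr_pages);
    #endif
    }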
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 20 May 2016 00:12:35 +0000 (17:12 -0700)]
mm: update_lru_size warn and reset bad lru_size
Though debug kernels have a VM_BUG_ON to help protect from misaccounting
lru_size, non-debug kernels are liable to wrap it around: and then the
vast unsigned long size draws page reclaim into a loop of repeatedly
doing nothing on an empty list, without even a cond_resched().
That soft lockup looks confusingly like an over-busy reclaim scenario,
with lots of contention on the lru_lock in shrink_inactive_list(): yet
has a totally different origin.
Help differentiate with a custom warning in
mem_cgroup_update_lru_size(), even in non-debug kernels; and reset the
size to avoid the lockup. But the particular bug which suggested this
change was mine alone, and since fixed.
Make it a WARN_ONCE: the first occurrence is the most informative, a
flurry may follow, yet even when rate-limited little more is learnt.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Konstantin Khlebnikov [Fri, 20 May 2016 00:12:32 +0000 (17:12 -0700)]
mm/mmap: kill hook arch_rebalance_pgtables()
Nobody uses it.
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:29 +0000 (17:12 -0700)]
mm/vmstat: make node_page_state() handles all zones by itself
node_page_state() manually adds statistics per each zone and returns
total value for all zones. Whenever we add a new zone, we need to
consider this function and it's really troublesome. Make it handle all
zones by itself.
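A sketch of the idea: sum zone_page_state() over every zone of the node
instead of naming each zone explicitly (illustrative, not the exact
diff):

    unsigned long node_page_state(int node, enum zone_stat_item item)
    {
            struct zone *zones = NODE_DATA(node)->node_zones;
            unsigned long sum = 0;
            int i;

            /* covers every zone of the node, including ones added later */
            for (i = 0; i < MAX_NR_ZONES; i++)
                    sum += zone_page_state(zones + i, item);

            return sum;
    }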
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:26 +0000 (17:12 -0700)]
mm/highmem: make nr_free_highpages() handles all highmem zones by itself
nr_free_highpages() manually adds statistics for each highmem zone and
returns a total value for them. Whenever we add a new highmem zone, we
need to consider this function and it's really troublesome. Make it
handle all highmem zones by itself.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:23 +0000 (17:12 -0700)]
mm/page_alloc: correct highmem memory statistics
ZONE_MOVABLE could be treated as highmem so we need to consider it for
accurate statistics. And, in following patches, ZONE_CMA will be
introduced and it can be treated as highmem, too. So, instead of
manually adding the stat of ZONE_MOVABLE, loop over all zones, check
whether the zone is highmem or not, and add the stat of the zones which
can be treated as highmem.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:20 +0000 (17:12 -0700)]
mm/writeback: correct dirty page calculation for highmem
ZONE_MOVABLE could be treated as highmem so we need to consider it for
an accurate calculation of dirty pages. And, in following patches,
ZONE_CMA will be introduced and it can be treated as highmem, too. So,
instead of manually adding the stat of ZONE_MOVABLE, loop over all zones,
check whether the zone is highmem or not, and add the stat of the zones
which can be treated as highmem.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:16 +0000 (17:12 -0700)]
power: add zone range overlapping check
There is a system whose nodes' pfns overlap as follows:
-----pfn-------->
N0 N1 N2 N0 N1 N2
Therefore, we need to take care of this overlapping when iterating over a
pfn range.
mark_free_pages() iterates over the requested zone's pfn range and first
clears the whole range's bitmap. It then marks the free pages in the
zone in the bitmap. If there is an overlapping zone, the clearing above
could wipe out a previously marked bit, and a later reference to this
bitmap would cause a problem. To prevent that, this patch adds a zone
check to mark_free_pages().
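The check is conceptually a per-page zone comparison inside the pfn loop,
something like (sketch):

    for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
            if (!pfn_valid(pfn))
                    continue;

            page = pfn_to_page(pfn);

            /* node/zone pfn ranges may overlap: skip foreign pages */
            if (page_zone(page) != zone)
                    continue;

            /* ... existing bitmap handling ... */
    }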
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:13 +0000 (17:12 -0700)]
mm/page_owner: add zone range overlapping check
There is a system whose nodes' pfns overlap as follows:
-----pfn-------->
N0 N1 N2 N0 N1 N2
Therefore, we need to take care of this overlapping when iterating over a
pfn range.
There is one place in page_owner.c that iterates over a pfn range and
doesn't consider this overlapping. Add the check there.
Without this patch, the above system could overcount the number of early
allocated pages before page_owner is activated.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:10 +0000 (17:12 -0700)]
mm/vmstat: add zone range overlapping check
There is a system whose nodes' pfns overlap as follows:
-----pfn-------->
N0 N1 N2 N0 N1 N2
Therefore, we need to take care of this overlapping when iterating over a
pfn range.
There are two places in vmstat.c that iterate over a pfn range and don't
consider this overlapping. Add the check there.
Without this patch, the above system could overcount the number of
pageblocks in a zone.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:06 +0000 (17:12 -0700)]
mm/memory_hotplug: add comment to some functions related to memory hotplug
__offline_isolated_pages() and test_pages_isolated() are used by memory
hotplug. These functions require that the range is in a single zone but
there is no code enforcing this because memory hotplug checks it before
calling these functions. To avoid confusing future users of these
functions, this patch adds comments to them.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:12:03 +0000 (17:12 -0700)]
mm/hugetlb: add same zone check in pfn_range_valid_gigantic()
This patchset deals with some problematic sites that iterate pfn ranges.
There is a system whose nodes' pfns overlap as follows:
-----pfn-------->
N0 N1 N2 N0 N1 N2
Therefore, we need to take care of this overlapping when iterating over a
pfn range.
I audited many iterating sites that use pfn_valid(), pfn_valid_within(),
zone_start_pfn and so on, and the others look safe to me. This is a
preparation step for a new CMA implementation, ZONE_CMA
(https://lkml.org/lkml/2015/2/12/95), because it would be easily
overlapped with other zones. But, the zone overlap check is also needed
for the general case so I send it separately.
This patch (of 5):
alloc_gigantic_page() uses alloc_contig_range() and this requires that
the requested range is in a single zone. To satisfy this requirement,
add this check to pfn_range_valid_gigantic().
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 20 May 2016 00:12:00 +0000 (17:12 -0700)]
mm: uninline page_mapped()
It's huge. Uninlining it saves 206 bytes per callsite. Shaves 4924
bytes from the x86_64 allmodconfig vmlinux.
[akpm@linux-foundation.org: coding-style fixes]
Cc: Steve Capper <steve.capper@arm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Chanho Min [Fri, 20 May 2016 00:11:57 +0000 (17:11 -0700)]
mm/highmem: simplify is_highmem()
is_highmem() can be simplified by use of is_highmem_idx(). This patch
removes redundant code and will make it easier to maintain if the zone
policy is changed or a new zone is added.
(akpm: saves me 25 bytes of text per is_highmem() callsite)
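The simplified form is essentially (sketch):

    static inline int is_highmem(struct zone *zone)
    {
    #ifdef CONFIG_HIGHMEM
            return is_highmem_idx(zone_idx(zone));
    #else
            return 0;
    #endif
    }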
Signed-off-by: Chanho Min <chanho.min@lge.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Fri, 20 May 2016 00:11:55 +0000 (17:11 -0700)]
mm, compaction: skip blocks where isolation fails in async direct compaction
The goal of direct compaction is to quickly make a high-order page
available for the pending allocation. Within an aligned block of pages
of desired order, a single allocated page that cannot be isolated for
migration means that the block cannot fully merge to a buddy page that
would satisfy the allocation request. Therefore we can reduce the
allocation stall by skipping the rest of the block immediately on
isolation failure. For async compaction, this also means a higher
chance of succeeding until it detects contention.
We however shouldn't completely sacrifice the second objective of
compaction, which is to reduce overall long-term memory fragmentation.
As a compromise, perform the eager skipping only in direct async
compaction, while sync compaction (including kcompactd) remains
thorough.
Testing was done using stress-highalloc from mmtests, configured for
order-4 GFP_KERNEL allocations:
4.6-rc1 4.6-rc1
before after
Success 1 Min 24.00 ( 0.00%) 27.00 (-12.50%)
Success 1 Mean 30.20 ( 0.00%) 31.60 ( -4.64%)
Success 1 Max 37.00 ( 0.00%) 35.00 ( 5.41%)
Success 2 Min 42.00 ( 0.00%) 32.00 ( 23.81%)
Success 2 Mean 44.00 ( 0.00%) 44.80 ( -1.82%)
Success 2 Max 48.00 ( 0.00%) 52.00 ( -8.33%)
Success 3 Min 91.00 ( 0.00%) 92.00 ( -1.10%)
Success 3 Mean 92.20 ( 0.00%) 92.80 ( -0.65%)
Success 3 Max 94.00 ( 0.00%) 93.00 ( 1.06%)
We can see that success rates are unaffected by the skipping.
4.6-rc1 4.6-rc1
before after
User 2587.42 2566.53
System 482.89 471.20
Elapsed 1395.68 1382.00
Times are not so useful a metric for this benchmark as the main portion
is the interfering kernel builds, but the results do hint at reduced
system times.
4.6-rc1 4.6-rc1
before after
Direct pages scanned 163614 159608
Kswapd pages scanned 2070139 2078790
Kswapd pages reclaimed 2061707 2069757
Direct pages reclaimed 163354 159505
Reduced direct reclaim was unintended, but could be explained by more
successful first attempt at (async) direct compaction, which is
attempted before the first reclaim attempt in __alloc_pages_slowpath().
Compaction stalls 33052 39853
Compaction success 12121 19773
Compaction failures 20931 20079
Compaction is indeed more successful, and thus less likely to get
deferred, so there are also more direct compaction stalls.
Page migrate success 3781876 3326819
Page migrate failure 45817 41774
Compaction pages isolated 7868232 6941457
Compaction migrate scanned 168160492 127269354
Compaction migrate prescanned 0 0
Compaction free scanned 2522142582 2326342620
Compaction free direct alloc 0 0
Compaction free dir. all. miss 0 0
Compaction cost 5252 4476
The patch reduces migration scanned pages by 25% thanks to the eager
skipping.
[hughd@google.com: prevent nr_isolated_* from going negative]
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Fri, 20 May 2016 00:11:51 +0000 (17:11 -0700)]
mm, compaction: reduce spurious pcplist drains
Compaction drains the local pcplists each time migration scanner moves
away from a cc->order aligned block where it isolated pages for
migration, so that the pages freed by migrations can merge into higher
orders.
The detection is currently coarser than it could be. The
cc->last_migrated_pfn variable should track the lowest pfn that was
isolated for migration. But it is set to the pfn where
isolate_migratepages_block() starts scanning, which is typically the
first pfn of the pageblock. There, the scanner might fail to isolate
several order-aligned blocks, and then isolate COMPACT_CLUSTER_MAX in
another block. This would cause the pcplists drain to be performed,
although the scanner didn't yet finish the block where it isolated from.
This patch thus makes cc->last_migrated_pfn handling more accurate by
setting it to the pfn of an actually isolated page in
isolate_migratepages_block(). Although practical effects of this patch
are likely low, it arguably makes the intent of the code more obvious.
Also the next patch will make async direct compaction skip blocks more
aggressively, and draining pcplists due to skipped blocks is wasteful.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Fri, 20 May 2016 00:11:48 +0000 (17:11 -0700)]
mm, compaction: wrap calculating first and last pfn of pageblock
Compaction code has accumulated numerous instances of manual
calculations of the first (inclusive) and last (exclusive) pfn of a
pageblock (or a smaller block of given order), given a pfn within the
pageblock.
Wrap these calculations by introducing pageblock_start_pfn(pfn) and
pageblock_end_pfn(pfn) macros.
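The wrappers are simple alignment helpers, roughly (sketch):

    #define pageblock_start_pfn(pfn)  round_down(pfn, pageblock_nr_pages)
    #define pageblock_end_pfn(pfn)    ALIGN((pfn) + 1, pageblock_nr_pages)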
[vbabka@suse.cz: fix crash in get_pfnblock_flags_mask() from isolate_freepages():]
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Konstantin Khlebnikov [Fri, 20 May 2016 00:11:46 +0000 (17:11 -0700)]
mm/rmap: replace BUG_ON(anon_vma->degree) with VM_WARN_ON
This check effectively catches anon vma hierarchy inconsistencies and
some vma corruptions. It was effective for catching corner cases in the
anon vma reusing logic. For now this code seems stable, so the check can
be hidden under CONFIG_DEBUG_VM and replaced with a WARN because it's not
so fatal.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Suggested-by: Vasily Averin <vvs@virtuozzo.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 20 May 2016 00:11:43 +0000 (17:11 -0700)]
mm/mempolicy.c:offset_il_node() document and clarify
This code was pretty obscure and was relying upon obscure side-effects
of next_node(-1, ...) and was relying upon NUMA_NO_NODE being equal to
-1.
Clean that all up and document the function's intent.
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 20 May 2016 00:11:40 +0000 (17:11 -0700)]
mm/hugetlb.c: use first_memory_node
Instead of open-coding it.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Li Zhang [Fri, 20 May 2016 00:11:37 +0000 (17:11 -0700)]
mm/page_alloc: Remove useless parameter of __free_pages_boot_core
__free_pages_boot_core has parameter pfn which is not used at all.
Remove it.
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Reviewed-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko [Fri, 20 May 2016 00:11:34 +0000 (17:11 -0700)]
mm/memcontrol.c:mem_cgroup_select_victim_node(): clarify comment
> The comment seems to have not much to do with the code?
I guess the comment tries to say that the code path is triggered when we
charge the page, which happens _before_ it is added to the LRU list, and
so last_scanned_node might contain stale data.
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yaowei Bai [Fri, 20 May 2016 00:11:32 +0000 (17:11 -0700)]
mm/mempolicy.c: vma_migratable() can return bool
Make vma_migratable() return bool due to this particular function only
using either one or zero as its return value.
Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yaowei Bai [Fri, 20 May 2016 00:11:29 +0000 (17:11 -0700)]
mm/vmalloc.c: is_vmalloc_addr() can return bool
Make is_vmalloc_addr() return bool to improve readability due to this
particular function only using either one or zero as its return value.
Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yaowei Bai [Fri, 20 May 2016 00:11:26 +0000 (17:11 -0700)]
mm/memory_hotplug: is_mem_section_removable() can return bool
Make is_mem_section_removable() return bool to improve readability due
to this particular function only using either one or zero as its return
value.
Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yaowei Bai [Fri, 20 May 2016 00:11:23 +0000 (17:11 -0700)]
mm/hugetlb: is_vm_hugetlb_page() can return bool
Make is_vm_hugetlb_page() return bool to improve readability due to this
particular function only using either one or zero as its return value.
Signed-off-by: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:20 +0000 (17:11 -0700)]
x86: mm: use hugetlb_bad_size()
Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
hugepage size is found.
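The shape of the change is roughly the following (a simplified sketch of the
x86 setup_hugepagesz(), not the exact hunk):
	static __init int setup_hugepagesz(char *opt)
	{
		unsigned long ps = memparse(opt, &opt);

		if (ps == PMD_SIZE) {
			hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
		} else if (ps == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES)) {
			hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
		} else {
			hugetlb_bad_size();	/* new: remember the unsupported size */
			pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20);
			return 0;
		}
		return 1;
	}
	__setup("hugepagesz=", setup_hugepagesz);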
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:17 +0000 (17:11 -0700)]
tile: mm: use hugetlb_bad_size()
Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
hugepage size is found.
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:14 +0000 (17:11 -0700)]
powerpc: mm: use hugetlb_bad_size()
Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
hugepage size is found.
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:11 +0000 (17:11 -0700)]
metag: mm: use hugetlb_bad_size()
Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
hugepage size is found.
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:08 +0000 (17:11 -0700)]
arm64: mm: use hugetlb_bad_size()
Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
hugepage size is found.
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vaishali Thakkar [Fri, 20 May 2016 00:11:04 +0000 (17:11 -0700)]
mm/hugetlb: introduce hugetlb_bad_size()
When an unsupported hugepage size is specified, 'hugepagesz=' and
'hugepages=' should be ignored during command line parsing until a
supported hugepage size is found. But currently an incorrect number of
hugepages is allocated when an unsupported size is specified, because
parsing fails to ignore the 'hugepages=' option.
Test case:
Note that this is specific to x86 architecture.
Boot the kernel with command line option 'hugepagesz=256M hugepages=X'.
After boot, dmesg output shows that X number of hugepages of the size 2M
is pre-allocated instead of 0.
So, to handle such command line options, introduce a new routine,
hugetlb_bad_size. The routine hugetlb_bad_size sets the global variable
parsed_valid_hugepagesz. We use parsed_valid_hugepagesz to record that an
unsupported hugepage size was found, so that the 'hugepages=' parameters
that follow can be ignored; the variable is reset once a supported hugepage
size is found.
The routine hugetlb_bad_size can be called while parsing the 'hugepagesz='
parameter in architecture-specific code.
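In sketch form (close to, but not necessarily identical with, the final
mm/hugetlb.c code):
	static bool __initdata parsed_valid_hugepagesz = true;

	void __init hugetlb_bad_size(void)
	{
		/* an unsupported hugepagesz= was seen; ignore following hugepages= */
		parsed_valid_hugepagesz = false;
	}

	static int __init hugetlb_nrpages_setup(char *s)
	{
		if (!parsed_valid_hugepagesz) {
			pr_warn("hugepages = %s preceded by an unsupported hugepagesz, ignoring\n", s);
			parsed_valid_hugepagesz = true;
			return 1;
		}
		/* ... existing parsing of the hugepages= value ... */
		return 1;
	}
	__setup("hugepages=", hugetlb_nrpages_setup);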
Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mike Kravetz [Fri, 20 May 2016 00:11:01 +0000 (17:11 -0700)]
mm/hugetlb: optimize minimum size (min_size) accounting
It was observed that minimum size accounting associated with the
hugetlbfs min_size mount option may not perform optimally and as
expected. As huge pages/reservations are released from the filesystem
and given back to the global pools, they are reserved for subsequent
filesystem use as long as the subpool reserved count is less than
subpool minimum size. It does not take into account used pages within
the filesystem. The filesystem size limits are not exceeded and this is
technically not a bug. However, better behavior would be to wait for
the number of used pages/reservations associated with the filesystem to
drop below the minimum size before taking reservations to satisfy
minimum size.
An optimization is also made to the hugepage_subpool_get_pages() routine
which is called when pages/reservations are allocated. This does not
change behavior, but simply avoids the accounting if all reservations
have already been taken (subpool reserved count == 0).
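The get_pages optimisation amounts to skipping the minimum-size bookkeeping
once the subpool reserve is exhausted; roughly (field names as in struct
hugepage_subpool; the surrounding locking and max_hpages accounting are
omitted from this sketch):
	/* minimum size accounting: nothing to do once all reserves are used */
	if (spool->min_hpages != -1 && spool->rsv_hpages) {
		if (delta > spool->rsv_hpages) {
			ret = delta - spool->rsv_hpages; /* not covered by reserve */
			spool->rsv_hpages = 0;
		} else {
			ret = 0;	/* reserves cover the whole request */
			spool->rsv_hpages -= delta;
		}
	}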
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 20 May 2016 00:10:58 +0000 (17:10 -0700)]
include/linux/nodemask.h: create next_node_in() helper
Lots of code does
	node = next_node(node, XXX);
	if (node == MAX_NUMNODES)
		node = first_node(XXX);
so create next_node_in() to do this and use it in various places.
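In sketch form, the helper simply folds the wrap-around into one place:
	#define next_node_in(n, src) __next_node_in((n), &(src))

	static int __next_node_in(int node, const nodemask_t *srcp)
	{
		int ret = __next_node(node, srcp);

		if (ret == MAX_NUMNODES)
			ret = __first_node(srcp);
		return ret;
	}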
[mhocko@suse.com: use next_node_in() helper]
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Hui Zhu <zhuhui@xiaomi.com>
Cc: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rasmus Villemoes [Fri, 20 May 2016 00:10:55 +0000 (17:10 -0700)]
include/linux: apply __malloc attribute
Attach the malloc attribute to a few allocation functions. This helps
gcc generate better code by telling it that the return value doesn't
alias any existing pointers (which is even more valuable given the
pessimizations implied by -fno-strict-aliasing).
A simple example of what this allows gcc to do can be seen by looking at
the last part of drm_atomic_helper_plane_reset:
	plane->state = kzalloc(sizeof(*plane->state), GFP_KERNEL);
	if (plane->state) {
		plane->state->plane = plane;
		plane->state->rotation = BIT(DRM_ROTATE_0);
	}
which compiles to
	e8 99 bf d6 ff          callq  ffffffff8116d540 <kmem_cache_alloc_trace>
	48 85 c0                test   %rax,%rax
	48 89 83 40 02 00 00    mov    %rax,0x240(%rbx)
	74 11                   je     ffffffff814015c4 <drm_atomic_helper_plane_reset+0x64>
	48 89 18                mov    %rbx,(%rax)
	48 8b 83 40 02 00 00    mov    0x240(%rbx),%rax    [*]
	c7 40 40 01 00 00 00    movl   $0x1,0x40(%rax)
With this patch applied, the instruction at [*] is elided, since the
store to plane->state->plane is known to not alter the value of
plane->state.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Rasmus Villemoes [Fri, 20 May 2016 00:10:52 +0000 (17:10 -0700)]
compiler.h: add support for malloc attribute
gcc as far back as at least 3.04 documents the function attribute
__malloc__. Add a shorthand for attaching that to a function
declaration. This was also suggested by Andi Kleen way back in 2002
[1], but didn't get applied, perhaps because gcc at that time generated
the exact same code with and without this attribute.
This attribute tells the compiler that the return value (if non-NULL)
can be assumed not to alias any other valid pointers at the time of the
call.
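The shorthand itself is tiny; for gcc it amounts to something like the
following (my_alloc below is just an illustrative user of the annotation):
	#define __malloc	__attribute__((__malloc__))

	/* example: annotate an allocator's declaration */
	void *my_alloc(size_t size) __malloc;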
Please note that the documentation for a range of gcc versions (starting
from around 4.7) contained a somewhat confusing and self-contradicting
text:
The malloc attribute is used to tell the compiler that a function may
be treated as if any non-NULL pointer it returns cannot alias any other
pointer valid when the function returns and *that the memory has
undefined content*. [...] Standard functions with this property include
malloc and *calloc*.
(emphasis mine). The intended meaning has later been clarified [2]:
This tells the compiler that a function is malloc-like, i.e., that the
pointer P returned by the function cannot alias any other pointer valid
when the function returns, and moreover no pointers to valid objects
occur in any storage addressed by P.
What this means is that we can apply the attribute to kmalloc and
friends, and it is ok for the returned memory to have well-defined
contents (__GFP_ZERO). But it is not ok to apply it to kmemdup(), nor
to other functions which both allocate and possibly initialize the
memory with existing pointers. So unless someone is doing something
pretty perverted kstrdup() should also be a fine candidate.
[1] http://thread.gmane.org/gmane.linux.kernel/57172
[2] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56955
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:49 +0000 (17:10 -0700)]
mm: rename _count, field of the struct page, to _refcount
Many developers already know that the reference count field of struct page
is _count and of atomic type. They may try to handle it directly, which
would defeat the purpose of the page reference count tracepoints. To
prevent direct _count modification, this patch renames it to _refcount and
adds a warning message in the code. After that, developers who need to
handle the reference count will see that the field should not be accessed
directly.
[akpm@linux-foundation.org: fix comments, per Vlastimil]
[akpm@linux-foundation.org: Documentation/vm/transhuge.txt too]
[sfr@canb.auug.org.au: sync ethernet driver changes]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Manish Chopra <manish.chopra@qlogic.com>
Cc: Yuval Mintz <yuval.mintz@qlogic.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:46 +0000 (17:10 -0700)]
mm/page_ref: use page_ref helper instead of direct modification of _count
The page_ref manipulation functions were introduced to track page reference
count changes. Use them instead of modifying _count directly.
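The conversions are of this general shape (the wrapper names below are
illustrative; only page_ref_add()/page_ref_count() are the real helpers):
	#include <linux/page_ref.h>

	static inline void grab_extra_refs(struct page *page, int nr)
	{
		/* was: atomic_add(nr, &page->_count); */
		page_ref_add(page, nr);
	}

	static inline bool page_is_unreferenced(struct page *page)
	{
		/* was: return atomic_read(&page->_count) == 0; */
		return page_ref_count(page) == 0;
	}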
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Li Peng [Fri, 20 May 2016 00:10:43 +0000 (17:10 -0700)]
mm/slub.c: fix sysfs filename in comment
/sys/kernel/slab/xx/defrag_ratio should be remote_node_defrag_ratio.
Link: http://lkml.kernel.org/r/1463449242-5366-1-git-send-email-lip@dtdream.com
Signed-off-by: Li Peng <lip@dtdream.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yang Shi [Fri, 20 May 2016 00:10:41 +0000 (17:10 -0700)]
mm: slab: remove ZONE_DMA_FLAG
Now that we have the IS_ENABLED helper to check whether a Kconfig option is
enabled, ZONE_DMA_FLAG is no longer useful.
Also, the use of ZONE_DMA_FLAG in slab looks pointless according to the
comment [1] from Johannes Weiner, so remove it; ORing the passed-in flags
with the cache gfp flags is already done in kmem_getpages().
[1] https://lkml.org/lkml/2014/9/25/553
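With IS_ENABLED() the remaining checks can be written directly against the
Kconfig symbol; something of this shape (illustrative, not the exact
mm/slab.c hunk):
	if (IS_ENABLED(CONFIG_ZONE_DMA) && unlikely(flags & GFP_DMA))
		BUG_ON(!(cachep->allocflags & GFP_DMA));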
Link: http://lkml.kernel.org/r/1462381297-11009-1-git-send-email-yang.shi@linaro.org
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Thomas Garnier [Fri, 20 May 2016 00:10:37 +0000 (17:10 -0700)]
mm: SLAB freelist randomization
Provides an optional config (CONFIG_SLAB_FREELIST_RANDOM) to randomize
the SLAB freelist. The list is randomized during initialization of a
new set of pages. The order on different freelist sizes is pre-computed
at boot for performance. Each kmem_cache has its own randomized
freelist. Before pre-computed lists are available freelists are
generated dynamically. This security feature reduces the predictability
of the kernel SLAB allocator against heap overflows rendering attacks
much less stable.
For example this attack against SLUB (also applicable against SLAB)
would be affected:
https://jon.oberheide.org/blog/2010/09/10/linux-kernel-can-slub-overflow/
Also, since v4.6 the freelist has been moved to the end of the SLAB. It
means a controllable heap is opened to new attacks not yet publicly
discussed. A kernel heap overflow can be transformed into multiple
use-after-frees. This feature makes that type of attack harder too.
To generate entropy, we use get_random_bytes_arch because 0 bits of entropy
are available at that boot stage. In the worst case this function will fall
back to the get_random_bytes sub-API. We also generate a random shift to
offset the pre-computed freelist for each new set of pages.
The config option name is not specific to the SLAB as this approach will
be extended to other allocators like SLUB.
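The heart of the feature is a shuffle of the pre-computed object index
list; a self-contained sketch of that idea (names are illustrative, and
prandom_u32_state() stands in for the boot-time entropy described above):
	/* Fisher-Yates shuffle of a pre-computed freelist index list */
	static void shuffle_freelist_idx(unsigned int *list, unsigned int count,
					 struct rnd_state *state)
	{
		unsigned int i, j, tmp;

		if (count < 2)
			return;

		for (i = count - 1; i > 0; i--) {
			j = prandom_u32_state(state) % (i + 1);
			tmp = list[i];
			list[i] = list[j];
			list[j] = tmp;
		}
	}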
Performance results highlighted no major changes:
Hackbench (running 90 10 times):
Before average: 0.0698
After average: 0.0663 (-5.01%)
slab_test, 1 run on boot. A difference is only seen on the 2048-byte size
test, which is the worst-case scenario covered by freelist randomization.
New slab pages are constantly being created over the 10000 allocations.
Variance should be mainly due to getting new pages every few allocations.
Before:
Single thread testing
=====================
1. Kmalloc: Repeatedly allocate then free test
10000 times kmalloc(8) -> 99 cycles kfree -> 112 cycles
10000 times kmalloc(16) -> 109 cycles kfree -> 140 cycles
10000 times kmalloc(32) -> 129 cycles kfree -> 137 cycles
10000 times kmalloc(64) -> 141 cycles kfree -> 141 cycles
10000 times kmalloc(128) -> 152 cycles kfree -> 148 cycles
10000 times kmalloc(256) -> 195 cycles kfree -> 167 cycles
10000 times kmalloc(512) -> 257 cycles kfree -> 199 cycles
10000 times kmalloc(1024) -> 393 cycles kfree -> 251 cycles
10000 times kmalloc(2048) -> 649 cycles kfree -> 228 cycles
10000 times kmalloc(4096) -> 806 cycles kfree -> 370 cycles
10000 times kmalloc(8192) -> 814 cycles kfree -> 411 cycles
10000 times kmalloc(16384) -> 892 cycles kfree -> 455 cycles
2. Kmalloc: alloc/free test
10000 times kmalloc(8)/kfree -> 121 cycles
10000 times kmalloc(16)/kfree -> 121 cycles
10000 times kmalloc(32)/kfree -> 121 cycles
10000 times kmalloc(64)/kfree -> 121 cycles
10000 times kmalloc(128)/kfree -> 121 cycles
10000 times kmalloc(256)/kfree -> 119 cycles
10000 times kmalloc(512)/kfree -> 119 cycles
10000 times kmalloc(1024)/kfree -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
10000 times kmalloc(4096)/kfree -> 121 cycles
10000 times kmalloc(8192)/kfree -> 119 cycles
10000 times kmalloc(16384)/kfree -> 119 cycles
After:
Single thread testing
=====================
1. Kmalloc: Repeatedly allocate then free test
10000 times kmalloc(8) -> 130 cycles kfree -> 86 cycles
10000 times kmalloc(16) -> 118 cycles kfree -> 86 cycles
10000 times kmalloc(32) -> 121 cycles kfree -> 85 cycles
10000 times kmalloc(64) -> 176 cycles kfree -> 102 cycles
10000 times kmalloc(128) -> 178 cycles kfree -> 100 cycles
10000 times kmalloc(256) -> 205 cycles kfree -> 109 cycles
10000 times kmalloc(512) -> 262 cycles kfree -> 136 cycles
10000 times kmalloc(1024) -> 342 cycles kfree -> 157 cycles
10000 times kmalloc(2048) -> 701 cycles kfree -> 238 cycles
10000 times kmalloc(4096) -> 803 cycles kfree -> 364 cycles
10000 times kmalloc(8192) -> 835 cycles kfree -> 404 cycles
10000 times kmalloc(16384) -> 896 cycles kfree -> 441 cycles
2. Kmalloc: alloc/free test
10000 times kmalloc(8)/kfree -> 121 cycles
10000 times kmalloc(16)/kfree -> 121 cycles
10000 times kmalloc(32)/kfree -> 123 cycles
10000 times kmalloc(64)/kfree -> 142 cycles
10000 times kmalloc(128)/kfree -> 121 cycles
10000 times kmalloc(256)/kfree -> 119 cycles
10000 times kmalloc(512)/kfree -> 119 cycles
10000 times kmalloc(1024)/kfree -> 119 cycles
10000 times kmalloc(2048)/kfree -> 119 cycles
10000 times kmalloc(4096)/kfree -> 119 cycles
10000 times kmalloc(8192)/kfree -> 119 cycles
10000 times kmalloc(16384)/kfree -> 119 cycles
[akpm@linux-foundation.org: propagate gfp_t into cache_random_seq_create()]
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Greg Thelen <gthelen@google.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vladimir Davydov [Fri, 20 May 2016 00:10:34 +0000 (17:10 -0700)]
mm/slub.c: replace kick_all_cpus_sync() with synchronize_sched() in kmem_cache_shrink()
When we call __kmem_cache_shrink on memory cgroup removal, we need to
synchronize kmem_cache->cpu_partial update with put_cpu_partial that
might be running on other cpus. Currently, we achieve that by using
kick_all_cpus_sync, which works as a system-wide memory barrier. Fast
though it is, this method has a flaw: it issues a lot of IPIs, which might
hurt high-performance or real-time workloads.
To fix this, let's replace kick_all_cpus_sync with synchronize_sched.
Although the latter one may take much longer to finish, it shouldn't be
a problem in this particular case, because memory cgroups are destroyed
asynchronously from a workqueue so that no user visible effects should
be introduced. OTOH, it will save us from excessive IPIs when someone
removes a cgroup.
Anyway, even if using synchronize_sched turns out to take too long, we
can always introduce a kind of __kmem_cache_shrink batching so that this
method would only be called once per cgroup destruction (not once per
memcg kmem cache as it is now).
Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:31 +0000 (17:10 -0700)]
mm/slab: lockless decision to grow cache
To check precisely whether free objects exist, we need to grab a lock.
But accuracy isn't that important here: the race window is small, and if
there are too many free objects the cache reaper will reap them. So this
patch makes the check for free-object existence lockless, which reduces
lock contention in allocation-heavy workloads.
Note that, until now, n->shared could be freed during processing by a write
to slabinfo; with a small trick in this patch we can safely access it
within the interrupt-disabled period.
Below is the result of concurrent allocation/free in slab allocation
benchmark made by Christoph a long time ago. I make the output simpler.
The number shows cycle count during alloc/free respectively so less is
better.
* Before
Kmalloc N*alloc N*free(32): Average=248/966
Kmalloc N*alloc N*free(64): Average=261/949
Kmalloc N*alloc N*free(128): Average=314/1016
Kmalloc N*alloc N*free(256): Average=741/1061
Kmalloc N*alloc N*free(512): Average=1246/1152
Kmalloc N*alloc N*free(1024): Average=2437/1259
Kmalloc N*alloc N*free(2048): Average=4980/1800
Kmalloc N*alloc N*free(4096): Average=9000/2078
* After
Kmalloc N*alloc N*free(32): Average=344/792
Kmalloc N*alloc N*free(64): Average=347/882
Kmalloc N*alloc N*free(128): Average=390/959
Kmalloc N*alloc N*free(256): Average=393/1067
Kmalloc N*alloc N*free(512): Average=683/1229
Kmalloc N*alloc N*free(1024): Average=1295/1325
Kmalloc N*alloc N*free(2048): Average=2513/1664
Kmalloc N*alloc N*free(4096): Average=4742/2172
It shows that allocation performance decreases for object sizes up to 128
bytes, possibly due to the extra checks in cache_alloc_refill(). But,
considering the improvement in free performance, the net result looks about
the same. Results for the other size classes look very promising: roughly
a 50% performance improvement.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:29 +0000 (17:10 -0700)]
mm/slab: refill cpu cache through a new slab without holding a node lock
Until now, cache growing puts a free slab on the node's slab list and we
then allocate free objects from it. This necessarily requires holding a
node lock, which is heavily contended. If we refill the cpu cache before
attaching the slab to the node's slab list, we can avoid holding the node
lock as much as possible, because the newly allocated slab is only visible
to the current task. This reduces lock contention.
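The reordering can be pictured like this (a rough sketch; the helper names
are illustrative rather than the actual mm/slab.c functions):
	static void *grow_and_refill(struct kmem_cache *cachep, struct array_cache *ac,
				     struct kmem_cache_node *n, gfp_t flags, int nodeid)
	{
		struct page *page;

		/* 1) allocate and initialise a new slab: no node lock required */
		page = allocate_and_init_slab(cachep, flags, nodeid);
		if (!page)
			return NULL;

		/* 2) the slab is still private to this task, so the cpu array
		 *    can be refilled from it without taking n->list_lock */
		refill_cpu_cache_from_slab(cachep, ac, page);

		/* 3) only publishing the (now partial) slab on the node lists
		 *    needs the lock */
		spin_lock(&n->list_lock);
		attach_slab_to_node_list(cachep, n, page);
		spin_unlock(&n->list_lock);

		return ac->entry[--ac->avail];
	}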
Below is the result of concurrent allocation/free in slab allocation
benchmark made by Christoph a long time ago. I make the output simpler.
The number shows cycle count during alloc/free respectively so less is
better.
* Before
Kmalloc N*alloc N*free(32): Average=355/750
Kmalloc N*alloc N*free(64): Average=452/812
Kmalloc N*alloc N*free(128): Average=559/1070
Kmalloc N*alloc N*free(256): Average=1176/980
Kmalloc N*alloc N*free(512): Average=1939/1189
Kmalloc N*alloc N*free(1024): Average=3521/1278
Kmalloc N*alloc N*free(2048): Average=7152/1838
Kmalloc N*alloc N*free(4096): Average=13438/2013
* After
Kmalloc N*alloc N*free(32): Average=248/966
Kmalloc N*alloc N*free(64): Average=261/949
Kmalloc N*alloc N*free(128): Average=314/1016
Kmalloc N*alloc N*free(256): Average=741/1061
Kmalloc N*alloc N*free(512): Average=1246/1152
Kmalloc N*alloc N*free(1024): Average=2437/1259
Kmalloc N*alloc N*free(2048): Average=4980/1800
Kmalloc N*alloc N*free(4096): Average=9000/2078
It shows that contention is reduced for all the object sizes and
performance increases by 30 ~ 40%.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:26 +0000 (17:10 -0700)]
mm/slab: separate cache_grow() to two parts
This is a preparation step for implementing a lockless allocation path for
when there are no free objects in the kmem_cache.
What we'd like to do here is refill the cpu cache without holding a node
lock. To accomplish that, the refill should be done after new slab
allocation but before attaching the slab to the management list. So this
patch separates cache_grow() into two parts, allocation and attaching to
the list, in order to add some code in between them in the following patch.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:23 +0000 (17:10 -0700)]
mm/slab: make cache_grow() handle the page allocated on arbitrary node
Currently, cache_grow() assumes that the allocated page's nodeid is the
same as the nodeid parameter used for the allocation request. If we drop
this assumption, we can handle the fallback_alloc() case gracefully. So
this patch makes cache_grow() handle a page allocated on an arbitrary node
and cleans up the relevant code.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:20 +0000 (17:10 -0700)]
mm/slab: racy access/modify the slab color
The slab color does not need to be updated with strict accuracy. Since
locking just to change the slab color would cause more lock contention,
this patch implements racy access/modification of the slab color. This is
a preparation step for implementing a lockless allocation path for when
there are no free objects in the kmem_cache.
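The racy update itself stays deliberately simple, roughly:
	/*
	 * Racy read/update of the slab colour: no lock taken.  A lost update
	 * only perturbs cache colouring slightly and is harmless.
	 */
	offset = n->colour_next;
	n->colour_next++;
	if (n->colour_next >= cachep->colour)
		n->colour_next = 0;
	offset *= cachep->colour_off;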
Below is the result of concurrent allocation/free in slab allocation
benchmark made by Christoph a long time ago. I make the output simpler.
The number shows cycle count during alloc/free respectively so less is
better.
* Before
Kmalloc N*alloc N*free(32): Average=365/806
Kmalloc N*alloc N*free(64): Average=452/690
Kmalloc N*alloc N*free(128): Average=736/886
Kmalloc N*alloc N*free(256): Average=1167/985
Kmalloc N*alloc N*free(512): Average=2088/1125
Kmalloc N*alloc N*free(1024): Average=4115/1184
Kmalloc N*alloc N*free(2048): Average=8451/1748
Kmalloc N*alloc N*free(4096): Average=16024/2048
* After
Kmalloc N*alloc N*free(32): Average=355/750
Kmalloc N*alloc N*free(64): Average=452/812
Kmalloc N*alloc N*free(128): Average=559/1070
Kmalloc N*alloc N*free(256): Average=1176/980
Kmalloc N*alloc N*free(512): Average=1939/1189
Kmalloc N*alloc N*free(1024): Average=3521/1278
Kmalloc N*alloc N*free(2048): Average=7152/1838
Kmalloc N*alloc N*free(4096): Average=13438/2013
It shows that contention is reduced for object size >= 1024 and
performance increases by roughly 15%.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:17 +0000 (17:10 -0700)]
mm/slab: don't keep free slabs if free_objects exceeds free_limit
Currently, the decision to free a slab is made each time a freed object is
put back into the slab. This has the following problem.
Assume free_limit = 10 and nr_free = 9.
Frees happen in the following sequence and nr_free changes as follows:
free (slab becomes a free slab), then free (slab does not become a free slab)
nr_free: 9 -> 10 (at the first free) -> 11 (at the second free)
If we check whether the current slab can be freed at each object free, we
cannot free any slab in this situation, because the current slab is not a
free slab at the point where nr_free exceeds free_limit (the second free),
even though a free slab exists.
However, if we do the check last, we can free one free slab.
This problem can cause the slab subsystem to keep too much memory. This
patch fixes it by checking the number of free objects after all the free
work is done. If there is a free slab at that time, we free as many slabs
as possible, so the number of free slabs is kept minimal.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:14 +0000 (17:10 -0700)]
mm/slab: clean-up kmem_cache_node setup
The code for setting up a kmem_cache_node in cpuup_prepare() and
alloc_kmem_cache_node() is mostly the same. Factor it out and clean it up.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Tested-by: Nishanth Menon <nm@ti.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:11 +0000 (17:10 -0700)]
mm/slab: factor out kmem_cache_node initialization code
It can be reused elsewhere, so factor it out. The following patch will
use it.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:08 +0000 (17:10 -0700)]
mm/slab: drain the free slab as much as possible
slabs_tofree() effectively means freeing all free slabs. We can achieve
that by simply passing INT_MAX.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:05 +0000 (17:10 -0700)]
mm/slab: remove BAD_ALIEN_MAGIC again
The initial attempt to remove BAD_ALIEN_MAGIC was reverted once by commit
edcad2509550 ("Revert "slab: remove BAD_ALIEN_MAGIC"") because it caused a
problem on m68k, which has many nodes but !CONFIG_NUMA. In that case,
although the alien cache isn't used at all, a placeholder value is needed
to cope with some initialization paths, and that placeholder is
BAD_ALIEN_MAGIC. Now that this patch sets use_alien_caches to 0 when
!CONFIG_NUMA, there is no initialization path problem, so we don't need
BAD_ALIEN_MAGIC at all. Remove it.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joonsoo Kim [Fri, 20 May 2016 00:10:02 +0000 (17:10 -0700)]
mm/slab: fix the theoretical race by holding proper lock
When processing concurrent allocations, SLAB can be heavily contended
because it does a lot of work while holding a lock. This patchset tries to
shrink the critical sections to reduce lock contention. The major changes
are a lockless decision to allocate more slabs and lockless cpu cache
refill from the newly allocated slab.
Below is the result of concurrent allocation/free in slab allocation
benchmark made by Christoph a long time ago. I make the output simpler.
The number shows cycle count during alloc/free respectively so less is
better.
* Before
Kmalloc N*alloc N*free(32): Average=365/806
Kmalloc N*alloc N*free(64): Average=452/690
Kmalloc N*alloc N*free(128): Average=736/886
Kmalloc N*alloc N*free(256): Average=1167/985
Kmalloc N*alloc N*free(512): Average=2088/1125
Kmalloc N*alloc N*free(1024): Average=4115/1184
Kmalloc N*alloc N*free(2048): Average=8451/1748
Kmalloc N*alloc N*free(4096): Average=16024/2048
* After
Kmalloc N*alloc N*free(32): Average=344/792
Kmalloc N*alloc N*free(64): Average=347/882
Kmalloc N*alloc N*free(128): Average=390/959
Kmalloc N*alloc N*free(256): Average=393/1067
Kmalloc N*alloc N*free(512): Average=683/1229
Kmalloc N*alloc N*free(1024): Average=1295/1325
Kmalloc N*alloc N*free(2048): Average=2513/1664
Kmalloc N*alloc N*free(4096): Average=4742/2172
It shows that performance improves greatly (roughly more than 50%) for
the object class whose size is more than 128 bytes.
This patch (of 11):
If we hold neither the slab_mutex nor the node lock, the node's shared
array cache could be freed and re-populated. If __kmem_cache_shrink() is
called at the same time, it will call drain_array() with n->shared without
holding the node lock, so a problem can occur. This patch fixes the
situation by taking the node lock before trying to drain the shared array.
In addition, add a debug check to confirm that no n->shared access race
exists.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Arnd Bergmann [Fri, 20 May 2016 00:09:59 +0000 (17:09 -0700)]
kernel/padata.c: hide unused functions
A recent cleanup removed some exported functions that were not used
anywhere, which in turn exposed the fact that some other functions in
the same file are only used in some configurations.
We now get a warning about them when CONFIG_HOTPLUG_CPU is disabled:
kernel/padata.c:670:12: error: '__padata_remove_cpu' defined but not used [-Werror=unused-function]
static int __padata_remove_cpu(struct padata_instance *pinst, int cpu)
^~~~~~~~~~~~~~~~~~~
kernel/padata.c:650:12: error: '__padata_add_cpu' defined but not used [-Werror=unused-function]
static int __padata_add_cpu(struct padata_instance *pinst, int cpu)
This rearranges the code so the __padata_remove_cpu/__padata_add_cpu
functions are within the #ifdef that protects the code that calls them.
[akpm@linux-foundation.org: coding-style fixes]
Fixes:
4ba6d78c671e ("kernel/padata.c: removed unused code")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Richard Cochran <rcochran@linutronix.de>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Richard Cochran [Fri, 20 May 2016 00:09:56 +0000 (17:09 -0700)]
kernel/padata.c: removed unused code
By accident I stumbled across code that has never been used. This
driver has EXPORT_SYMBOL functions, and the only user of the code is
pcrypt.c, but this only uses a subset of the exported symbols.
According to 'git log -G', the functions, padata_set_cpumasks,
padata_add_cpu, and padata_remove_cpu have never been used since they
were first introduced. This patch removes the unused code.
On one 64 bit build, with CRYPTO_PCRYPT built in, the text is more than
4k smaller.
kbuild_hp> size $KBUILD_OUTPUT/vmlinux
      text     data      bss       dec     hex filename
  10566658  4678360  1122304  16367322  f9beda vmlinux
  10561984  4678360  1122304  16362648  f9ac98 vmlinux
On another config, 32 bit, the saving is about 0.5k bytes.
kbuild_hp-x86> size $KBUILD_OUTPUT/vmlinux
   6012005  2409513  2785280  11206798  ab008e vmlinux
   6011491  2409513  2785280  11206284  aafe8c vmlinux
Signed-off-by: Richard Cochran <rcochran@linutronix.de>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Guozhonghua [Fri, 20 May 2016 00:09:53 +0000 (17:09 -0700)]
ocfs2: clean up an unneeded goto in ocfs2_put_slot()
The goto is not useful in ocfs2_put_slot(), so delete it.
Signed-off-by: Guozhonghua <guozhonghua@h3c.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Joseph Qi <joseph.qi@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jun Piao [Fri, 20 May 2016 00:09:50 +0000 (17:09 -0700)]
ocfs2: clean up unused parameter 'count' in o2hb_read_block_input()
Clean up unused parameter 'count' in o2hb_read_block_input().
Signed-off-by: Jun Piao <piaojun@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
piaojun [Fri, 20 May 2016 00:09:47 +0000 (17:09 -0700)]
ocfs2: clean up an unused variable 'wants_rotate' in ocfs2_truncate_rec
Clean up an unused variable 'wants_rotate' in ocfs2_truncate_rec.
Signed-off-by: Jun Piao <piaojun@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Guozhonghua [Fri, 20 May 2016 00:09:44 +0000 (17:09 -0700)]
ocfs2: fix comment in struct ocfs2_extended_slot
The comment in ocfs2_extended_slot has the offset wrong.
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Joseph Qi <joseph.qi@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:41 +0000 (17:09 -0700)]
debugobjects: insulate non-fixup logic related to static obj from fixup callbacks
When activating a static object we need to make sure that the object is
tracked in the object tracker. If it is a non-static object, then the
activation is illegal.
In the previous implementation, each subsystem had to take care of this in
its fixup callbacks. Actually we can put it into the debugobjects core.
Thus we can avoid duplicated code and have *pure* fixup callbacks.
To achieve this, a new callback "is_static_object" is introduced to let the
type-specific code decide whether an object is static or not. If it is, we
take it into the object tracker; otherwise we issue a warning and invoke
the fixup callback.
This change has passed the debugobjects selftest, and I also did some
testing with all debugobjects support enabled.
Finally, I have a concern about the fixups: can they change an object which
is in an incorrect state during fixup? Because the 'addr' may not point to
any valid object if a non-static object is not tracked, changing such an
object can overwrite someone else's memory and cause unexpected behaviour.
For example, timer_fixup_activate binds the timer to the function
stub_timer.
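A sketch of the resulting subsystem hook (struct foo and its cookie are
hypothetical; the .is_static_object member is the new piece from this
series):
	static bool foo_is_static_object(void *addr)
	{
		struct foo *obj = addr;

		/* hypothetical: statically initialised foo objects carry a cookie */
		return obj->cookie == FOO_STATIC_COOKIE;
	}

	static struct debug_obj_descr foo_debug_descr = {
		.name			= "foo",
		.is_static_object	= foo_is_static_object,
		/* the fixup_* callbacks stay "pure" and no longer handle static objects */
	};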
Link: http://lkml.kernel.org/r/1462576157-14539-1-git-send-email-changbin.du@intel.com
[changbin.du@intel.com: improve code comments where invoke the new is_static_object callback]
Link: http://lkml.kernel.org/r/1462777431-8171-1-git-send-email-changbin.du@intel.com
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:38 +0000 (17:09 -0700)]
Documentation: update debugobjects doc
Update the documentation corresponding to the change (debugobjects: make
fixup functions return bool instead of int).
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:35 +0000 (17:09 -0700)]
percpu_counter: update debugobjects fixup callbacks return type
Update the return type to use bool instead of int, corresponding to the
change (debugobjects: make fixup functions return bool instead of int).
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:32 +0000 (17:09 -0700)]
rcu: update debugobjects fixup callbacks return type
Update the return type to use bool instead of int, corresponding to the
change (debugobjects: make fixup functions return bool instead of int).
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:29 +0000 (17:09 -0700)]
timer: update debugobjects fixup callbacks return type
Update the return type to use bool instead of int, corresponding to the
change (debugobjects: make fixup functions return bool instead of int).
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:26 +0000 (17:09 -0700)]
workqueue: update debugobjects fixup callbacks return type
Update the return type to use bool instead of int, corresponding to
change (debugobjects: make fixup functions return bool instead of int)
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:23 +0000 (17:09 -0700)]
debugobjects: correct the usage of fixup call results
debug_object_fixup() returns non-zero when the problem has been fixed. But
the code got it backwards and took 0 to mean the fixup succeeded. So fix
it.
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Du, Changbin [Fri, 20 May 2016 00:09:20 +0000 (17:09 -0700)]
debugobjects: make fixup functions return bool instead of int
I am going to introduce the debugobjects infrastructure to the USB
subsystem. But before that, I found that the debugobjects code could be
improved. This patchset makes the fixup functions return bool instead of
int, because a fixup only needs to report success or not; boolean is the
'real' type.
This patch (of 7):
The object debugging infrastructure core provides fixup callbacks for the
subsystems that use it. These callbacks are called from the debug code
whenever a problem in debug_object_init is detected, and the debugobjects
core expects them to return 1 when the fixup was successful, otherwise 0.
So the return type is effectively boolean.
A bad thing is that debug_object_fixup() uses the return value in
arithmetic, which left me confused about what the real return type is.
Reading over the whole code, I found some places that use the return value
incorrectly (see next patch). So why not use the bool type instead?
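In signature terms the series turns callbacks of this shape (the names are
illustrative; the argument list matches the existing fixup callbacks):
	/* before: int return value, fed into arithmetic by debug_object_fixup() */
	static int  object_fixup_activate(void *addr, enum debug_obj_state state);

	/* after: the callback simply reports whether it fixed something up */
	static bool object_fixup_activate(void *addr, enum debug_obj_state state);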
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Triplett <josh@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vineet Gupta [Fri, 20 May 2016 00:09:17 +0000 (17:09 -0700)]
scripts/bloat-o-meter: print percent change
This adds an additional line of output (to reduce the chances of
breaking any existing output parsers) which prints the total size before
and after and the relative difference.
add/remove: 39/0 grow/shrink: 12408/55 up/down: 362227/-1430 (360797)
function                   old     new   delta
ext4_fill_super          10556   12590   +2034
_fpadd_parts                 -    1186   +1186
ntfs_fill_super           5340    6164    +824
...
...
__divdf3                   752     386    -366
unlzma                    3682    3274    -408
Total: Before=5023101, After=5383898, chg 7.000000%
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Link: http://lkml.kernel.org/r/1463124110-30314-1-git-send-email-vgupta@synopsys.com
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kees Cook [Fri, 20 May 2016 00:09:14 +0000 (17:09 -0700)]
scripts/spelling.txt: add "fimware" misspelling
A few instances of "fimware" instead of "firmware" were found. Fix
these and add it to the spelling.txt file.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Konstantin Khlebnikov [Fri, 20 May 2016 00:09:11 +0000 (17:09 -0700)]
scripts/decode_stacktrace.sh: handle symbols in modules
scripts/decode_stacktrace.sh presently displays module symbols as
func+0x0ff/0x5153 [module]
Add a third argument: the pathname of a directory where the script
should look for the file module.ko so that the output appears as
func (foo/bar.c:123) module
Without the argument or if the module file isn't found the script prints
such symbols as is without decoding.
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Deepa Dinamani [Fri, 20 May 2016 00:09:08 +0000 (17:09 -0700)]
time: remove timespec_add_safe()
All references to timespec_add_safe() now use timespec64_add_safe().
The plan is to replace struct timespec references with struct timespec64
throughout the kernel as timespec is not y2038 safe.
Drop timespec_add_safe() and use timespec64_add_safe() for all
architectures.
Link: http://lkml.kernel.org/r/1461947989-21926-4-git-send-email-deepa.kernel@gmail.com
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Deepa Dinamani [Fri, 20 May 2016 00:09:05 +0000 (17:09 -0700)]
fs: poll/select/recvmmsg: use timespec64 for timeout events
struct timespec is not y2038 safe. Even though timespec might be
sufficient to represent timeouts, use struct timespec64 here, as the plan
is to get rid of all timespec references in the kernel.
The patch transitions the common functions: poll_select_set_timeout()
and select_estimate_accuracy() to use timespec64. And, all the syscalls
that use these functions are transitioned in the same patch.
The restart block parameters for poll use monotonic time. Use timespec64
here as well to assign the timeout value. This parameter in the restart
block need not change because it only holds the monotonic timestamp at
which the timeout should occur. And the unsigned long data type should be
big enough for this timestamp.
The system call interfaces will be handled in a separate series.
Compat interfaces need not change as timespec64 is an alias to struct
timespec on a 64 bit system.
Link: http://lkml.kernel.org/r/1461947989-21926-3-git-send-email-deepa.kernel@gmail.com
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: John Stultz <john.stultz@linaro.org>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Deepa Dinamani [Fri, 20 May 2016 00:09:02 +0000 (17:09 -0700)]
time: add missing implementation for timespec64_add_safe()
timespec64_add_safe() has been defined in time64.h for 64 bit systems.
But, 32 bit systems only have an extern function prototype defined.
Provide a definition for the above function.
The function will be necessary as part of y2038 changes. struct
timespec is not y2038 safe. All references to timespec will be replaced
by struct timespec64. The function is meant to be a replacement for
timespec_add_safe().
The implementation is similar to timespec_add_safe().
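A sketch of such a definition, mirroring timespec_add_safe() and saturating
at TIME64_MAX (helper names as in time64.h; not necessarily the exact final
code):
	struct timespec64 timespec64_add_safe(const struct timespec64 lhs,
					      const struct timespec64 rhs)
	{
		struct timespec64 res;

		set_normalized_timespec64(&res, lhs.tv_sec + rhs.tv_sec,
					  lhs.tv_nsec + rhs.tv_nsec);

		/* saturate instead of wrapping on overflow */
		if (unlikely(res.tv_sec < lhs.tv_sec || res.tv_sec < rhs.tv_sec))
			res.tv_sec = TIME64_MAX;

		return res;
	}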
Link: http://lkml.kernel.org/r/1461947989-21926-2-git-send-email-deepa.kernel@gmail.com
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jan Kara [Fri, 20 May 2016 00:08:59 +0000 (17:08 -0700)]
fsnotify: avoid spurious EMFILE errors from inotify_init()
An inotify instance is destroyed when all references to it are dropped.
That not only means that the corresponding file descriptor needs to be
closed, but also that all corresponding instance marks are freed (as each
mark holds a reference to the inotify instance). However, marks are freed
only after the SRCU period ends, which can take some time, so if a user
rapidly creates and frees inotify instances, the number of existing inotify
instances can exceed the max_user_instances limit even though, from the
user's point of view, there is always at most one instance. inotify_init()
then returns an EMFILE error which is hard to justify from the user's point
of view. This problem is exposed by the LTP inotify06 testcase on some
machines.
We fix the problem by making sure all group marks are properly freed while
destroying the inotify instance. We wait for the SRCU period to end in
that path anyway, since we have to make sure no event is being added to the
instance while we are tearing it down. So it takes only some plumbing to
allow marks to be destroyed in that path as well, rather than from a
dedicated work item.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jan Kara <jack@suse.cz>
Reported-by: Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com>
Tested-by: Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Thu, 19 May 2016 01:55:19 +0000 (18:55 -0700)]
Merge tag 'trace-v4.7' of git://git./linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
"This includes two new updates for the ftrace infrastructure.
- With the change of the event pid-filtering code from a list of pids
to a bitmask, we can now easily follow forks. A new tracing option
"event-fork", when set, causes tasks whose pids are in set_event_pid
to have their children's pids added to set_event_pid when they fork,
so the children are traced as well.
Note: if "event-fork" is set and a task with its pid in
set_event_pid exits, its pid will be removed from set_event_pid.
- The addition of Tom Zanussi's hist triggers. This includes very
thorough documentation on how to use the hist triggers with events,
and introduces a quick and easy way to get histogram data from
events and their fields.
Some other cleanups and updates were added as well. For example, Masami
Hiramatsu added test cases for the event and hist triggers.
Also I added a speed-up of filtering by using a temp buffer when
filters are set"
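A minimal user-space sketch of the "event-fork" behaviour described above,
assuming tracefs is mounted at /sys/kernel/tracing (older setups may have it
under /sys/kernel/debug/tracing); error handling omitted:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd;

        /* filter events to this task's pid */
        fd = open("/sys/kernel/tracing/set_event_pid", O_WRONLY);
        dprintf(fd, "%d\n", getpid());
        close(fd);

        /* with event-fork set, children of filtered tasks are traced too,
         * and pids are dropped from set_event_pid when those tasks exit */
        fd = open("/sys/kernel/tracing/options/event-fork", O_WRONLY);
        write(fd, "1\n", 2);
        close(fd);

        return 0;
}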
* tag 'trace-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (45 commits)
tracing: Use temp buffer when filtering events
tracing: Remove TRACE_EVENT_FL_USE_CALL_FILTER logic
tracing: Remove unused function trace_current_buffer_lock_reserve()
tracing: Remove one use of trace_current_buffer_lock_reserve()
tracing: Have trace_buffer_unlock_commit() call the _regs version with NULL
tracing: Remove unused function trace_current_buffer_discard_commit()
tracing: Move trace_buffer_unlock_commit{_regs}() to local header
tracing: Fold filter_check_discard() into its only user
tracing: Make filter_check_discard() local
tracing: Move event_trigger_unlock_commit{_regs}() to local header
tracing: Don't use the address of the buffer array name in copy_from_user
tracing: Handle tracing_map_alloc_elts() error path correctly
tracing: Add check for NULL event field when creating hist field
tracing: checking for NULL instead of IS_ERR()
tracing: Do not inherit event-fork option for instances
tracing: Fix unsigned comparison to zero in hist trigger code
kselftests/ftrace: Add a test for log2 modifier of hist trigger
tracing: Add hist trigger 'log2' modifier
kselftests/ftrace: Add hist trigger testcases
kselftests/ftrace : Add event trigger testcases
...
Linus Torvalds [Thu, 19 May 2016 01:46:55 +0000 (18:46 -0700)]
Merge branch 'stable-4.7' of git://git.infradead.org/users/pcmoore/audit
Pull audit updates from Paul Moore:
"Four small audit patches for 4.7.
Two are simple cleanups around the audit thread management code, one
adds a tty field to AUDIT_LOGIN events, and the final patch makes
tty_name() usable regardless of CONFIG_TTY.
Nothing controversial, and it all passes our regression test"
* 'stable-4.7' of git://git.infradead.org/users/pcmoore/audit:
tty: provide tty_name() even without CONFIG_TTY
audit: add tty field to LOGIN event
audit: we don't need to __set_current_state(TASK_RUNNING)
audit: cleanup prune_tree_thread
Linus Torvalds [Thu, 19 May 2016 00:22:19 +0000 (17:22 -0700)]
Merge tag 'rproc-v4.7' of git://github.com/andersson/remoteproc
Pull remoteproc updates from Bjorn Andersson:
"Introduce a synchronization point between the async firmware loading
and clients requesting the remote processor to boot, as well as
support for remote processors that are not interested in the resource
table information"
* tag 'rproc-v4.7' of git://github.com/andersson/remoteproc:
remoteproc: Add additional crash reasons
remoteproc: core: Make the loaded resource table optional
remoteproc: core: Task sync during rproc_fw_boot()
Linus Torvalds [Thu, 19 May 2016 00:17:20 +0000 (17:17 -0700)]
Merge tag 'rpmsg-v4.7' of git://github.com/andersson/remoteproc
Pull rpmsg updates from Bjorn Andersson:
"Refactor rpmsg module registration to follow other subsystems; by
introduction of module_rpmsg_driver and hiding of THIS_MODULE from
clients"
* tag 'rpmsg-v4.7' of git://github.com/andersson/remoteproc:
rpmsg: use module_rpmsg_driver in existing drivers and examples
rpmsg: add helper macro module_rpmsg_driver
rpmsg: drop owner assignment from rpmsg_drivers
rpmsg: add THIS_MODULE to rpmsg_driver in rpmsg core
Linus Torvalds [Thu, 19 May 2016 00:12:23 +0000 (17:12 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Pull input updates from Dmitry Torokhov:
"First round of updates for the input subsystem. No new drivers here,
just some driver fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: rotary-encoder - fix bare use of 'unsigned'
Input: cm109 - spin_lock in complete() cleanup
Input: cm109 - fix handling of volume and mute buttons
Input: byd - don't wipe dynamically allocated memory twice
Input: twl4030 - fix unsafe macro definition
Input: twl6040-vibra - remove mutex
Input: bcm_iproc_tsc - DT spelling s/clock-name/clock-names/
Input: bcm_iproc_tsc - use syscon to access shared registers
Input: ti_am335x_tsc - use SIMPLE_DEV_PM_OPS
Input: omap-keypad - remove set_col_gpio_val() and get_row_gpio_val()
Input: omap-keypad - drop empty PM stubs
Input: omap-keypad - remove adjusting of scan delay
Input: gpio-keys - clean up device tree binding example
Input: kbtab - stop saving struct usb_device
Input: gtco - stop saving struct usb_device
Input: aiptek - stop saving struct usb_device
Input: acecad - stop saving struct usb_device
Linus Torvalds [Thu, 19 May 2016 00:03:51 +0000 (17:03 -0700)]
Merge tag 'media/v4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media updates from Mauro Carvalho Chehab:
- added support for Intersil/Techwell TW686x-based video capture cards
- v4l PCI skeleton driver moved to samples directory
- Documentation cleanups and improvements
- RC: reduced the memory footprint for IR raw events
- tpg: Export the tpg code from vivid as a module
- adv7180: Add device tree binding documentation
- lots of driver improvements and fixes
* tag 'media/v4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (173 commits)
[media] exynos-gsc: avoid build warning without CONFIG_OF
[media] samples: v4l: from Documentation to samples directory
[media] dib0700: add USB ID for another STK8096-PVR ref design based card
[media] tvp5150: propagate I2C write error in .s_register callback
[media] tvp5150: return I2C write operation failure to callers
[media] em28xx: add support for Hauppauge WinTV-dualHD DVB tuner
[media] em28xx: add missing USB IDs
[media] update cx23885 and em28xx cardlists
[media] media: au0828 fix au0828_v4l2_device_register() to not unlock and free
[media] c8sectpfe: Rework firmware loading mechanism
[media] c8sectpfe: Demote print to dev_dbg
[media] c8sectpfe: Fix broken circular buffer wp management
[media] media-device: Simplify compat32 logic
[media] media: i2c: ths7303: remove redundant assignment on bt
[media] dvb-usb: hide unused functions
[media] xilinx-vipp: remove unnecessary of_node_put
[media] drivers/media/media-devnode: clear private_data before put_device()
[media] drivers/media/media-device: move debug log before _devnode_unregister()
[media] drivers/media/rc: postpone kfree(rc_dev)
[media] media/dvb-core: forward media_create_pad_links() return value
...
Linus Torvalds [Wed, 18 May 2016 23:38:59 +0000 (16:38 -0700)]
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI updates from James Bottomley:
"First round of SCSI updates for the 4.6+ merge window.
This batch includes the usual quota of driver updates (bnx2fc, mpt3sas,
hpsa, ncr5380, lpfc, hisi_sas, snic, aacraid, megaraid_sas). There's
also a multiqueue update for scsi_debug, assorted bug fixes and a few
other minor updates (refactor of scsi_sg_pools into generic code, alua
and VPD updates, and struct timeval conversions)"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (138 commits)
mpt3sas: Used "synchronize_irq()"API to synchronize timed-out IO & TMs
mpt3sas: Set maximum transfer length per IO to 4MB for VDs
mpt3sas: Updating mpt3sas driver version to 13.100.00.00
mpt3sas: Fix initial Reference tag field for 4K PI drives.
mpt3sas: Handle active cable exception event
mpt3sas: Update MPI header to 2.00.42
Revert "lpfc: Delete unnecessary checks before the function call mempool_destroy"
eata_pio: missing break statement
hpsa: Fix type ZBC conditional checks
scsi_lib: Decode T10 vendor IDs
scsi_dh_alua: do not fail for unknown VPD identification
scsi_debug: use locally assigned naa
scsi_debug: uuid for lu name
scsi_debug: vpd and mode page work
scsi_debug: add multiple queue support
bfa: fix bfa_fcb_itnim_alloc() error handling
megaraid_sas: Downgrade two success messages to info
cxlflash: Fix to resolve dead-lock during EEH recovery
scsi_debug: rework resp_report_luns
scsi_debug: use pdt constants
...
Linus Torvalds [Wed, 18 May 2016 22:30:04 +0000 (15:30 -0700)]
Merge branch 'stable/for-linus-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/ibft
Pull iscsi_ibft updates from Konrad Rzeszutek Wilk:
"The pull has two features - both of them expand the SysFS entries:
- 'prefix-len' - which is subnet_mask_prefix of the iBFT header.
- 'acpi_header' dir with: 'iBFT', OEM-ID (whatever it extracts from
the iBFT header) and OEM_TABLE_ID (also whatever it extracts from
the iBFT header). This is to help NIC drivers to figure out during
bootup how to deal with BIOS created iBFT tables (like by TianoCore
UEFI implemenation)"
* 'stable/for-linus-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/ibft:
ibft: Expose iBFT acpi header via sysfs
iscsi_ibft: Add prefix-len attr and display netmask
Linus Torvalds [Wed, 18 May 2016 20:14:02 +0000 (13:14 -0700)]
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC driver updates from Arnd Bergmann:
"Driver updates for ARM SoCs, these contain various things that touch
the drivers/ directory but got merged through arm-soc for practical
reasons.
For the most part, this is now related to power management
controllers, which have not yet been abstracted into a separate
subsystem, and typically require some code in drivers/soc or arch/arm
to control the power domains.
Another large chunk here is a rework of the NVIDIA Tegra USB3.0
support, which was surprisingly tricky and took a long time to get
done.
Finally, reset controller handling as always gets merged through here
as well"
* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (97 commits)
arm-ccn: Enable building as module
soc/tegra: pmc: Add generic PM domain support
usb: xhci: tegra: Add Tegra210 support
usb: xhci: Add NVIDIA Tegra XUSB controller driver
dt-bindings: usb: xhci-tegra: Add Tegra210 XUSB controller support
dt-bindings: usb: Add NVIDIA Tegra XUSB controller binding
PCI: tegra: Support per-lane PHYs
dt-bindings: pci: tegra: Update for per-lane PHYs
phy: tegra: Add Tegra210 support
phy: Add Tegra XUSB pad controller support
dt-bindings: phy: tegra-xusb-padctl: Add Tegra210 support
dt-bindings: phy: Add NVIDIA Tegra XUSB pad controller binding
phy: core: Allow children node to be overridden
clk: tegra: Add interface to enable hardware control of SATA/XUSB PLLs
drivers: firmware: psci: make two helper functions inline
soc: renesas: rcar-sysc: Add support for R-Car H3 power areas
soc: renesas: rcar-sysc: Add support for R-Car E2 power areas
soc: renesas: rcar-sysc: Add support for R-Car M2-N power areas
soc: renesas: rcar-sysc: Add support for R-Car M2-W power areas
soc: renesas: rcar-sysc: Add support for R-Car H2 power areas
...
Linus Torvalds [Wed, 18 May 2016 20:07:57 +0000 (13:07 -0700)]
Merge tag 'armsoc-defconfig' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC defconfig updates from Arnd Bergmann:
"As usual, a bunch of commits, mostly adding drivers and other options
to defconfigs.
We are adding three new defconfig files for the newly added 32-bit
machines (aspeed and mps2), the rest is mainly housekeeping.
The changes outside of arch/arm/config/ are for a Kconfig symbol that
got renamed"
* tag 'armsoc-defconfig' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (63 commits)
ARM: aspeed: adapt defconfigs for new CONFIG_PRINTK_TIME
ARM: u8500_defconfig: update sensor config
ARM: u8500_defconfig: remove staging from defconfig
ARM: multi_v7_defconfig: Remove unused Kconfig option MACH_UX500_DT
ARM: at91/defconfig: sama5: add CONFIG_FHANDLE
arm/configs: Add Aspeed defconfig
arm/configs/multi_v5: Add Aspeed ast2400
ARM: at91: sama5: Update defconfig
ARM: imx_v6_v7_defconfig: add CONFIG_MICREL_PHY
ARM: imx_v6_v7_defconfig: add CONFIG_I2C_GPIO
ARM: multi_v7: Enable Tegra XUSB controller in defconfig
ARM: tegra: Enable XUSB controller in defconfig
ARM: omap2plus_defconfig: Enable PWM and ir-rx51 as loadable modules
ARM: multi_v7_defconfig: add the Atmel sama5d2-compatible ADC driver
ARM: multi_v7_defconfig: add the Atmel Audio microphone interface PDMIC
ARM: multi_v7_defconfig: add Atmel ISI (Image Sensor Interface) driver
ARM: multi_v7_defconfig: add Atmel watchdog timers
ARM: multi_v7_defconfig: add HLCDC drivers as modules
ARM: at91/defconfig: add PDMIC driver to sama5_defconfig
ARM: at91/defconfig: add HLCDC driver to sama5_defconfig
...
Linus Torvalds [Wed, 18 May 2016 19:58:39 +0000 (12:58 -0700)]
Merge tag 'armsoc-dt64' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM 64-bit DT updates from Arnd Bergmann:
"We continue ramping up platform support for 64-bit ARM machines, with
111 individual non-merge changesets touching 21 platforms.
The LG1312 platform is completely new and is the first ARM platform by
LG that we support in the mainline kernel. Two other SoCs got added
that are updated versions of existing SoC families, so the port mainly
consists of new dts files:
- The Hisilicon Hip06/D03 is the latest server platform from
Huawei/Hisilicon, and follows the Hip05/D02 platform.
- Rockchip RK3399 follows the 32-bit RK3288 that is popular in
low-end Chromebooks and the 64-bit RK3368 that is mainly found in
Chinese Android TV boxes.
The 96Boards HiKey based on the Hisilicon Hi6220 (Kirin 620) gets a
long-awaited overhaul with a lot of devices enabled in the DT, so it
should be much more usable with a mainline kernel now. See also
https://plus.google.com/111524780435806926688/posts/PeGb2VsNhJd
A lot of work went into enabling new device drivers on existing
machines, but we also have a couple of new commercially available
machines:
- Google Pixel C laptop based on Tegra210
- Hardkernel Odroid C2 Based on Amlogic Meson GXBB (S905)
- Geekbuying GeekBox based on Rockchip RK3368
And finally, a couple of reference or development platforms that are
not end-user platforms but are used for trying out the respective SoC
platforms:
- Amlogic Meson GXBB P200 and P201 development systems
- NXP Layerscape 1043A QDS development board
- Hisilicon Hip06 D03 server board, as mentioned above
- LG1312 Reference Design
- RK3399 Evaluation Board"
* tag 'armsoc-dt64' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (104 commits)
arm64: dts: marvell: add XOR node for Armada 3700 SoC
dt-bindings: document rockchip rk3399-evb board
arm64: dts: rockchip: add dts file for RK3399 evaluation board
arm64: dts: rockchip: add core dtsi file for RK3399 SoCs
dt-bindings: rockchip-dw-mshc: add description for rk3399
arm64: dts: marvell: Use a SoC-specific compatible for xHCI on Armada37xx
arm64: dts: marvell: Rename armada-37xx USB node
arm64: dts: marvell: Clean up armada-3720-db
Documentation: arm64: Add Hisilicon Hip06 D03 dts binding
arm64: dts: Add initial dts for Hisilicon Hip06 D03 board
arm64: dts: hip05: Add nor flash support
arm64: dts: hip05: fix its node without msi-cells
arm64: dts: r8a7795: Don't disable referenced optional clocks
arm64: dts: salvator-x: populate EXTALR
arm64: dts: r8a7795: enable PCIe on Salvator-X
arm64: dts: r8a7795: Add PCIe nodes
arm64: tegra: Add IOMMU node to GM20B on Tegra210
arm64: tegra: Add reference clock to GM20B on Tegra210
dt-bindings: Add documentation for GM20B GPU
dt-bindings: gk20a: Document iommus property
...
Linus Torvalds [Wed, 18 May 2016 19:48:46 +0000 (12:48 -0700)]
Merge tag 'armsoc-dt' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM DT updates from Arnd Bergmann:
"These are all the updates to device tree files for 32-bit platforms,
which as usual makes up the bulk of the ARM SoC changes: 462 non-merge
changesets, 450 files changed, 23340 insertions, 5216 deletions.
The three platforms that are added with the "soc" branch are here as
well, and we add some related machine files:
- For Aspeed AST2400/AST2500, we get the evaluation platform and the
Tyan Palmetto POWER8 mainboard that uses the AST2400 BMC
- For Oxnas 810SE, the Western Digital "My Book World Edition" is
added as the only platform at the moment.
- For ARM MPS2, the AN385 (Cortex-M3) and AN399 (Cortex-M7) are
supported
On the ARM Realview development platform, we now support all machines
with device tree, previously only the board files were supported,
which in turn will likely be removed soon.
Qualcomm IPQ4019 is the second generation ARM based "Internet
Processor", following the IPQ806x that is used in many high-end WiFi
routers. This one integrates two ath10k wifi radios that were
previously on separate chips.
Other boards that got added for existing chips are:
Ti OMAP family:
- Amazon Kindle Fire, first generation, tablet and ebook reader
- OnRISC Baltos iR 2110 and 3220 embedded industrial PCs
- TI AM5728 IDK, TI AM3359 ICE-V2, and TI DRA722 Rev C EVM
development systems
Samsung EXYNOS platform:
- Samsung ARTIK5 evaluation board, see
https://www.artik.io/modules/overview/artik-5/
NXP i.MX platforms:
- Ka-Ro electronics TX6S-8034, TX6S-8035, TX6U-8033, TX6U-81xx,
TX6Q-1036, TX6Q-1110/-1130, TXUL-0010 and TXUL-0011 industrial
SoM modules
- Embest MarS Board i.MX6Dual DIY platform
- Boundary Devices i.MX6 Quad Plus Nitrogen6_MAX and SoloX
Nitrogen6sx embedded boards
- Technexion Pico i.MX6UL compute module
- ZII VF610 Development Board
Marvell embedded (mvebu, orion, kirkwood) platforms:
- Linksys Viper (E4200v2 / EA4500) WiFi router
- Buffalo Kurobox Pro NAS
Qualcomm Snapdragon:
- Arrow DragonBoard 600c (96boards) with APQ8064 Snapdragon 600
Rockchips platform:
- mqmaker MiQi single-board computer
Altera SoCFPGA:
- samtec VIN|ING 1000 vehicle communication interface
Allwinner Sunxi platforms:
- Dserve DSRV9703C tablet
- Difrnce DIT4350 tablet
- Colorfly E708 Q1 tablet
- Polaroid MID2809PXE04 tablet
- Olimex A20 OLinuXino LIME2 single board computer
- Xunlong Orange Pi 2, Orange Pi One, and Orange Pi PC single board
computers
Across many platforms, bug fixes went in to address warnings that dtc
now emits with 'make dtbs W=1'. Further changes for device enablement
went into Ti OMAP, bcm283x (Raspberry Pi), bcm47xx (wifi router), Ti
Davinci, Samsung EXYNOS, Marvell mvebu/kirkwood/orion, NXP i.MX/Vybrid,
NXP LPC18xx, NXP LPC32xx, Renesas shmobile/r-mobile/r-car, Rockchips
rk3xxx, ST Ux500, ST STi, Atmel AT91/SAMA5, Altera SoCFPGA, Allwinner
Sunxi, Sigma Designs Tango, NVIDIA Tegra, Socionext Uniphier and ARM
Versatile Express"
* tag 'armsoc-dt' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (458 commits)
ARM: dts: tango4: Import watchdog node
ARM: dts: tango4: Update cpus node for cpufreq
ARM: dts: tango4: Update DT to match clk driver
ARM: dts: tango4: Initial thermal support
arm/dst: Add Aspeed ast2500 device tree
arm/dts: Add Aspeed ast2400 device tree
ARM: sun7i: dt: Add pll3 and pll7 clocks
ARM: dts: sunxi: Add a olinuxino-lime2-emmc
ARM: dts: at91: sama5d4: add trng node
ARM: dts: at91: sama5d3: add trng node
ARM: dts: at91: sama5d2: add trng node
ARM: dts: at91: at91sam9g45 family: reduce the trng register map size
ARM: sun4i: dt: Add pll3 and pll7 clocks
ARM: sun5i: chip: Enable the TV Encoder
ARM: sun5i: r8: Add display blocks to the DTSI
ARM: sun5i: a13: Add display and TCON clocks
ARM: dts: ux500: configure the accelerometers open drain
ARM: mx5: dts: Enable USB OTG on M53EVK
ARM: dts: imx6ul-14x14-evk: Add audio support
ARM: dts: imx6qdl: Remove unneeded unit-addresses
...