Richard Weinberger [Tue, 26 Jul 2011 00:12:46 +0000 (17:12 -0700)]
um: clean up delay functions
Both sys-i386 and sys-x86_64 now support ndelay(). The delay functions
are based on arch/x86/lib/delay.c.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:45 +0000 (17:12 -0700)]
um, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so this
set_fs(USER_DS) is redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Richard Weinberger <richard@nod.at>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Richard Weinberger [Tue, 26 Jul 2011 00:12:44 +0000 (17:12 -0700)]
um: clean up vm-flags.h
There is no need to define VM_{STACK,DATA}_DEFAULT_FLAGS as variables.
It's also useless to test for TIF_IA32 as 64bit UML has no IA32 emulation.
Signed-off-by: Richard Weinberger <richard@nod.at>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:44 +0000 (17:12 -0700)]
cris, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so those calls to
set_fs(USER_DS) are redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
WANG Cong [Tue, 26 Jul 2011 00:12:43 +0000 (17:12 -0700)]
cris: fix some build warnings in pinmux.c
Fix some harmless warnings such as
arch/cris/arch-v32/mach-a3/pinmux.c:273: warning: ISO C90 forbids mixed declarations and code:
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:42 +0000 (17:12 -0700)]
m68k, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so those calls to
set_fs(USER_DS) are redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:40 +0000 (17:12 -0700)]
m32r, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so this
set_fs(USER_DS) is redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:39 +0000 (17:12 -0700)]
alpha, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so this
set_fs(USER_DS) is redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathias Krause [Tue, 26 Jul 2011 00:12:38 +0000 (17:12 -0700)]
h8300, exec: remove redundant set_fs(USER_DS)
The address limit is already set in flush_old_exec() so those calls to
set_fs(USER_DS) are redundant.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Wu Fengguang [Tue, 26 Jul 2011 00:12:37 +0000 (17:12 -0700)]
writeback: account NR_WRITTEN at IO completion time
NR_WRITTEN is now accounted at block IO enqueue time, which does not match
the common understanding of "written". This moves NR_WRITTEN accounting to
the IO completion time and makes it more consistent with BDI_WRITTEN,
which is used for bandwidth estimation.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:37 +0000 (17:12 -0700)]
tmpfs: simplify unuse and writepage
shmem_unuse_inode() and shmem_writepage() contain a little code to cope
with pages inserted independently into the filecache, probably by a
filesystem stacked on top of tmpfs, then fed to its ->readpage() or
->writepage().
Unionfs was indeed experimenting with working in that way three years ago,
but I find no current examples: nowadays the stacking filesystems use vfs
interfaces to the lower filesystem.
It's now illegal: remove most of that code, adding some WARN_ON_ONCEs.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Erez Zadok <ezk@fsl.cs.sunysb.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:36 +0000 (17:12 -0700)]
tmpfs: simplify filepage/swappage
We can now simplify shmem_getpage_gfp(): there is no longer a dilemma of
filepage passed in via shmem_readpage(), then swappage found, which must
then be copied over to it.
Although at first it's tempting to replace the **pagep arg by returning
struct page *, that makes a mess of IS_ERR_OR_NULL(page)s in all the
callers, so leave as is.
Insert BUG_ON(!PageUptodate) when we find and lock page: some of the
complication came from uninitialized pages inserted into filecache prior
to readpage; but now we're in control, and only release pagelock on
filecache once it's uptodate (if an error occurs in reading back from
swap, the page remains in swapcache, never moved to filecache).
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:35 +0000 (17:12 -0700)]
tmpfs: simplify prealloc_page
The prealloc_page handling in shmem_getpage_gfp() is unnecessarily
complicated: first simplify that before going on to filepage/swappage.
That's right, don't report ENOMEM when the preallocation fails: we may or
may not need the page. But simply report ENOMEM once we find we do need
it, instead of dropping lock, repeating allocation, unwinding on failure
etc. And leave the out label on the fast path, don't goto.
Fix something that looks like a bug but turns out not to be: set
PageSwapBacked on prealloc_page before its mem_cgroup_cache_charge(), as
the removed case was doing. That's important before adding to LRU
(determines which LRU the page goes on), and does affect which path it
takes through memcontrol.c, but in the end MEM_CGROUP_CHARGE_TYPE_SHMEM
is handled no differently from CACHE.
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Shaohua Li <shaohua.li@intel.com>
Cc: "Zhang, Yanmin" <yanmin.zhang@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:34 +0000 (17:12 -0700)]
tmpfs: remove shmem_readpage()
Remove that pernicious shmem_readpage() at last: the things we needed it
for (splice, loop, sendfile, i915 GEM) are now fully taken care of by
shmem_file_splice_read() and shmem_read_mapping_page_gfp().
This removal clears the way for a simpler shmem_getpage_gfp(), since page
is never passed in; but leave most of that cleanup until after.
sys_readahead() and sys_fadvise(POSIX_FADV_WILLNEED) will now return EINVAL,
instead of unexpectedly trying to read ahead on tmpfs: if that proves to
be an issue for someone, then we can either arrange for them to return
success instead, or try to implement async readahead on tmpfs.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:34 +0000 (17:12 -0700)]
tmpfs: pass gfp to shmem_getpage_gfp
Make shmem_getpage() a wrapper, passing mapping_gfp_mask() down to
shmem_getpage_gfp(), which in turn passes gfp down to shmem_swp_alloc().
Change shmem_read_mapping_page_gfp() to use shmem_getpage_gfp() in the
CONFIG_SHMEM case; but leave tiny !SHMEM using read_cache_page_gfp().
Add a BUG_ON() in case anyone happens to call this on a non-shmem mapping;
though we might later want to let that case route to read_cache_page_gfp().
It annoys me to have these two almost-redundant args, gfp and fault_type:
I can't find a better way; but initialize fault_type only in shmem_fault().
Note that before, read_cache_page_gfp() was allocating i915_gem's pages
with __GFP_NORETRY as intended; but the corresponding swap vector pages
got allocated without it, leaving a small possibility of OOM.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:33 +0000 (17:12 -0700)]
tmpfs: refine shmem_file_splice_read
Tidy up shmem_file_splice_read():
Remove readahead: okay, we could implement shmem readahead on swap,
but have never done so before, swap being the slow exceptional path.
Use shmem_getpage() instead of find_or_create_page() plus ->readpage().
Remove several comments: sorry, I found them more distracting than
helpful, and this will not be the reference version of splice_read().
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:32 +0000 (17:12 -0700)]
tmpfs: clone shmem_file_splice_read()
Copy __generic_file_splice_read() and generic_file_splice_read() from
fs/splice.c to shmem_file_splice_read() in mm/shmem.c. Make
page_cache_pipe_buf_ops and spd_release_page() accessible to it.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Benjamin Herrenschmidt [Tue, 26 Jul 2011 00:12:32 +0000 (17:12 -0700)]
mm/futex: fix futex writes on archs with SW tracking of dirty & young
I haven't reproduced it myself but the fail scenario is that on such
machines (notably ARM and some embedded powerpc), if you manage to hit
that futex path on a writable page whose dirty bit has gone from the PTE,
you'll livelock inside the kernel from what I can tell.
It will go into a loop of trying the atomic access, failing, trying gup to
"fix it up", getting success from gup, going back to the atomic access,
failing again because dirty wasn't fixed, etc...
So I think you essentially hang in the kernel.
The scenario is probably rare-ish because the affected architectures are
embedded and tend not to swap much (if at all), so we probably rarely hit
the case where dirty or young is missing, but I think Shan has a piece of
SW that can reliably reproduce it using a shared writable mapping & fork or
something like that.
On archs that use SW tracking of dirty & young, a page without dirty is
effectively mapped read-only, and a page without young is inaccessible in
the PTE.
Additionally, some architectures might lazily flush the TLB when relaxing
write protection (by doing only a local flush), and expect a fault to
invalidate the stale entry if it's still present on another processor.
The futex code assumes that if the "in_atomic()" access -EFAULT's, it can
"fix it up" by causing get_user_pages() which would then be equivalent to
taking the fault.
However that isn't the case. get_user_pages() will not call
handle_mm_fault() in the case where the PTE seems to have the right
permissions, regardless of the dirty and young state. It will eventually
update those bits ... in the struct page, but not in the PTE.
Additionally, it will not handle the lazy TLB flushing that can be
required by some architectures in the fault case.
Basically, gup is the wrong interface for the job. The patch provides a
more appropriate one which boils down to just calling handle_mm_fault()
since what we are trying to do is simulate a real page fault.
The futex code currently attempts to write to user memory within a
pagefault disabled section, and if that fails, tries to fix it up using
get_user_pages().
This doesn't work on archs where the dirty and young bits are maintained
by software, since they will gate access permission in the TLB, and will
not be updated by gup().
In addition, there's an expectation on some archs that a spurious write
fault triggers a local TLB flush, and that is missing from the picture as
well.
I decided that adding those "features" to gup() would be too much for this
already too complex function, and instead added a new simpler
fixup_user_fault() which is essentially a wrapper around handle_mm_fault()
which the futex code can call.
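A rough kernel-style sketch of the idea follows; the exact fixup_user_fault()
parameter list and flag name here are assumptions based on the description
above, not quoted from the patch:
/* hypothetical shape of the futex fault fixup after this change */
static int fault_in_user_writeable(u32 __user *uaddr)
{
    struct mm_struct *mm = current->mm;
    int ret;

    down_read(&mm->mmap_sem);
    /* simulate a real write fault: this path goes through handle_mm_fault() */
    ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
                           FAULT_FLAG_WRITE);
    up_read(&mm->mmap_sem);

    return ret < 0 ? ret : 0;
}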
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix some nits Darren saw, fiddle comment layout]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reported-by: Shan Hai <haishan.bai@gmail.com>
Tested-by: Shan Hai <haishan.bai@gmail.com>
Cc: David Laight <David.Laight@ACULAB.COM>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Darren Hart <darren.hart@intel.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Konstantin Khlebnikov [Tue, 26 Jul 2011 00:12:31 +0000 (17:12 -0700)]
mm: remove useless rcu lock-unlock from mapping_tagged()
radix_tree_tagged() is lockless - it reads from a member of the radix-tree
root node. It does not require any protection.
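A sketch of what the simplified helper might look like after this change
(assuming mapping_tagged() is the thin wrapper described above):
int mapping_tagged(struct address_space *mapping, int tag)
{
    /* radix_tree_tagged() only reads the root node's tag bits */
    return radix_tree_tagged(&mapping->page_tree, tag);
}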
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Tue, 26 Jul 2011 00:12:30 +0000 (17:12 -0700)]
mm: page allocator: reconsider zones for allocation after direct reclaim
With zone_reclaim_mode enabled, it's possible for zones to be considered
full in the zonelist_cache so they are skipped in the future. If the
process enters direct reclaim, the ZLC may still consider zones to be full
even after reclaiming pages. Reconsider all zones for allocation if
direct reclaim returns successfully.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mel Gorman [Tue, 26 Jul 2011 00:12:29 +0000 (17:12 -0700)]
mm: page allocator: initialise ZLC for first zone eligible for zone_reclaim
There have been a small number of complaints about significant stalls
while copying large amounts of data on NUMA machines reported on a
distribution bugzilla. In these cases, zone_reclaim was enabled by
default due to large NUMA distances. In general, the complaints have not
been about the workload itself unless it was a file server (in which case
the recommendation was disable zone_reclaim).
The stalls are mostly due to significant amounts of time spent scanning
the preferred zone for pages to free. After a failure, it might fallback
to another node (as zonelists are often node-ordered rather than
zone-ordered) but stall quickly again when the next allocation attempt
occurs. In bad cases, each page allocated results in a full scan of the
preferred zone.
Patch 1 checks the preferred zone for recent allocation failure
which is particularly important if zone_reclaim has failed
recently. This avoids rescanning the zone in the near future and
instead falling back to another node. This may hurt node locality
in some cases but a failure to zone_reclaim is more expensive than
a remote access.
Patch 2 clears the zlc information after direct reclaim.
Otherwise, zone_reclaim can mark zones full, direct reclaim can
reclaim enough pages but the zone is still not considered for
allocation.
This was tested on a 24-thread 2-node x86_64 machine. The tests were
focused on large amounts of IO. All tests were bound to the CPUs on
node-0 to avoid disturbances due to processes being scheduled on different
nodes. The kernels tested are
3.0-rc6-vanilla Vanilla 3.0-rc6
zlcfirst Patch 1 applied
zlcreconsider Patches 1+2 applied
FS-Mark
./fs_mark -d /tmp/fsmark-10813 -D 100 -N 5000 -n 208 -L 35 -t 24 -S0 -s 524288
                   fsmark-3.0-rc6             3.0-rc6              3.0-rc6
                          vanilla             zlcfirst             zlcreconsider
Files/s  min          54.90 ( 0.00%)       49.80 (-10.24%)       49.10 (-11.81%)
Files/s  mean        100.11 ( 0.00%)      135.17 ( 25.94%)      146.93 ( 31.87%)
Files/s  stddev       57.51 ( 0.00%)      138.97 ( 58.62%)      158.69 ( 63.76%)
Files/s  max         361.10 ( 0.00%)      834.40 ( 56.72%)      802.40 ( 55.00%)
Overhead min       76704.00 ( 0.00%)    76501.00 (  0.27%)    77784.00 ( -1.39%)
Overhead mean    1485356.51 ( 0.00%)  1035797.83 ( 43.40%)  1594680.26 ( -6.86%)
Overhead stddev  1848122.53 ( 0.00%)   881489.88 (109.66%)  1772354.90 (  4.27%)
Overhead max     7989060.00 ( 0.00%)  3369118.00 (137.13%) 10135324.00 (-21.18%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)        501.49    493.91    499.93
Total Elapsed Time (seconds)               2451.57   2257.48   2215.92
MMTests Statistics: vmstat
Page Ins                                     46268     63840     66008
Page Outs                                 90821596  90671128  88043732
Swap Ins                                         0         0         0
Swap Outs                                        0         0         0
Direct pages scanned                      13091697   8966863   8971790
Kswapd pages scanned                             0   1830011   1831116
Kswapd pages reclaimed                           0   1829068   1829930
Direct pages reclaimed                    13037777   8956828   8648314
Kswapd efficiency                             100%       99%       99%
Kswapd velocity                              0.000   810.643   826.346
Direct efficiency                              99%       99%       96%
Direct velocity                           5340.128  3972.068  4048.788
Percentage direct scans                       100%       83%       83%
Page writes by reclaim                           0         3         0
Slabs scanned                               796672    720640    720256
Direct inode steals                        7422667   7160012   7088638
Kswapd inode steals                              0   1736840   2021238
Test completes far faster with a large increase in the number of files
created per second. Standard deviation is high as a small number of
iterations were much higher than the mean. The number of pages scanned by
zone_reclaim is reduced and kswapd is used for more work.
LARGE DD
                               3.0-rc6       3.0-rc6       3.0-rc6
                               vanilla       zlcfirst      zlcreconsider
download tar           59 ( 0.00%)    59 ( 0.00%)    55 ( 7.27%)
dd source files       527 ( 0.00%)   296 (78.04%)   320 (64.69%)
delete source          36 ( 0.00%)    19 (89.47%)    20 (80.00%)
MMTests Statistics: duration
User/Sys Time Running Test (seconds)        125.03    118.98    122.01
Total Elapsed Time (seconds)                624.56    375.02    398.06
MMTests Statistics: vmstat
Page Ins                                   3594216    439368    407032
Page Outs                                 23380832  23380488  23377444
Swap Ins                                         0         0         0
Swap Outs                                        0       436       287
Direct pages scanned                      17482342  69315973  82864918
Kswapd pages scanned                             0    519123    575425
Kswapd pages reclaimed                           0    466501    522487
Direct pages reclaimed                     5858054   2732949   2712547
Kswapd efficiency                             100%       89%       90%
Kswapd velocity                              0.000  1384.254  1445.574
Direct efficiency                              33%        3%        3%
Direct velocity                          27991.453 184832.737 208171.929
Percentage direct scans                       100%       99%       99%
Page writes by reclaim                           0      5082     13917
Slabs scanned                                17280     29952     35328
Direct inode steals                         115257   1431122    332201
Kswapd inode steals                              0         0    979532
This test downloads a large tarfile and copies it with dd a number of
times - similar to the most recent bug report I've dealt with. Time to
completion is reduced. The number of pages scanned directly is still
disturbingly high with a low efficiency but this is likely due to the
number of dirty pages encountered. The figures could probably be improved
with more work around how kswapd is used and how dirty pages are handled
but that is separate work and this result is significant on its own.
Streaming Mapped Writer
MMTests Statistics: duration
User/Sys Time Running Test (seconds)        124.47    111.67    112.64
Total Elapsed Time (seconds)               2138.14   1816.30   1867.56
MMTests Statistics: vmstat
Page Ins                                     90760     89124     89516
Page Outs                                121028340 120199524 120736696
Swap Ins                                         0        86        55
Swap Outs                                        0         0         0
Direct pages scanned                     114989363  96461439  96330619
Kswapd pages scanned                      56430948  56965763  57075875
Kswapd pages reclaimed                    27743219  27752044  27766606
Direct pages reclaimed                       49777     46884     36655
Kswapd efficiency                              49%       48%       48%
Kswapd velocity                          26392.541 31363.631 30561.736
Direct efficiency                               0%        0%        0%
Direct velocity                          53780.091 53108.759 51581.004
Percentage direct scans                        67%       62%       62%
Page writes by reclaim                         385       122      1513
Slabs scanned                                43008     39040     42112
Direct inode steals                              0        10         8
Kswapd inode steals                            733       534       477
This test just creates a large file mapping and writes to it linearly.
Time to completion is again reduced.
The gains are mostly down to two things. In many cases, there is less
scanning as zone_reclaim simply gives up faster due to recent failures.
The second reason is that memory is used more efficiently. Instead of
scanning the preferred zone every time, the allocator falls back to
another zone and uses it instead, improving overall memory utilisation.
This patch: initialise ZLC for first zone eligible for zone_reclaim.
The zonelist cache (ZLC) is used among other things to record if
zone_reclaim() failed for a particular zone recently. The intention is to
avoid a high cost scanning extremely long zonelists or scanning within the
zone uselessly.
Currently the zonelist cache is setup only after the first zone has been
considered and zone_reclaim() has been called. The objective was to avoid
a costly setup, but zone_reclaim is itself quite expensive. If it is
failing regularly, such as when the first eligible zone has mostly mapped
pages, the cost in scanning and allocation stalls is far higher than the
ZLC initialisation step.
This patch initialises ZLC before the first eligible zone calls
zone_reclaim(). Once initialised, it is checked whether the zone failed
zone_reclaim recently. If it has, the zone is skipped. As the first zone
is now being checked, additional care has to be taken about zones marked
full. A zone can be marked "full" because it does not have enough unmapped
pages for zone_reclaim, but this is excessive, as direct reclaim or kswapd
may succeed where zone_reclaim fails. Only mark a zone "full" after
zone_reclaim fails if it failed to reclaim enough pages after scanning.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KAMEZAWA Hiroyuki [Tue, 26 Jul 2011 00:12:27 +0000 (17:12 -0700)]
mm: preallocate page before lock_page() at filemap COW
Currently we keep the faulted page locked throughout the whole __do_fault()
call (except for the page_mkwrite code path) after calling the file system's
fault code. If we do early COW, we allocate a new page which has to be
charged to a memcg (mem_cgroup_newpage_charge).
This function, however, might block for an unbounded amount of time if the
memcg OOM killer is disabled or a fork-bomb is running, because the only way
out of the OOM situation is either an external event or an OOM-situation fix.
In the end we keep the faulted page locked and block other processes from
faulting it in, which is not good at all, because we are basically punishing
a potentially unrelated process for an OOM condition in a different group
(I have seen a system get stuck because ld-2.11.1.so was locked).
We can reproduce the problem easily:
% cgcreate -g memory:A
% cgset -r memory.limit_in_bytes=64M A
% cgset -r memory.memsw.limit_in_bytes=64M A
% cd kernel_dir; cgexec -g memory:A make -j
Then the whole system will be live-locked until you kill 'make -j'
by hand (or push reboot...). This is because some important pages in a
shared library are locked.
Considering this again, the new page does not need to be allocated
with lock_page() held. And the usual page allocation may dive into a
long memory-reclaim loop while holding lock_page(), causing very long
latency.
There are 3 ways:
1. do allocation/charge before lock_page()
   Pros: simple; page allocation can be handled in the same manner, and
   this reduces the holding time of lock_page() in general.
   Cons: we do page allocation even if ->fault() returns an error.
2. do charge after unlock_page(); even if the charge fails, it's just OOM.
   Pros: no impact on the non-memcg path.
   Cons: the implementation requires special care of the LRU and we need to
   modify page_add_new_anon_rmap()...
3. do the unlock -> charge -> lock-again method.
   Pros: no impact on the non-memcg path.
   Cons: this may kill the LOCK_PAGE_RETRY optimization; we need to release
   the lock and get it again...
This patch moves the "charge" and memory allocation for the COW page
before lock_page(). Then we avoid scanning the LRU while holding a page
lock, latency under lock_page() is reduced, and the livelock above
disappears.
[akpm@linux-foundation.org: fix code layout]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reported-by: Lutz Vieweg <lvml@5t9.de>
Original-idea-by: Michal Hocko <mhocko@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:26 +0000 (17:12 -0700)]
tmpfs: no need to use i_lock
2.6.36's commit 7e496299d4d2 ("tmpfs: make tmpfs scalable with
percpu_counter for used blocks") used inode->i_lock in place of
sbinfo->stat_lock around i_blocks updates; but that was adverse to
scalability, and unnecessary, since info->lock is already held there in
the fast paths.
Remove those uses of i_lock, and add info->lock in the three error paths
where it's then needed across shmem_free_blocks(). It's not actually
needed across shmem_unacct_blocks(), but they're so often paired that it
looks wrong to split them apart.
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:25 +0000 (17:12 -0700)]
mm: pincer in truncate_inode_pages_range
truncate_inode_pages_range()'s final loop has a nice pincer property,
bringing start and end together, squeezing out the last pages. But the
range handling missed out on that, just sliding up the range, perhaps
letting pages come in behind it. Add one more test to give it the same
pincer effect.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:25 +0000 (17:12 -0700)]
mm: consistent truncate and invalidate loops
Make the pagevec_lookup loops in truncate_inode_pages_range(),
invalidate_mapping_pages() and invalidate_inode_pages2_range() more
consistent with each other.
They were relying upon page->index of an unlocked page, but apologizing
for it: accept it, embrace it, add comments and WARN_ONs, and simplify the
index handling.
invalidate_inode_pages2_range() had special handling for a wrapped
page->index + 1 = 0 case; but MAX_LFS_FILESIZE doesn't let us anywhere
near there, and a corrupt page->index in the radix_tree could cause more
trouble than that would catch. Remove that wrapped handling.
invalidate_inode_pages2_range() uses min() to limit the pagevec_lookup
when near the end of the range: copy that into the other two, although
it's less useful than you might think (it limits the use of the buffer,
rather than the indices looked up).
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:24 +0000 (17:12 -0700)]
mm: tidy vmtruncate_range and related functions
Use consistent variable names in truncate_pagecache(), truncate_setsize(),
vmtruncate() and vmtruncate_range().
unmap_mapping_range() and vmtruncate_range() have mismatched interfaces:
don't change either, but make the vmtruncates more precise about what they
expect unmap_mapping_range() to do.
vmtruncate_range() is currently called only with page-aligned start and
end+1: can handle unaligned start, but unaligned end+1 would hit BUG_ON in
truncate_inode_pages_range() (lacks partial clearing of the end page).
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:23 +0000 (17:12 -0700)]
mm: truncate functions are in truncate.c
Correct comment on truncate_inode_pages*() in linux/mm.h; and remove
declaration of page_unuse(), it didn't exist even in 2.2.26 or 2.4.0!
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Tue, 26 Jul 2011 00:12:23 +0000 (17:12 -0700)]
mm: cleanup descriptions of filler arg
The often-NULL data arg to read_cache_page() and read_mapping_page()
functions is misdescribed as "destination for read data": no, it's the
first arg to the filler function, often struct file * to ->readpage().
Satisfy checkpatch.pl on those filler prototypes, and tidy up the
declarations in linux/pagemap.h.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David S. Miller [Tue, 26 Jul 2011 00:12:22 +0000 (17:12 -0700)]
sparc64: implement get_user_pages_fast()
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David S. Miller [Tue, 26 Jul 2011 00:12:21 +0000 (17:12 -0700)]
sparc64: add support for _PAGE_SPECIAL
Luckily there are still a few software PTE bits remaining and they even
match up in both the sun4u and sun4v pte layouts.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David S. Miller [Tue, 26 Jul 2011 00:12:21 +0000 (17:12 -0700)]
sparc64: use RCU page table freeing
Make use of the generic RCU page table freeing on Sparc64; doing so allows
race-free software page-table walkers like gup_fast().
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David S. Miller [Tue, 26 Jul 2011 00:12:20 +0000 (17:12 -0700)]
sparc64: kill page table quicklists
With the recent mmu_gather changes that included generic RCU freeing of
page-tables, it is now quite straightforward to implement gup_fast() on
sparc64.
This patch:
Remove the page table quicklists. They are pointless and make it harder
to use RCU page table freeing and share code with other architectures.
BTW, this is the second time this has happened, see commit 3c936465249f
("[SPARC64]: Kill pgtable quicklists and use SLAB.")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dmitry Fink [Tue, 26 Jul 2011 00:12:19 +0000 (17:12 -0700)]
mmap: fix and tidy up overcommit page arithmetic
- shmem pages are not immediately available, but they are not potentially
available either: even if we swap them out, they will just relocate from
memory into swap, and the total amount of immediately and potentially
available memory is not affected, so we shouldn't count them as
potentially free in the first place.
- nr_free_pages() is no longer an expensive operation, so there is no need
to split the decision-making into two halves and repeat code.
Signed-off-by: Dmitry Fink <dmitry.fink@palm.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Tue, 26 Jul 2011 00:12:18 +0000 (17:12 -0700)]
mm/memblock.c: avoid abuse of RED_INACTIVE
RED_INACTIVE is a slab thing, and reusing it for memblock was
inappropriate, because memblock is dealing with phys_addr_t's which have a
Kconfigurable sizeof().
Create a new poison type for this application. Fixes the sparse warning
warning: cast truncates bits from constant value (9f911029d74e35b becomes 9d74e35b)
Reported-by: H Hartley Sweeten <hartleys@visionengravers.com>
Tested-by: H Hartley Sweeten <hartleys@visionengravers.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Tue, 26 Jul 2011 00:12:18 +0000 (17:12 -0700)]
oom: make deprecated use of oom_adj more verbose
/proc/pid/oom_adj is deprecated and scheduled for removal in August 2012
according to Documentation/feature-removal-schedule.txt.
This patch makes the warning more verbose by making it appear as a more
serious problem (the presence of a stack trace and being multiline should
attract more attention) so that applications still using the old interface
can get fixed.
Very popular users of the old interface have been converted since the oom
killer rewrite has been introduced. udevd switched to the
/proc/pid/oom_score_adj interface for v162, kde switched in 4.6.1, and
opensshd switched in 5.7p1.
At the start of 2012, this should be changed into a WARN() to emit all
such incidents and then finally remove the tunable in August 2012 as
scheduled.
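For reference, a small userspace sketch of moving to the newer interface
(the file name and value range are the documented oom_score_adj ones; this
is an illustration, not part of the patch):
#include <stdio.h>

int main(void)
{
    /* oom_score_adj replaces oom_adj; its range is -1000..1000, and
     * lowering it below the current value needs CAP_SYS_RESOURCE. */
    FILE *f = fopen("/proc/self/oom_score_adj", "w");

    if (!f) {
        perror("oom_score_adj");
        return 1;
    }
    fprintf(f, "%d\n", 500);   /* make this task more likely to be killed */
    fclose(f);
    return 0;
}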
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Tue, 26 Jul 2011 00:12:17 +0000 (17:12 -0700)]
oom: remove references to old badness() function
The badness() function in the oom killer was renamed to oom_badness() in
a63d83f427fb ("oom: badness heuristic rewrite") for clarity, since it is a
globally exported function.
The prototype for the old function still existed in linux/oom.h, so remove
it. There are no existing users.
Also fixes documentation and comment references to badness() and adjusts
them accordingly.
Signed-off-by: David Rientjes <rientjes@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Tue, 26 Jul 2011 00:12:16 +0000 (17:12 -0700)]
mm/memory.c: remove ZAP_BLOCK_SIZE
ZAP_BLOCK_SIZE became unused in the preemptible-mmu_gather work ("mm:
Remove i_mmap_lock lockbreak"). So zap it.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Chris Forbes [Tue, 26 Jul 2011 00:12:14 +0000 (17:12 -0700)]
mm: hugetlb: fix coding style issues
Fix coding style issues flagged by checkpatch.pl
Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Acked-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Chris Wright [Tue, 26 Jul 2011 00:12:14 +0000 (17:12 -0700)]
mm/huge_memory.c: minor lock simplification in __khugepaged_exit
The lock is released first thing in all three branches. Simplify this by
unconditionally releasing the lock, and remove the else clause which was
only there to ensure the lock was released.
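The same pattern in a generic, userspace form (purely illustrative; this is
not the __khugepaged_exit() code itself):
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int state;

void exit_path(void)
{
    pthread_mutex_lock(&lock);
    int snapshot = state;
    pthread_mutex_unlock(&lock);   /* released once, unconditionally */

    if (snapshot == 1) {
        /* ... first branch ... */
    } else if (snapshot == 2) {
        /* ... second branch ... */
    }
    /* no trailing else needed just to drop the lock */
}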
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Daniel Kiper [Tue, 26 Jul 2011 00:12:13 +0000 (17:12 -0700)]
mm/page_cgroup.c: simplify code by using SECTION_ALIGN_UP() and SECTION_ALIGN_DOWN() macros
Commit a539f3533b78e3 ("mm: add SECTION_ALIGN_UP() and SECTION_ALIGN_DOWN()
macro") introduced the SECTION_ALIGN_UP() and SECTION_ALIGN_DOWN() macros.
Use those macros to increase code readability.
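A small standalone demo of the alignment helpers (the macro bodies below
follow the usual mask-based form; PAGES_PER_SECTION is a made-up value for
illustration only):
#include <stdio.h>

#define PAGES_PER_SECTION       (1UL << 15)   /* illustrative; arch-dependent */
#define PAGE_SECTION_MASK       (~(PAGES_PER_SECTION - 1))
#define SECTION_ALIGN_UP(pfn)   (((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)
#define SECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SECTION_MASK)

int main(void)
{
    unsigned long pfn = 100000;

    printf("pfn %lu -> down %lu, up %lu\n",
           pfn, SECTION_ALIGN_DOWN(pfn), SECTION_ALIGN_UP(pfn));
    return 0;
}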
Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
WANG Cong [Tue, 26 Jul 2011 00:12:12 +0000 (17:12 -0700)]
mm: remove the leftovers of noswapaccount
In commit a2c8990aed5ab ("memsw: remove noswapaccount kernel parameter"),
Michal forgot to remove some leftover pieces of noswapaccount in the tree;
this patch removes them all.
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:11 +0000 (17:12 -0700)]
pagewalk: fix code comment for THP
Commit bae9c19bf1 ("thp: split_huge_page_mm/vma") changed the locking
behavior of walk_page_range(). Thus this patch changes the comment too.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:10 +0000 (17:12 -0700)]
pagewalk: add locking-rule comments
Originally, walk_hugetlb_range() didn't require the caller to take any lock.
But commit d33b9f45bd ("mm: hugetlb: fix hugepage memory leak in
walk_page_range") changed that rule, because it added a find_vma() call in
walk_hugetlb_range().
Any commit that changes a locking rule should document it too.
[akpm@linux-foundation.org: clarify comment]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:09 +0000 (17:12 -0700)]
pagewalk: don't look up vma if walk->hugetlb_entry is unused
Currently, walk_page_range() calls find_vma() on every page-table walk
iteration, but that's completely unnecessary if walk->hugetlb_entry is
unused. And we shouldn't assume find_vma() is a lightweight operation. So
this patch checks walk->hugetlb_entry and avoids the find_vma() call where
possible.
This patch also makes some cleanups: 1) remove the ugly uninitialized_var(),
and 2) remove the #ifdef from the function body.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:09 +0000 (17:12 -0700)]
pagewalk: fix walk_page_range() to check the find_vma() result properly
The doc of find_vma() says,
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
(snip)
Thus, the caller should confirm whether the returned vma matches the desired one.
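In other words, a caller that needs the vma to actually contain addr has to
check vm_start as well; a kernel-style sketch of the idiom (illustrative,
not the patch hunk itself):
vma = find_vma(mm, addr);
if (vma && vma->vm_start <= addr) {
    /* addr really lies inside this vma */
} else {
    /* no vma, or addr falls in a hole below vma->vm_start */
}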
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hiroyuki Kamezawa <kamezawa.hiroyuki@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:08 +0000 (17:12 -0700)]
mm: swap-token: add a comment for priority aging
Document some swap token aging design decisions.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:07 +0000 (17:12 -0700)]
mm: swap-token: makes global variables to function local
global_faults and last_aging are only used in grab_swap_token(). Move
them into grab_swap_token().
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KOSAKI Motohiro [Tue, 26 Jul 2011 00:12:06 +0000 (17:12 -0700)]
mm: swap-token: fix dead link
http://www.cs.wm.edu/~sjiang/token.pdf is now dead. Replace it with a live
alternative.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Daniel Kiper [Tue, 26 Jul 2011 00:12:06 +0000 (17:12 -0700)]
xen/balloon: memory hotplug support for Xen balloon driver
Memory hotplug support for the Xen balloon driver. It should be mentioned
that hotplugged memory is not onlined automatically; it should be onlined
by the user through the standard sysfs interface.
Memory can be hotplugged in the following steps:
1) dom0: xl mem-max <domU> <maxmem>
where <maxmem> is >= requested memory size,
2) dom0: xl mem-set <domU> <memory>
where <memory> is the requested memory size; alternatively, memory
could be added by writing the proper value to
/sys/devices/system/xen_memory/xen_memory0/target or
/sys/devices/system/xen_memory/xen_memory0/target_kb on domU,
3) domU: for i in /sys/devices/system/memory/memory*/state; do \
[ "`cat "$i"`" = offline ] && echo online > "$i"; done
Memory can be onlined automatically on domU by adding the following line to
the udev rules:
SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
In that case step 3 should be omitted.
Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Daniel Kiper [Tue, 26 Jul 2011 00:12:05 +0000 (17:12 -0700)]
mm: extend memory hotplug API to allow memory hotplug in virtual machines
This patch adds online_page_callback and the appropriate functions for
registering/unregistering online-page callbacks. It allows some
machine-specific tasks to be done during the online-page stage, which is
required to implement memory hotplug in virtual machines. Currently this
patch is required by the latest memory hotplug support for the Xen balloon
driver patch, which will be posted soon.
Additionally, the original online_page() function was split into the
following functions doing "atomic" operations (see the sketch after this list):
- __online_page_set_limits() - set new limits for the memory management code,
- __online_page_increment_counters() - increment totalram_pages and totalhigh_pages,
- __online_page_free() - free the page to the allocator.
It was done to:
- not duplicate existing code,
- ease hotplug code development by the use of a well-defined interface,
- avoid stupid bugs which are unavoidable when the same code
(by design) is developed in many places.
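A hedged sketch of how a driver might plug into this; the registration
helpers named here (set_online_page_callback()/restore_online_page_callback())
are assumptions based on the description above:
#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/module.h>

static void my_online_page(struct page *page)
{
    __online_page_set_limits(page);
    /* machine-specific handling of the page would go here, instead of
     * handing it straight to the allocator... */
    __online_page_increment_counters(page);
    __online_page_free(page);
}

static int __init my_init(void)
{
    return set_online_page_callback(&my_online_page);
}

static void __exit my_exit(void)
{
    restore_online_page_callback(&my_online_page);
}

module_init(my_init);
module_exit(my_exit);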
[akpm@linux-foundation.org: use explicit indirect-call syntax]
Signed-off-by: Daniel Kiper <dkiper@net-space.pl>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 26 Jul 2011 00:12:01 +0000 (17:12 -0700)]
backlight: set backlight type and max_brightness before backlights are registered
Since commit a19a6ee ("backlight: Allow properties to be passed at
registration") and commit bb7ca74 ("backlight: add backlight type"), we can
set the backlight type and max_brightness before backlights are registered.
Some newly added drivers did not set them properly; let's fix that.
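The fix in each driver amounts to filling a backlight_properties struct
before registration; a hedged sketch (the "mydrv" names and MY_MAX_BRIGHTNESS
are hypothetical, the fields follow the backlight API described by the two
commits above):
struct backlight_properties props;
struct backlight_device *bd;

memset(&props, 0, sizeof(props));
props.type = BACKLIGHT_RAW;                 /* or BACKLIGHT_PLATFORM, as appropriate */
props.max_brightness = MY_MAX_BRIGHTNESS;   /* hypothetical maximum */
bd = backlight_device_register("mydrv-bl", dev, priv, &mydrv_backlight_ops, &props);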
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Cc: Matthew Garrett <mjg@redhat.com>
Cc: Jingoo Han <jg1.han@samsung.com>
Cc: Donghwa Lee <dh09.lee@samsung.com>
Cc: InKi Dae <inki.dae@samsung.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jingoo Han [Tue, 26 Jul 2011 00:12:00 +0000 (17:12 -0700)]
backlight: add ams369fg06 amoled driver
Add the ams369fg06 AMOLED panel driver. The ams369fg06 AMOLED panel (480 x
800) driver uses a 3-wire SPI interface. The brightness can be controlled
by the gamma setting of the AMOLED panel.
[sfr@canb.auug.org.au: fix build error]
[axel.lin@gmail.com: unregister backlight device when unloading the module]
[axel.lin@gmail.com: staticize ams369fg06_shutdown]
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: anish singh <anish198519851985@gmail.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 26 Jul 2011 00:11:59 +0000 (17:11 -0700)]
drivers/video/backlight/adp8860_bl.c: remove a redundant assignment for max_brightness
We have already set props.max_brightness before registering the backlight device.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Lin [Tue, 26 Jul 2011 00:11:58 +0000 (17:11 -0700)]
drivers/video/backlight/ld9040.c: small fixes
- Fix checking of the wrong return value from backlight_device_register()
(see the sketch below).
- Properly free allocated resources in the ld9040_probe() error path and in
ld9040_remove().
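backlight_device_register() returns an ERR_PTR() on failure rather than
NULL, so assuming that is the mistake being referred to, the check needs to
look roughly like this (a sketch of the pattern, not the exact ld9040 hunk):
bd = backlight_device_register("ld9040-bl", dev, priv, &ops, &props);
if (IS_ERR(bd)) {
    ret = PTR_ERR(bd);
    goto err_free;   /* hypothetical label: release what probe allocated */
}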
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Cc: Donghwa Lee <dh09.lee@samsung.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Peter Zijlstra [Tue, 26 Jul 2011 00:11:57 +0000 (17:11 -0700)]
mm/backing-dev.c: reset bdi min_ratio in bdi_unregister()
Vito said:
: The system has many usb disks coming and going day to day, with their
: respective bdi's having min_ratio set to 1 when inserted. It works for
: some time until eventually min_ratio can no longer be set, even when the
: active set of bdi's seen in /sys/class/bdi/*/min_ratio doesn't add up to
: anywhere near 100.
:
: This then leads to an unrelated starvation problem caused by write-heavy
: fuse mounts being used atop the usb disks, a problem the min_ratio setting
: at the underlying devices bdi effectively prevents.
Fix this leakage by resetting the bdi min_ratio when unregistering the
BDI.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Vito Caputo <lkml@pengaru.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Alexander Stein [Tue, 26 Jul 2011 00:11:54 +0000 (17:11 -0700)]
drivers/misc/pch_phub.c: don't oops if dmi_get_system_info returns NULL
If dmi_get_system_info() returns NULL, pch_phub_probe() will dereference a
NULL pointer.
This oops was observed on an Atom-based board which has no BIOS, but a
bootloader which doesn't provide DMI data.
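The defensive pattern is simply to test the pointer before using it; a
hypothetical sketch (the DMI field, string, and quirk function compared
here are illustrative, not taken from pch_phub.c):
const char *name = dmi_get_system_info(DMI_SYS_VENDOR);

if (name && strstr(name, "SOME-BOARD"))   /* "SOME-BOARD" is hypothetical */
    apply_board_quirk();                   /* hypothetical helper */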
Signed-off-by: Alexander Stein <alexander.stein@systec-electronic.com>
Cc: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Cc: Greg KH <gregkh@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
WANG Cong [Tue, 26 Jul 2011 00:11:54 +0000 (17:11 -0700)]
xtensa: fix a build error in arch/xtensa/include/asm/uaccess.h
Fix the following build error:
arch/xtensa/include/asm/uaccess.h:403: error: implicit declaration of function 'prefetch'
arch/xtensa/include/asm/uaccess.h:412: error: implicit declaration of function 'prefetchw'
Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Dan Rosenberg [Tue, 26 Jul 2011 00:11:53 +0000 (17:11 -0700)]
xtensa: prevent arbitrary read in ptrace
Prevent an arbitrary kernel read. Check the user pointer with access_ok()
before copying data in.
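The guard boils down to the usual pattern below (a sketch; VERIFY_READ was
the access_ok() type argument at the time, and the buffer name is a
placeholder):
if (!access_ok(VERIFY_READ, uregs, sizeof(*uregs)))
    return -EFAULT;   /* reject the user pointer before __copy_from_user() */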
[akpm@linux-foundation.org: s/EIO/EFAULT/]
Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com>
Cc: Christian Zankel <chris@zankel.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ian Campbell [Tue, 26 Jul 2011 00:11:52 +0000 (17:11 -0700)]
mm: use const struct page for r/o page-flag accessor methods
In a subsequent patch I have a const struct page in my hand...
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ian Campbell [Tue, 26 Jul 2011 00:11:51 +0000 (17:11 -0700)]
mm: make some struct page's const
These uses are read-only and in a subsequent patch I have a const struct
page in my hand...
[akpm@linux-foundation.org: fix warnings in lowmem_page_address()]
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Becky Bruce [Tue, 26 Jul 2011 00:11:50 +0000 (17:11 -0700)]
hugetlb: add phys addr to struct huge_bootmem_page
This is needed on HIGHMEM systems - we don't always have a virtual
address so store the physical address and map it in as needed.
[akpm@linux-foundation.org: cleanup]
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Becky Bruce [Tue, 26 Jul 2011 00:11:49 +0000 (17:11 -0700)]
fs/hugetlbfs/inode.c: fix pgoff alignment checking on 32-bit
This:
vma->vm_pgoff & ~(huge_page_mask(h) >> PAGE_SHIFT)
is incorrect on 32-bit. It causes us to & the pgoff with something that
looks like this (for a 4m hugepage): 0xfff003ff. The mask should be
flipped and *then* shifted, to give you 0x0000_03ff.
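The arithmetic can be checked in a few lines of userspace C (32-bit values,
4 MB huge page, 4 KB base page, PAGE_SHIFT = 12 assumed):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const unsigned PAGE_SHIFT = 12;
    const uint32_t huge_page_mask = ~(uint32_t)((4u << 20) - 1);  /* 0xffc00000 */

    uint32_t wrong = ~(huge_page_mask >> PAGE_SHIFT);  /* shift then flip */
    uint32_t right = ~huge_page_mask >> PAGE_SHIFT;    /* flip then shift */

    printf("wrong mask 0x%08x, right mask 0x%08x\n", wrong, right);
    /* prints: wrong mask 0xfff003ff, right mask 0x000003ff */
    return 0;
}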
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Manfred Spraul [Tue, 26 Jul 2011 00:11:47 +0000 (17:11 -0700)]
ipc/sem.c: fix race with concurrent semtimedop() timeouts and IPC_RMID
If a semaphore array is removed and in parallel a sleeping task is woken
up (signal or timeout, does not matter), then the woken up task does not
wait until wake_up_sem_queue_do() is completed. This will cause crashes,
because wake_up_sem_queue_do() will read from a stale pointer.
The fix is simple: Regardless of anything, always call get_queue_result().
This function waits until wake_up_sem_queue_do() has finished its task.
Addresses https://bugzilla.kernel.org/show_bug.cgi?id=27142
Reported-by: Yuriy Yevtukhov <yuriy@ucoz.com>
Reported-by: Harald Laabs <kernel@dasr.de>
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: <stable@kernel.org> [2.6.35+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Mon, 25 Jul 2011 03:56:18 +0000 (20:56 -0700)]
Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq:
[CPUFREQ] s5pv210: make needlessly global symbols static
[CPUFREQ] exynos4210: make needlessly global symbols static
[CPUFREQ] S3C6410: Add some lower frequencies for 800MHz base clock operation
[CPUFREQ] S5PV210: Add reboot notifier to prevent system hang
[CPUFREQ] S5PV210: Adjust udelay prior to voltage scaling down
[CPUFREQ] S5PV210: Lock a mutex while changing the cpu frequency
[CPUFREQ] S5PV210: Add pm_notifier to prevent system unstable
[CPUFREQ] S5PV210: Add arm/int voltage control support
[CPUFREQ] S5PV210: Add additional symantics for "relation" in cpufreq with pm
[CPUFREQ] S3C64xx: Notify transition complete as soon as frequency changed
[CPUFREQ] S3C6410: Support 800MHz operation in cpufreq
[CPUFREQ] s5pv210-cpufreq.c: Add missing clk_put
[CPUFREQ] Move compile for S3C64XX cpufreq to /drivers/cpufreq
[CPUFREQ] Remove some vi noise that escaped into the Makefile.
[CPUFREQ] Move ARM Samsung cpufreq drivers to drivers/cpufreq/
[CPUFREQ/S3C64xx] Move S3C64xx CPUfreq driver into drivers/cpufreq
[CPUFREQ] Handle CPUs with different capabilities in acpi-cpufreq
Linus Torvalds [Mon, 25 Jul 2011 03:55:48 +0000 (20:55 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (145 commits)
bnx2x: use pci_pcie_cap()
bnx2x: fix bnx2x_stop_on_error flow in bnx2x_sp_rtnl_task
bnx2x: enable internal target-read for 57712 and up only
bnx2x: count statistic ramrods on EQ to prevent MC assert
bnx2x: fix loopback for non 10G link
bnx2x: dcb - send all unmapped priorities to same COS as L2
iwlwifi: Fix build with CONFIG_PM disabled.
gre: fix improper error handling
ipv4: use RT_TOS after some rt_tos conversions
via-velocity: remove duplicated #include
qlge: remove duplicated #include
igb: remove duplicated #include
can: c_can: remove duplicated #include
bnad: remove duplicated #include
net: allow netif_carrier to be called safely from IRQ
bna: Header File Consolidation
bna: HW Error Counter Fix
bna: Add HW Semaphore Unlock Logic
bna: IOC Event Name Change
bna: Mboxq Flush When IOC Disabled
...
Stephen Rothwell [Mon, 25 Jul 2011 01:13:13 +0000 (11:13 +1000)]
gma500: udelay(20000) is too large
So use mdelay(20) instead. Fixes this build error:
ERROR: "__bad_udelay" [drivers/staging/gma500/psb_gfx.ko] undefined!
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 24 Jul 2011 21:34:01 +0000 (14:34 -0700)]
Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
ata: PATA_ARASAN_CF depends on DMADEVICES
ata: remove unnecessary code
[libata] Prevent warning during PMP error recovery
ahci: RAID-mode SATA patch for Intel Panther Point DeviceIDs
pata_it821x: Fix RAID type display, by adding missing comma
sata_dwc_460ex: fix error path
ahci: Enable SB600 64bit DMA on Asus M3A
libata: report link resume failure as KERN_WARNING instead of ERR
ahci: move ahci_sb600_softreset to libahci.c and rename it
libata: leave port thawed after reset failure
ata: sata_via: Use dev_dbg
ata: Add and use ata_print_version_once
ata: Convert ata_<foo>_printk(KERN_<LEVEL> to ata_<foo>_<level>
ata: Convert dev_printk(KERN_<LEVEL> to dev_<level>(
Vladislav Zolotarov [Sun, 24 Jul 2011 03:58:38 +0000 (03:58 +0000)]
bnx2x: use pci_pcie_cap()
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladislav Zolotarov [Sun, 24 Jul 2011 03:57:46 +0000 (03:57 +0000)]
bnx2x: fix bnx2x_stop_on_error flow in bnx2x_sp_rtnl_task
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shmulik Ravid [Sun, 24 Jul 2011 03:57:04 +0000 (03:57 +0000)]
bnx2x: enable internal target-read for 57712 and up only
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Shmulik Ravid <shmulikr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladislav Zolotarov [Sun, 24 Jul 2011 03:54:17 +0000 (03:54 +0000)]
bnx2x: count statistic ramrods on EQ to prevent MC assert
This patch includes:
- Count statistics ramrods as EQ ramrods, the way they should be. This
accounting is meant to prevent MC asserts in case of software bugs.
- Fixes to the debug facilities that were added while working on one such
bug.
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yaniv Rosner [Sun, 24 Jul 2011 03:53:21 +0000 (03:53 +0000)]
bnx2x: fix loopback for non 10G link
Also fixes minor formatting in that function.
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Yaniv Rosner <yanivr@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dmitry Kravkov [Sun, 24 Jul 2011 04:09:43 +0000 (04:09 +0000)]
bnx2x: dcb - send all unmapped priorities to same COS as L2
As a result of DCBX negotiation some priorities may be left untouched and still
unmapped to any COS; instead of sending them to COS0 we assign them
to the same COS as L2 traffic, to avoid collisions with the storage class of
service.
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 24 Jul 2011 20:09:32 +0000 (13:09 -0700)]
iwlwifi: Fix build with CONFIG_PM disabled.
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Sun, 24 Jul 2011 17:20:54 +0000 (10:20 -0700)]
Merge branch 'for-linus' of /home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (237 commits)
ARM: 7004/1: fix traps.h compile warnings
ARM: 6998/2: kernel: use proper memory barriers for bitops
ARM: 6997/1: ep93xx: increase NR_BANKS to 16 for support of 128MB RAM
ARM: Fix build errors caused by adding generic macros
ARM: CPU hotplug: ensure we migrate all IRQs off a downed CPU
ARM: CPU hotplug: pass in proper affinity mask on IRQ migration
ARM: GIC: avoid routing interrupts to offline CPUs
ARM: CPU hotplug: fix abuse of irqdesc->node
ARM: 6981/2: mmci: adjust calculation of f_min
ARM: 7000/1: LPAE: Use long long printk format for displaying the pud
ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state
ARM: btc: avoid invalidating the branch target cache on kernel TLB maintanence
ARM: ARM_DMA_ZONE_SIZE is no more
ARM: mach-shark: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-sa1100: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-realview: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-pxa: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-ixp4xx: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-h720x: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-davinci: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
...
Sasha Levin [Sun, 24 Jul 2011 08:23:20 +0000 (11:23 +0300)]
Documentation: Update augmented rbtree documentation
The current documentation refers to the old method of handling augmented
trees. Update it to correspond with the changes made in commit
b945d6b2554d ("rbtree: Undo augmented trees performance damage
and regression").
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Lasse Collin [Sun, 24 Jul 2011 16:54:25 +0000 (19:54 +0300)]
XZ: Fix missing <linux/kernel.h> include
<linux/kernel.h> is needed for min_t. The old version
happened to work on x86 because <asm/unaligned.h>
indirectly includes <linux/kernel.h>, but it didn't
work on ARM.
<linux/kernel.h> includes <asm/byteorder.h> so it's
not necessary to include it explicitly anymore.
Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Cc: stable <stable@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 24 Jul 2011 16:55:45 +0000 (09:55 -0700)]
Merge branch 'for-linus' of git://git390.marist.edu/linux-2.6
* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6: (21 commits)
[S390] use siginfo for sigtrap signals
[S390] dasd: add enhanced DASD statistics interface
[S390] kvm: make sigp emerg smp capable
[S390] disable cpu measurement alerts on a dying cpu
[S390] initial cr0 bits
[S390] iucv cr0 enablement bit
[S390] race safe external interrupt registration
[S390] remove tape block docu
[S390] ap: toleration support for ap device type 10
[S390] cleanup program check handler prototypes
[S390] remove kvm mmu reload on s390
[S390] Use gmap translation for accessing guest memory
[S390] use gmap address spaces for kvm guest images
[S390] kvm guest address space mapping
[S390] fix s390 assembler code alignments
[S390] move sie code to entry.S
[S390] kvm: handle tprot intercepts
[S390] qdio: clear shared DSCI before scheduling the queue handler
[S390] reference bit testing for unmapped pages
[S390] irqs: Do not trace arch_local_{*,irq_*} functions
...
Linus Torvalds [Sun, 24 Jul 2011 16:55:18 +0000 (09:55 -0700)]
Merge branch 'for-upstream' of git://openrisc.net/jonas/linux
* 'for-upstream' of git://openrisc.net/jonas/linux: (24 commits)
OpenRISC: Add MAINTAINERS entry
OpenRISC: Miscellaneous
OpenRISC: Library routines
OpenRISC: Headers
OpenRISC: Traps
OpenRISC: Module support
OpenRISC: GPIO
OpenRISC: Scheduling/Process management
OpenRISC: Idle/Power management
OpenRISC: System calls
OpenRISC: IRQ
OpenRISC: Timekeeping
OpenRISC: DMA
OpenRISC: PTrace
OpenRISC: Build infrastructure
OpenRISC: Signal handling
OpenRISC: Memory management
OpenRISC: Device tree
OpenRISC: Boot code
iomap: make IOPORT/PCI mapping functions conditional
...
Linus Torvalds [Sun, 24 Jul 2011 16:54:54 +0000 (09:54 -0700)]
Merge git://git./linux/kernel/git/rusty/linux-2.6-for-linus
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
modpost: Fix modpost's license checking V3
module: add /sys/module/<name>/uevent files
module: change attr callbacks to take struct module_kobject
modules: make arch's use default loader hooks
modules: add default loader hook implementations
param: fix return value handling in param_set_*
Linus Torvalds [Sun, 24 Jul 2011 16:07:03 +0000 (09:07 -0700)]
Merge branch 'kvm-updates/3.1' of git://git./virt/kvm/kvm
* 'kvm-updates/3.1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (143 commits)
KVM: IOMMU: Disable device assignment without interrupt remapping
KVM: MMU: trace mmio page fault
KVM: MMU: mmio page fault support
KVM: MMU: reorganize struct kvm_shadow_walk_iterator
KVM: MMU: lockless walking shadow page table
KVM: MMU: do not need atomicly to set/clear spte
KVM: MMU: introduce the rules to modify shadow page table
KVM: MMU: abstract some functions to handle fault pfn
KVM: MMU: filter out the mmio pfn from the fault pfn
KVM: MMU: remove bypass_guest_pf
KVM: MMU: split kvm_mmu_free_page
KVM: MMU: count used shadow pages on prepareing path
KVM: MMU: rename 'pt_write' to 'emulate'
KVM: MMU: cleanup for FNAME(fetch)
KVM: MMU: optimize to handle dirty bit
KVM: MMU: cache mmio info on page fault path
KVM: x86: introduce vcpu_mmio_gva_to_gpa to cleanup the code
KVM: MMU: do not update slot bitmap if spte is nonpresent
KVM: MMU: fix walking shadow page table
KVM guest: KVM Steal time registration
...
Linus Torvalds [Sun, 24 Jul 2011 16:06:47 +0000 (09:06 -0700)]
Merge branch 'upstream/xen-tracing2' of git://git./linux/kernel/git/jeremy/xen
* 'upstream/xen-tracing2' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen:
xen/trace: use class for multicall trace
xen/trace: convert mmu events to use DECLARE_EVENT_CLASS()/DEFINE_EVENT()
xen/multicall: move *idx fields to start of mc_buffer
xen/multicall: special-case singleton hypercalls
xen/multicalls: add unlikely around slowpath in __xen_mc_entry()
xen/multicalls: disable MC_DEBUG
xen/mmu: tune pgtable alloc/release
xen/mmu: use extend_args for more mmuext updates
xen/trace: add tlb flush tracepoints
xen/trace: add segment desc tracing
xen/trace: add xen_pgd_(un)pin tracepoints
xen/trace: add ptpage alloc/release tracepoints
xen/trace: add mmu tracepoints
xen/trace: add multicall tracing
xen/trace: set up tracepoint skeleton
xen/multicalls: remove debugfs stats
trace/xen: add skeleton for Xen trace events
Linus Torvalds [Sun, 24 Jul 2011 16:05:32 +0000 (09:05 -0700)]
Merge git://git./linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (34 commits)
crypto: caam - ablkcipher support
crypto: caam - faster aead implementation
crypto: caam - structure renaming
crypto: caam - shorter names
crypto: talitos - don't bad_key in ablkcipher setkey
crypto: talitos - remove unused giv from ablkcipher methods
crypto: talitos - don't set done notification in hot path
crypto: talitos - ensure request ordering within a single tfm
crypto: gf128mul - fix call to memset()
crypto: s390 - support hardware accelerated SHA-224
crypto: algif_hash - Handle initial af_alg_make_sg error correctly
crypto: sha1_generic - use SHA1_BLOCK_SIZE
hwrng: ppc4xx - add support for ppc4xx TRNG
crypto: crypto4xx - Perform read/modify/write on device control register
crypto: caam - fix build warning when DEBUG_FS not configured
crypto: arc4 - Fixed coding style issues
crypto: crc32c - Fixed coding style issue
crypto: omap-sham - do not schedule tasklet if there is no active requests
crypto: omap-sham - clear device flags when finishing request
crypto: omap-sham - irq handler must not clear error code
...
Alessio Igor Bogani [Thu, 14 Jul 2011 06:51:16 +0000 (08:51 +0200)]
modpost: Fix modpost's license checking V3
Commit f02e8a6 sorts symbols, placing each of them in its own ELF section.
The sorting and merging into the canonical sections are done by the linker.
Unfortunately, to generate the Module.symvers file, modpost parses vmlinux
(already linked) and all module object files (which aren't linked yet).
The latter haven't been sanitized by the linker, which breaks modpost's
ability to detect the license of modules. This patch makes modpost aware of
the new exported-symbol layout.
Thanks to Arnaud Lacombe <lacombar@gmail.com> and Anders Kaseorg
<andersk@ksplice.com> for providing useful suggestions about code.
This work was supported by a hardware donation from the CE Linux Forum.
Reported-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Alessio Igor Bogani <abogani@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Kay Sievers [Sun, 24 Jul 2011 12:36:04 +0000 (22:06 +0930)]
module: add /sys/module/<name>/uevent files
Userspace wants to manage module parameters with udev rules.
This currently only works for loaded modules, but not for
built-in ones.
To allow access to the built-in modules we need to
re-trigger all module load events that happened before any
userspace was running. We already do the same thing for all
devices, subsystems(buses) and drivers.
This adds the currently missing /sys/module/<name>/uevent files
to all module entries.
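A hypothetical userspace sketch of such a re-trigger, assuming the new module uevent files accept an "add" write the way device uevent files do (names and paths below are illustrative):
#include <stdio.h>
static int retrigger_module_uevent(const char *name)
{
	char path[128];
	FILE *f;
	/* e.g. /sys/module/printk/uevent for a built-in "module" entry */
	snprintf(path, sizeof(path), "/sys/module/%s/uevent", name);
	f = fopen(path, "w");
	if (!f)
		return -1;	/* no such module entry, or insufficient permission */
	fputs("add\n", f);	/* ask the kernel to re-emit the missed add event */
	fclose(f);
	return 0;
}
int main(int argc, char **argv)
{
	return (argc > 1 && retrigger_module_uevent(argv[1]) == 0) ? 0 : 1;
}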
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (split & trivial fix)
Kay Sievers [Sun, 24 Jul 2011 12:36:04 +0000 (22:06 +0930)]
module: change attr callbacks to take struct module_kobject
This simplifies the next patch, where we have an attribute on a
builtin module (ie. module == NULL).
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (split into 2)
Jonas Bonn [Thu, 30 Jun 2011 19:22:12 +0000 (21:22 +0200)]
modules: make arch's use default loader hooks
This patch removes all the module loader hook implementations in the
architecture specific code where the functionality is the same as that
now provided by the recently added default hooks.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Jonas Bonn [Thu, 30 Jun 2011 19:22:11 +0000 (21:22 +0200)]
modules: add default loader hook implementations
The module loader code allows architectures to hook into the code by
providing a small number of entry points that each arch must implement.
This patch provides __weakly linked generic implementations of these
entry points for architectures that don't need to do anything special.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Satoru Moriya [Thu, 26 May 2011 23:38:04 +0000 (19:38 -0400)]
param: fix return value handling in param_set_*
In STANDARD_PARAM_DEF, param_set_* handles the case in which strtolfn
returns -EINVAL, but it may also return -ERANGE. If it returns -ERANGE,
param_set_* may assign an uninitialized value to the parameter. We should
handle both cases.
One case in which strtolfn() returns -ERANGE: the module parameter has
type long and is set to a value greater than LONG_MAX.
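A minimal sketch of the idea, with kstrtol() standing in for strtolfn and illustrative names throughout (this is not the actual STANDARD_PARAM_DEF code): propagate every conversion error, not just -EINVAL, and only assign the parameter on success.
#include <linux/kernel.h>
#include <linux/errno.h>
static int example_param_set_long(const char *val, long *res)
{
	long l;
	int ret = kstrtol(val, 0, &l);	/* may fail with -EINVAL or -ERANGE */

	if (ret < 0)			/* handle both error cases */
		return ret;
	*res = l;			/* assign only from an initialized value */
	return 0;
}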
Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Alex Williamson [Thu, 14 Jul 2011 19:27:03 +0000 (13:27 -0600)]
KVM: IOMMU: Disable device assignment without interrupt remapping
IOMMU interrupt remapping support provides a further layer of
isolation for device assignment by preventing arbitrary interrupt
block DMA writes by a malicious guest from reaching the host. By
default, we should require that the platform provides interrupt
remapping support, with an opt-in mechanism for existing behavior.
Both AMD IOMMU and Intel VT-d2 hardware support interrupt
remapping, however we currently only have software support on
the Intel side. Users wishing to re-enable device assignment
when interrupt remapping is not supported on the platform can
use the "allow_unsafe_assigned_interrupts=1" module option.
[avi: break long lines]
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:34:24 +0000 (03:34 +0800)]
KVM: MMU: trace mmio page fault
Add tracepoints to trace mmio page faults
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:33:44 +0000 (03:33 +0800)]
KVM: MMU: mmio page fault support
The idea is from Avi:
| We could cache the result of a miss in an spte by using a reserved bit, and
| checking the page fault error code (or seeing if we get an ept violation or
| ept misconfiguration), so if we get repeated mmio on a page, we don't need to
| search the slot list/tree.
| (https://lkml.org/lkml/2011/2/22/221)
When the page fault is caused by mmio, we cache the info in the shadow page
table and also set the reserved bits in the shadow page table, so if the mmio
is caused again, we can quickly identify it and emulate it directly.
Searching for an mmio gfn in the memslots is heavy since we need to walk all
memslots; this feature reduces that cost, and also avoids walking the guest
page table for soft mmu.
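A rough sketch of the caching idea; the bit layout and helper names below are hypothetical, not the actual KVM code. On an mmio miss the spte is stamped with a reserved bit plus the gfn and access bits, so the next fault on it can be recognized and emulated without searching the memslots:
#include <stdbool.h>
#include <stdint.h>
#define EXAMPLE_MMIO_SPTE_FLAG	(1ULL << 51)	/* hypothetical reserved bit */
#define EXAMPLE_MMIO_GFN_SHIFT	12
static inline uint64_t example_make_mmio_spte(uint64_t gfn, unsigned int access)
{
	return EXAMPLE_MMIO_SPTE_FLAG | (gfn << EXAMPLE_MMIO_GFN_SHIFT) | access;
}
static inline bool example_is_mmio_spte(uint64_t spte)
{
	return spte & EXAMPLE_MMIO_SPTE_FLAG;	/* reserved bit marks cached mmio */
}
static inline uint64_t example_mmio_spte_gfn(uint64_t spte)
{
	return (spte & ~EXAMPLE_MMIO_SPTE_FLAG) >> EXAMPLE_MMIO_GFN_SHIFT;
}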
[jan: fix operator precedence issue]
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:32:54 +0000 (03:32 +0800)]
KVM: MMU: reorganize struct kvm_shadow_walk_iterator
Reorganize it to make good use of the cache
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:32:13 +0000 (03:32 +0800)]
KVM: MMU: lockless walking shadow page table
Use RCU to protect shadow page tables that are about to be freed, so we can
walk them safely. It should run fast and is needed by mmio page fault handling.
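The general RCU pattern involved, shown as an illustrative sketch rather than the actual KVM code: readers walk under rcu_read_lock() while freeing is deferred past a grace period, so a concurrent walker never touches freed memory.
#include <linux/rcupdate.h>
#include <linux/slab.h>
struct example_node {
	struct example_node __rcu *next;
	struct rcu_head rcu;
	unsigned long data;
};
/* Reader: no lock taken, safe against concurrent removal */
static unsigned long example_lockless_walk(struct example_node __rcu *head)
{
	struct example_node *n;
	unsigned long sum = 0;

	rcu_read_lock();
	for (n = rcu_dereference(head); n; n = rcu_dereference(n->next))
		sum += n->data;
	rcu_read_unlock();
	return sum;
}
/* Writer: after unlinking the node under its own lock, free it only
 * once a grace period has elapsed */
static void example_free_node(struct example_node *n)
{
	kfree_rcu(n, rcu);
}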
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:31:28 +0000 (03:31 +0800)]
KVM: MMU: do not need atomicly to set/clear spte
Now the spte only goes from nonpresent to present or from present to
nonpresent, so we can use the same trick the Linux kernel does to set/clear
the spte non-atomically.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:30:35 +0000 (03:30 +0800)]
KVM: MMU: introduce the rules to modify shadow page table
Introduce some interfaces to modify the spte, as the Linux kernel does:
- mmu_spte_clear_track_bits: sets the spte from present to nonpresent, and
tracks the status bits (accessed/dirty) of the spte
- mmu_spte_clear_no_track: the same as mmu_spte_clear_track_bits, except it
does not track the status bits
- mmu_spte_set: sets the spte from nonpresent to present
- mmu_spte_update: only updates the status bits
Setting an spte from present to present is no longer allowed. Later we can
drop the atomic operation on X86_32 hosts; this is the preparatory work for
getting spte updates on X86_32 hosts out of the mmu lock.
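A schematic sketch of those four contracts (illustrative C only; the real KVM helpers track accessed/dirty state through the existing rmap and dirty-logging machinery, which is omitted here):
#include <stdint.h>
#define EX_SPTE_ACCESSED	(1ULL << 5)	/* hypothetical bit positions */
#define EX_SPTE_DIRTY		(1ULL << 6)
/* present -> nonpresent, reporting the old accessed/dirty state */
static uint64_t ex_spte_clear_track_bits(uint64_t *sptep)
{
	uint64_t old = *sptep;
	*sptep = 0;
	return old & (EX_SPTE_ACCESSED | EX_SPTE_DIRTY);
}
/* present -> nonpresent, old accessed/dirty state is not needed */
static void ex_spte_clear_no_track(uint64_t *sptep)
{
	*sptep = 0;
}
/* nonpresent -> present; must not be used on an already-present spte */
static void ex_spte_set(uint64_t *sptep, uint64_t new_spte)
{
	*sptep = new_spte;
}
/* present -> present is only done through this status-bit update */
static void ex_spte_update(uint64_t *sptep, uint64_t status_bits)
{
	*sptep |= status_bits & (EX_SPTE_ACCESSED | EX_SPTE_DIRTY);
}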
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:29:38 +0000 (03:29 +0800)]
KVM: MMU: abstract some functions to handle fault pfn
Introduce handle_abnormal_pfn to handle a fault pfn on the page fault path,
and mmu_invalid_pfn to handle a fault pfn on the prefetch path.
This is preparatory work for mmio page fault support.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:28:54 +0000 (03:28 +0800)]
KVM: MMU: filter out the mmio pfn from the fault pfn
If the page fault is caused by mmio, the gfn cannot be found in the memslots
and 'bad_pfn' is returned on the gfn_to_hva path, so we can use 'bad_pfn' to
identify the mmio page fault.
Also, to clarify the meaning of the mmio pfn, we return the fault page instead
of the bad page when the gfn is not allowed to be prefetched.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:28:04 +0000 (03:28 +0800)]
KVM: MMU: remove bypass_guest_pf
The idea is from Avi:
| Maybe it's time to kill off bypass_guest_pf=1. It's not as effective as
| it used to be, since unsync pages always use shadow_trap_nonpresent_pte,
| and since we convert between the two nonpresent_ptes during sync and unsync.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 11 Jul 2011 19:27:14 +0000 (03:27 +0800)]
KVM: MMU: split kvm_mmu_free_page
Split kvm_mmu_free_page into kvm_mmu_isolate_page and kvm_mmu_free_page.
The former is used to remove the page from the cache under the mmu lock, and
the latter is used to free the page table outside the mmu lock.
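Schematically, the split separates the locked unlink from the unlocked free; an illustrative sketch, not the actual KVM code:
#include <linux/list.h>
#include <linux/gfp.h>
#include <linux/slab.h>
struct ex_shadow_page {
	struct list_head link;
	void *page_table;
};
/* called with the mmu lock held: make the page unreachable */
static void ex_isolate_page(struct ex_shadow_page *sp)
{
	list_del(&sp->link);
}
/* called after the mmu lock is dropped: release the memory */
static void ex_free_page(struct ex_shadow_page *sp)
{
	free_page((unsigned long)sp->page_table);
	kfree(sp);
}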
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>