From: Wu Fengguang
Date: Wed, 25 May 2011 00:12:30 +0000 (-0700)
Subject: readahead: trigger mmap sequential readahead on PG_readahead
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=2cbea1d3ab11946885d37a2461072ee4d687cb4e;p=GitHub%2FLineageOS%2Fandroid_kernel_samsung_universal7580.git

readahead: trigger mmap sequential readahead on PG_readahead

Previously, mmap sequential readahead was triggered by updating ra->prev_pos
on each page fault and comparing it with the current page offset, which costs
dirtying the cache line on each _minor_ page fault.  So remove the
ra->prev_pos recording, and instead tag PG_readahead to trigger the possible
sequential readahead.  It's not only simpler, but also works more reliably
and reduces cache line bouncing on concurrent page faults on a shared
struct file.

In the mosbench exim benchmark, which does multi-threaded page faults on a
shared struct file, the ra->mmap_miss and ra->prev_pos updates were found to
cause excessive cache line bouncing on tmpfs, which actually disabled
readahead totally (shmem_backing_dev_info.ra_pages == 0).

Signed-off-by: Wu Fengguang
Tested-by: Tim Chen
Reported-by: Andi Kleen
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/filemap.c b/mm/filemap.c
index e5131392d32..68e782b3d3d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1559,8 +1559,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	if (!ra->ra_pages)
 		return;
 
-	if (VM_SequentialReadHint(vma) ||
-			offset - 1 == (ra->prev_pos >> PAGE_CACHE_SHIFT)) {
+	if (VM_SequentialReadHint(vma)) {
 		page_cache_sync_readahead(mapping, ra, file, offset,
 					  ra->ra_pages);
 		return;
@@ -1583,7 +1582,7 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
 	ra_pages = max_sane_readahead(ra->ra_pages);
 	ra->start = max_t(long, 0, offset - ra_pages / 2);
 	ra->size = ra_pages;
-	ra->async_size = 0;
+	ra->async_size = ra_pages / 4;
 	ra_submit(ra, mapping, file);
 }
 
@@ -1689,7 +1688,6 @@ retry_find:
 		return VM_FAULT_SIGBUS;
 	}
 
-	ra->prev_pos = (loff_t)offset << PAGE_CACHE_SHIFT;
 	vmf->page = page;
 	return ret | VM_FAULT_LOCKED;
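
[Context note, not part of the patch]

The key to the new scheme is that ra_submit() is now passed a non-zero
async_size (ra_pages / 4), so a page near the end of the synchronously read
window gets tagged PG_readahead; a later minor fault on that tagged page
kicks off the next asynchronous readahead chunk.  Below is a minimal sketch
of that consumer side, paraphrased from the same-era do_async_mmap_readahead()
in mm/filemap.c for illustration only; exact details may differ between trees.

	static void do_async_mmap_readahead(struct vm_area_struct *vma,
					    struct file_ra_state *ra,
					    struct file *file,
					    struct page *page,
					    pgoff_t offset)
	{
		struct address_space *mapping = file->f_mapping;

		/* No readahead for explicitly random access */
		if (VM_RandomReadHint(vma))
			return;
		if (ra->mmap_miss > 0)
			ra->mmap_miss--;
		/*
		 * ra_submit() marked the page located (ra->size - ra->async_size)
		 * pages into the window with PG_readahead.  When a minor fault
		 * lands on that tagged page, pipeline the next readahead chunk
		 * asynchronously instead of comparing against ra->prev_pos.
		 */
		if (PageReadahead(page))
			page_cache_async_readahead(mapping, ra, file,
						   page, offset, ra->ra_pages);
	}

Because the trigger is a flag on the page itself rather than per-fault state
in the shared file_ra_state, concurrent faulters on the same struct file no
longer dirty the ra->prev_pos cache line on every minor fault.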