From: Kirill A. Shutemov
Date: Wed, 3 Feb 2016 00:57:15 +0000 (-0800)
Subject: thp: limit number of object to scan on deferred_split_scan()
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=e3ae19535c66;p=GitHub%2Fmoto-9609%2Fandroid_kernel_motorola_exynos9610.git

thp: limit number of object to scan on deferred_split_scan()

If we have a lot of pages queued up to be split, deferred_split_scan()
can spend an unreasonable amount of time under the spinlock with
interrupts disabled.  Let's cap the number of pages to split on each
scan to sc->nr_to_scan.

Signed-off-by: Kirill A. Shutemov
Reported-by: Andrea Arcangeli
Reviewed-by: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Dave Hansen
Cc: Mel Gorman
Cc: Rik van Riel
Cc: Vlastimil Babka
Cc: "Aneesh Kumar K.V"
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Jerome Marchand
Cc: Sasha Levin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7aae72114583..c1411961167e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3478,17 +3478,19 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 	int split = 0;
 
 	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
-	list_splice_init(&pgdata->split_queue, &list);
-
 	/* Take pin on all head pages to avoid freeing them under us */
-	list_for_each_safe(pos, next, &list) {
+	list_for_each_safe(pos, next, &pgdata->split_queue) {
 		page = list_entry((void *)pos, struct page, mapping);
 		page = compound_head(page);
-		/* race with put_compound_page() */
-		if (!get_page_unless_zero(page)) {
+		if (get_page_unless_zero(page)) {
+			list_move(page_deferred_list(page), &list);
+		} else {
+			/* We lost race with put_compound_page() */
 			list_del_init(page_deferred_list(page));
 			pgdata->split_queue_len--;
 		}
+		if (!--sc->nr_to_scan)
+			break;
 	}
 	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
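
[Editorial note, not part of the patch] For readers outside the kernel tree, the pattern the hunk implements -- detach at most sc->nr_to_scan entries from a shared queue while the lock is held, then do the expensive work on a private list after unlocking -- can be illustrated with a small standalone userspace sketch. Everything below is hypothetical: the node, queue, and bounded_scan() names do not exist in the kernel, and a pthread mutex stands in for the irq-disabling spinlock; it only mirrors the shape of the patched loop.

/*
 * Illustrative userspace sketch (not kernel code): bound how many queue
 * entries are detached per pass while the lock is held, in the spirit of
 * capping deferred_split_scan() by sc->nr_to_scan.
 * All names here (node, queue, bounded_scan) are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	struct node *next;
};

static struct node *queue;	/* stand-in for pgdata->split_queue */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Detach at most nr_to_scan entries under the lock; process them after. */
static unsigned long bounded_scan(unsigned long nr_to_scan)
{
	struct node *batch = NULL, *n;
	unsigned long done = 0;

	pthread_mutex_lock(&queue_lock);
	while (queue && nr_to_scan--) {
		n = queue;
		queue = n->next;	/* unlink from the shared queue */
		n->next = batch;	/* move onto a private batch list */
		batch = n;
	}
	pthread_mutex_unlock(&queue_lock);

	/* Heavy per-entry work happens outside the lock. */
	while (batch) {
		n = batch;
		batch = n->next;
		printf("processed node %d\n", n->id);
		free(n);
		done++;
	}
	return done;
}

int main(void)
{
	/* Build a small queue, then drain it in bounded batches. */
	for (int i = 0; i < 10; i++) {
		struct node *n = malloc(sizeof(*n));
		n->id = i;
		n->next = queue;
		queue = n;
	}
	while (bounded_scan(4))		/* at most 4 entries per pass under the lock */
		;
	return 0;
}

As in the patch, the critical section is O(nr_to_scan) rather than O(queue length); the costly per-entry work (split_huge_page() in the kernel, printf()/free() in this sketch) runs only after the lock has been dropped.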