vmalloc: walk vmap_areas by sorted list instead of rb_next()
author Hong zhi guo <honkiko@gmail.com>
Tue, 31 Jul 2012 23:41:35 +0000 (16:41 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Wed, 1 Aug 2012 01:42:39 +0000 (18:42 -0700)
There is a walk that repeatedly calls rb_next() to find a suitable hole.  It
can simply be replaced by a walk over the address-sorted vmap_area_list,
which is both simpler and more efficient.
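
For context, a rough sketch of the post-patch hole search (simplified from
alloc_vmap_area(), with hole-size caching and the surrounding setup elided;
not the exact kernel code) could look like this:

	/*
	 * Hedged sketch: walk the address-sorted vmap_area_list instead
	 * of calling rb_next() on the rbtree.
	 */
	while (addr + size > first->va_start && addr + size <= vend) {
		addr = ALIGN(first->va_end, align);
		if (addr + size - 1 < addr)
			goto overflow;

		/* last area: the hole after it is the candidate */
		if (list_is_last(&first->list, &vmap_area_list))
			goto found;

		/* otherwise move to the next area in address order */
		first = list_entry(first->list.next,
				struct vmap_area, list);
	}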

Mutation of the list and the tree only happens as a pair, within
__insert_vmap_area() and __free_vmap_area(), under the protection of
vmap_area_lock.  The patched code also runs under vmap_area_lock, so the
list walk is safe and consistent with the tree walk.
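
As an illustration of that pairing, an abridged sketch of
__insert_vmap_area() (simplified, not the exact source) keeps the list
address-sorted in the same step that links the node into the tree:

	static void __insert_vmap_area(struct vmap_area *va)
	{
		struct rb_node *prev_node;

		/* ... rbtree insertion keyed by va_start elided ... */

		/* keep vmap_area_list address-sorted, in step with the tree */
		prev_node = rb_prev(&va->rb_node);
		if (prev_node) {
			struct vmap_area *prev;

			prev = rb_entry(prev_node, struct vmap_area, rb_node);
			list_add_rcu(&va->list, &prev->list);
		} else
			list_add_rcu(&va->list, &vmap_area_list);
	}

__free_vmap_area() undoes both in the same way (rb_erase() plus
list_del_rcu()), again while vmap_area_lock is held.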

Tested on SMP by repeating batches of vmalloc and vfree with random sizes
and rounds for hours.

Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmalloc.c

index e03f4c7307a5b2dc4d7535815c69a786b2124f5d..7e25ee3ce6e5485649ba96f027b184357a2919f8 100644 (file)
@@ -413,11 +413,11 @@ nocache:
                if (addr + size - 1 < addr)
                        goto overflow;
 
-               n = rb_next(&first->rb_node);
-               if (n)
-                       first = rb_entry(n, struct vmap_area, rb_node);
-               else
+               if (list_is_last(&first->list, &vmap_area_list))
                        goto found;
+
+               first = list_entry(first->list.next,
+                               struct vmap_area, list);
        }
 
 found: