As part of the new large address space support, processes start out life with a
128TB virtual address space. However, when calling mmap() a process can pass a
hint address, and if that hint is > 128TB the kernel will use the full 512TB
address space to try to satisfy the mmap() request.
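
For illustration, that behaviour can be exercised from userspace with something
like the sketch below (the 256TB hint and 64K length are example values, not
part of this patch):

  /*
   * Illustrative userspace sketch only: on a powerpc64 kernel with the
   * large address space support, a hint above 128TB (without MAP_FIXED)
   * asks the kernel to search the full 512TB range. The 256TB hint and
   * 64K length are arbitrary example values.
   */
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
  	void *hint = (void *)(1UL << 48);	/* 256TB, above the 128TB default */
  	void *p = mmap(hint, 1UL << 16, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

  	if (p == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}
  	printf("mapped at %p\n", p);	/* may land above 128TB */
  	return 0;
  }
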
Currently we have a check that the hint is > 128TB and < 512TB (TASK_SIZE),
which was added as an optimisation to avoid updating addr_limit unnecessarily
and also to avoid calling slice_flush_segments() on all CPUs more than
necessary.
However, this has the user-visible side effect that an mmap() hint above 512TB
does not search the full address space unless a preceding mmap() used a hint
value between 128TB and 512TB.
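
Concretely, the ordering dependency looks roughly like the following userspace
sketch (again, the hint values and length are illustrative only):

  /*
   * Sketch of the pre-fix behaviour described above; the hint values and
   * 64K length are illustrative examples only.
   */
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
  	size_t len = 1UL << 16;
  	int prot = PROT_READ | PROT_WRITE;
  	int flags = MAP_PRIVATE | MAP_ANONYMOUS;

  	/* Hint above 512TB: before the fix this did not expand addr_limit,
  	 * so the mapping still came from the default 128TB space. */
  	void *a = mmap((void *)(1UL << 50), len, prot, flags, -1, 0);

  	/* A hint between 128TB and 512TB did expand addr_limit ... */
  	void *b = mmap((void *)(1UL << 48), len, prot, flags, -1, 0);

  	/* ... and only then could a hint above 512TB search the full
  	 * address space. */
  	void *c = mmap((void *)(1UL << 50), len, prot, flags, -1, 0);

  	printf("%p %p %p\n", a, b, c);
  	return 0;
  }
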
So fix it to treat any hint above 128TB as a hint to search the full address
space. Rather than checking the hint against TASK_SIZE, we instead check
whether the addr_limit is already == TASK_SIZE.
This also brings the ABI in line with what is proposed on x86, i.e. that a hint
address above 128TB, up to and including (2^64)-1, is an indication to search
the full address space.
Fixes: f4ea6dcb08ea2c ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
struct vm_area_struct *vma;
struct vm_unmapped_area_info info;
- if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE))
+ if (unlikely(addr > mm->context.addr_limit &&
+ mm->context.addr_limit != TASK_SIZE))
mm->context.addr_limit = TASK_SIZE;
if (len > mm->task_size - mmap_min_addr)
unsigned long addr = addr0;
struct vm_unmapped_area_info info;
- if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE))
+ if (unlikely(addr > mm->context.addr_limit &&
+ mm->context.addr_limit != TASK_SIZE))
mm->context.addr_limit = TASK_SIZE;
/* requested length too big for entire address space */
/*
* Check if we need to expand slice area.
*/
- if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE)) {
+ if (unlikely(addr > mm->context.addr_limit &&
+ mm->context.addr_limit != TASK_SIZE)) {
mm->context.addr_limit = TASK_SIZE;
on_each_cpu(slice_flush_segments, mm, 1);
}