The protect_page() function was incorrectly setting up the hardware tables
based on the possible access capabilities (VM_MAY*) rather than the actual
requested values (VM_*). This meant we would grant more access to mmap-ed
pages than we should have. Once we fix this, we need to tweak the signal
generated by such accesses to align ourselves with other ports. This allows
the LTP mmap0{5,6,7} cases to run properly.
Signed-off-by: Graf Yang <graf.yang@analog.com>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
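For context, the VM_MAY* bits describe what a mapping could later be upgraded
to via mprotect(), while the VM_* bits reflect what the caller actually asked
for. A rough sketch of how the two sets diverge, using generic mm semantics
rather than the exact Blackfin code (names other than the VM_*/PROT_*
constants are illustrative):

#include <linux/mm.h>
#include <linux/mman.h>

/*
 * Illustrative only, not part of this patch: a read-only mapping of a
 * writable file still carries VM_MAYWRITE, which is why keying the
 * hardware tables off the VM_MAY* bits over-granted access.
 */
static unsigned long example_vm_flags(int prot, int writable_file)
{
	unsigned long flags = 0;

	/* Requested protections map to VM_READ/VM_WRITE/VM_EXEC. */
	if (prot & PROT_READ)
		flags |= VM_READ;
	if (prot & PROT_WRITE)
		flags |= VM_WRITE;
	if (prot & PROT_EXEC)
		flags |= VM_EXEC;

	/* Capabilities reachable through mprotect() map to VM_MAY*. */
	flags |= VM_MAYREAD | VM_MAYEXEC;
	if (writable_file)
		flags |= VM_MAYWRITE;

	return flags;
}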
unsigned long idx = page >> 5;
unsigned long bit = 1 << (page & 31);
- if (flags & VM_MAYREAD)
+ if (flags & VM_READ)
mask[idx] |= bit;
else
mask[idx] &= ~bit;
mask += page_mask_nelts;
- if (flags & VM_MAYWRITE)
+ if (flags & VM_WRITE)
mask[idx] |= bit;
else
mask[idx] &= ~bit;
mask += page_mask_nelts;
- if (flags & VM_MAYEXEC)
+ if (flags & VM_EXEC)
mask[idx] |= bit;
else
mask[idx] &= ~bit;
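The hunk above fills three consecutive bitmaps (read, write, execute), each
page_mask_nelts words long, with one bit per page. A hedged sketch of the
matching lookup side, using the same indexing but with a hypothetical helper
name that does not appear in this patch:

/*
 * Hypothetical helper showing how the per-page bitmaps written above
 * would be consulted: the read, write, and execute masks sit back to
 * back, page_mask_nelts words apart.
 */
static int page_allows(const unsigned long *mask, unsigned long page,
		       int which)	/* 0 = read, 1 = write, 2 = exec */
{
	unsigned long idx = page >> 5;
	unsigned long bit = 1 << (page & 31);

	return (mask[which * page_mask_nelts + idx] & bit) != 0;
}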
/* 0x23 - Data CPLB protection violation, handled here */
case VEC_CPLB_VL:
info.si_code = ILL_CPLB_VI;
- sig = SIGBUS;
+ sig = SIGSEGV;
strerror = KERN_NOTICE EXC_0x23(KERN_NOTICE);
CHK_DEBUGGER_TRAP_MAYBE();
break;
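With the signal switched to SIGSEGV, a faulting store to a read-only mapping
now matches what other ports deliver and what the LTP mmap tests expect. A
minimal userspace sketch of that expectation (assumed test shape, not the
actual LTP source):

#include <signal.h>
#include <unistd.h>
#include <sys/mman.h>

/* Catch the fault so the test can report success instead of dying. */
static void on_segv(int sig)
{
	_exit(sig == SIGSEGV ? 0 : 1);
}

int main(void)
{
	char *p;

	signal(SIGSEGV, on_segv);

	/* Request a read-only anonymous page... */
	p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* ...then write to it: the kernel should now deliver SIGSEGV. */
	*p = 1;

	/* Reaching here means the store was wrongly allowed. */
	return 1;
}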