x86/kprobes: Fix kernel crash when probing .entry_trampoline code
author Francis Deslauriers <francis.deslauriers@efficios.com>
Fri, 9 Mar 2018 03:18:12 +0000 (22:18 -0500)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 15 Mar 2018 09:54:38 +0000 (10:54 +0100)
commit c07a8f8b08ba683ea24f3ac9159f37ae94daf47f upstream.

Disable the kprobe probing of the entry trampoline:

.entry_trampoline is a code area that is used to ensure page table
isolation between userspace and kernelspace.

At the beginning of the trampoline's execution, we load the kernel's
CR3 value. This has the effect of enabling the translation of kernel
virtual addresses to physical addresses. Before that happens, most
kernel addresses cannot be translated because the running process'
CR3 is still in use.

If a kprobe is placed on the trampoline code before that change of
the CR3 register happens, the kernel crashes because the int3
handling pages are not accessible.

To fix this, add the .entry_trampoline section to the kprobe blacklist
to prohibit the probing of code before all the kernel pages are
accessible.
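
For illustration only (not part of this patch): a minimal kprobe
module sketch that tries to probe the trampoline. The symbol name
entry_SYSCALL_64_trampoline and the module boilerplate are assumptions
made for this example; with the blacklist in place, register_kprobe()
is expected to refuse any address inside the
[__entry_trampoline_start, __entry_trampoline_end) range with -EINVAL
instead of arming a breakpoint there.

/*
 * Sketch: attempt to place a kprobe on the entry trampoline.
 * The symbol name below is an assumption for illustration; any
 * address within the .entry_trampoline section behaves the same.
 */
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "entry_SYSCALL_64_trampoline",
};

static int __init probe_trampoline_init(void)
{
	int ret = register_kprobe(&kp);

	/* Expected to fail with -EINVAL once the section is blacklisted. */
	pr_info("register_kprobe returned %d\n", ret);
	return ret;
}

static void __exit probe_trampoline_exit(void)
{
	/* Only reached if registration succeeded (i.e. before this fix). */
	unregister_kprobe(&kp);
}

module_init(probe_trampoline_init);
module_exit(probe_trampoline_exit);
MODULE_LICENSE("GPL");

Before this change, such a registration could succeed and the crash
would only occur once the breakpoint was hit while the user CR3 was
still loaded.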

Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: mathieu.desnoyers@efficios.com
Cc: mhiramat@kernel.org
Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/include/asm/sections.h
arch/x86/kernel/kprobes/core.c
arch/x86/kernel/vmlinux.lds.S

index d6baf23782bcc23811c12bac8a6a181c2a6cdb5b..5c019d23d06b1168da0ea965d7c35bebd4d02307 100644 (file)
@@ -10,6 +10,7 @@ extern struct exception_table_entry __stop___ex_table[];
 
 #if defined(CONFIG_X86_64)
 extern char __end_rodata_hpage_align[];
+extern char __entry_trampoline_start[], __entry_trampoline_end[];
 #endif
 
 #endif /* _ASM_X86_SECTIONS_H */
index 0742491cbb734d29e1be790d890aeb5271ea6eb4..ce06ec9c2323fad4f477b83e91b84c51c0a58c99 100644 (file)
@@ -1149,10 +1149,18 @@ NOKPROBE_SYMBOL(longjmp_break_handler);
 
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
+       bool is_in_entry_trampoline_section = false;
+
+#ifdef CONFIG_X86_64
+       is_in_entry_trampoline_section =
+               (addr >= (unsigned long)__entry_trampoline_start &&
+                addr < (unsigned long)__entry_trampoline_end);
+#endif
        return  (addr >= (unsigned long)__kprobes_text_start &&
                 addr < (unsigned long)__kprobes_text_end) ||
                (addr >= (unsigned long)__entry_text_start &&
-                addr < (unsigned long)__entry_text_end);
+                addr < (unsigned long)__entry_text_end) ||
+               is_in_entry_trampoline_section;
 }
 
 int __init arch_init_kprobes(void)
index 9b138a06c1a468e6a6d3fe41748abef3a436ace3..b854ebf5851b7c8fb6225b53e7d3a81b16ec43db 100644 (file)
@@ -118,9 +118,11 @@ SECTIONS
 
 #ifdef CONFIG_X86_64
                . = ALIGN(PAGE_SIZE);
+               VMLINUX_SYMBOL(__entry_trampoline_start) = .;
                _entry_trampoline = .;
                *(.entry_trampoline)
                . = ALIGN(PAGE_SIZE);
+               VMLINUX_SYMBOL(__entry_trampoline_end) = .;
                ASSERT(. - _entry_trampoline == PAGE_SIZE, "entry trampoline is too big");
 #endif