From: Linus Torvalds
Date: Tue, 15 Mar 2016 17:45:39 +0000 (-0700)
Subject: Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git...
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=13c76ad87216513db2487aac84155aa57dfd46ce;p=GitHub%2FLineageOS%2FG12%2Fandroid_kernel_amlogic_linux-4.9.git

Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 mm updates from Ingo Molnar:
 "The main changes in this cycle were:

   - Enable full ASLR randomization for 32-bit programs (Hector Marco-Gisbert)

   - Add initial minimal INVPCID support, to flush global mappings (Andy Lutomirski)

   - Add KASAN enhancements (Andrey Ryabinin)

   - Fix mmiotrace for huge pages (Karol Herbst)

   - ... misc cleanups and small enhancements"

* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm/32: Enable full randomization on i386 and X86_32
  x86/mm/kmmio: Fix mmiotrace for hugepages
  x86/mm: Avoid premature success when changing page attributes
  x86/mm/ptdump: Remove paravirt_enabled()
  x86/mm: Fix INVPCID asm constraint
  x86/dmi: Switch dmi_remap() from ioremap() [uncached] to ioremap_cache()
  x86/mm: If INVPCID is available, use it to flush global mappings
  x86/mm: Add a 'noinvpcid' boot option to turn off INVPCID
  x86/mm: Add INVPCID helpers
  x86/kasan: Write protect kasan zero shadow
  x86/kasan: Clear kasan_zero_page after TLB flush
  x86/mm/numa: Check for failures in numa_clear_kernel_node_hotplug()
  x86/mm/numa: Clean up numa_clear_kernel_node_hotplug()
  x86/mm: Make kmap_prot into a #define
  x86/mm/32: Set NX in __supported_pte_mask before enabling paging
  x86/mm: Streamline and restore probe_memory_block_size()
---

13c76ad87216513db2487aac84155aa57dfd46ce
diff --cc arch/x86/include/asm/tlbflush.h
index 0bb31cb8c73b,d0cce90b0855..c24b4224d439
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@@ -5,9 -5,56 +5,57 @@@
  #include <linux/sched.h>
  
  #include <asm/processor.h>
 +#include <asm/cpufeature.h>
  #include <asm/special_insns.h>
  
+ static inline void __invpcid(unsigned long pcid, unsigned long addr,
+ 			     unsigned long type)
+ {
+ 	struct { u64 d[2]; } desc = { { pcid, addr } };
+ 
+ 	/*
+ 	 * The memory clobber is because the whole point is to invalidate
+ 	 * stale TLB entries and, especially if we're flushing global
+ 	 * mappings, we don't want the compiler to reorder any subsequent
+ 	 * memory accesses before the TLB flush.
+ 	 *
+ 	 * The hex opcode is invpcid (%ecx), %eax in 32-bit mode and
+ 	 * invpcid (%rcx), %rax in long mode.
+ 	 */
+ 	asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"
+ 		      : : "m" (desc), "a" (type), "c" (&desc) : "memory");
+ }
+ 
+ #define INVPCID_TYPE_INDIV_ADDR		0
+ #define INVPCID_TYPE_SINGLE_CTXT	1
+ #define INVPCID_TYPE_ALL_INCL_GLOBAL	2
+ #define INVPCID_TYPE_ALL_NON_GLOBAL	3
+ 
+ /* Flush all mappings for a given pcid and addr, not including globals. */
+ static inline void invpcid_flush_one(unsigned long pcid,
+ 				     unsigned long addr)
+ {
+ 	__invpcid(pcid, addr, INVPCID_TYPE_INDIV_ADDR);
+ }
+ 
+ /* Flush all mappings for a given PCID, not including globals. */
+ static inline void invpcid_flush_single_context(unsigned long pcid)
+ {
+ 	__invpcid(pcid, 0, INVPCID_TYPE_SINGLE_CTXT);
+ }
+ 
+ /* Flush all mappings, including globals, for all PCIDs. */
+ static inline void invpcid_flush_all(void)
+ {
+ 	__invpcid(0, 0, INVPCID_TYPE_ALL_INCL_GLOBAL);
+ }
+ 
+ /* Flush all mappings for all PCIDs except globals. */
+ static inline void invpcid_flush_all_nonglobals(void)
+ {
+ 	__invpcid(0, 0, INVPCID_TYPE_ALL_NON_GLOBAL);
+ }
+ 
  #ifdef CONFIG_PARAVIRT
  #include <asm/paravirt.h>
  #else
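
A quick, standalone way to see whether a given machine advertises the
instruction used by the helpers above: X86_FEATURE_INVPCID corresponds to
CPUID.(EAX=07H, ECX=0):EBX bit 10. The userspace probe below is an
illustration only, not part of this merge; the file name and output strings
are arbitrary, and it assumes GCC/Clang's <cpuid.h> macros.

/* invpcid_probe.c - report whether the CPU advertises INVPCID.
 *
 * The kernel gates its INVPCID usage on X86_FEATURE_INVPCID; this probe
 * reads the same CPUID bit directly from userspace.
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0: EAX returns the highest supported standard CPUID leaf. */
	__cpuid(0, eax, ebx, ecx, edx);
	if (eax < 7) {
		puts("CPUID leaf 7 not available: no INVPCID");
		return 0;
	}

	/* Structured extended feature flags: leaf 7, subleaf 0. */
	__cpuid_count(7, 0, eax, ebx, ecx, edx);

	if (ebx & (1u << 10))
		puts("INVPCID supported");
	else
		puts("INVPCID not supported");
	return 0;
}

Build with e.g. "gcc -O2 -o invpcid_probe invpcid_probe.c". On CPUs that
report support, a kernel containing this merge uses invpcid_flush_all() for
global TLB flushes unless it is booted with the new 'noinvpcid' option.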