x86/asm: Fix comment in return_from_SYSCALL_64()
author Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tue, 6 Jun 2017 11:31:21 +0000 (14:31 +0300)
committer Ingo Molnar <mingo@kernel.org>
Tue, 13 Jun 2017 06:56:51 +0000 (08:56 +0200)
On x86-64, __VIRTUAL_MASK_SHIFT now depends on the paging mode: the most
significant virtual-address bit is bit 47 with 4-level paging and bit 56
with 5-level paging. Update the comment in return_from_SYSCALL_64() so it
no longer hard-codes bit 47.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170606113133.22974-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
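
For context, a rough sketch of how the shift is expected to be defined once
the 5-level series is applied (the header location and the CONFIG_X86_5LEVEL
symbol are stated here as an assumption, not quoted from this commit; the
values 47 and 56 come from the updated comment below):

	/* Illustrative sketch, roughly arch/x86/include/asm/page_64_types.h */
	#ifdef CONFIG_X86_5LEVEL
	#define __VIRTUAL_MASK_SHIFT	56
	#else
	#define __VIRTUAL_MASK_SHIFT	47
	#endif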
arch/x86/entry/entry_64.S

index 4a4c0834f9659bd9b4855e23801d47103e644f09..a9a8027a6c0eaa748541282448f41b9eec834d6c 100644
@@ -265,7 +265,8 @@ return_from_SYSCALL_64:
         * If width of "canonical tail" ever becomes variable, this will need
         * to be updated to remain correct on both old and new CPUs.
         *
-        * Change top 16 bits to be the sign-extension of 47th bit
+        * Change top bits to match most significant bit (47th or 56th bit
+        * depending on paging mode) in the address.
         */
        shl     $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
        sar     $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
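
The shl/sar pair sign-extends the return address from the most significant
implemented virtual-address bit, forcing it into canonical form. A minimal
userspace C sketch of the same operation (the helper name canonicalize() and
the test value are made up for illustration; the right shift relies on
arithmetic shifting of signed values, which is implementation-defined in C
but is what x86 compilers do in practice):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Mirror of the shl/sar trick in return_from_SYSCALL_64:
	 * shift the most significant valid bit up into bit 63, then
	 * arithmetic-shift back down so it is replicated over the
	 * upper bits, yielding a canonical address.
	 */
	static uint64_t canonicalize(uint64_t addr, int virtual_mask_shift)
	{
		int shift = 64 - (virtual_mask_shift + 1);

		return (uint64_t)(((int64_t)(addr << shift)) >> shift);
	}

	int main(void)
	{
		/* Bit 47 set, upper bits clear: non-canonical for 4-level paging. */
		uint64_t addr = 0x0000800000000000ULL;

		printf("%016llx -> %016llx (4-level, shift 47)\n",
		       (unsigned long long)addr,
		       (unsigned long long)canonicalize(addr, 47));
		printf("%016llx -> %016llx (5-level, shift 56)\n",
		       (unsigned long long)addr,
		       (unsigned long long)canonicalize(addr, 56));
		return 0;
	}

With shift 47 the example address becomes ffff800000000000 (bits 63..47 all
copied from bit 47), while with shift 56 it is already canonical and passes
through unchanged, which is exactly why the comment can no longer talk about
"top 16 bits" and "47th bit" unconditionally.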