16 years ago x86: cpa: fix the self-test
Ingo Molnar [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: cpa: fix the self-test

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: rodata config hookup
Ingo Molnar [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: rodata config hookup

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: enable CONFIG_DEBUG_PAGEALLOC more widely
Ingo Molnar [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: enable CONFIG_DEBUG_PAGEALLOC more widely

make CONFIG_DEBUG_PAGEALLOC universally available.

CONFIG_HIBERNATION and CONFIG_HUGETLBFS were disabling it, for no
particular reason.

If there are any unfixed bugs here we'll fix them, but do not disable
vital debugging facilities like that.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: init memory debugging
Ingo Molnar [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: init memory debugging

debug incorrect/late access to init memory, by permanently unmapping
the init memory ranges. Depends on CONFIG_DEBUG_PAGEALLOC=y.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: move misplaced rodata check call
Arjan van de Ven [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: move misplaced rodata check call

It looks like a mismerge put the rodata self-check in the wrong spot; move
it to the right place after marking the .rodata section read only.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix clflush_page_range logic
Ingo Molnar [Wed, 30 Jan 2008 12:34:09 +0000 (13:34 +0100)]
x86: fix clflush_page_range logic

only present ptes must be flushed.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: optimize clflush
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: optimize clflush

It is sufficient to issue clflush on one CPU; the invalidation is
broadcast throughout the coherence domain.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: clflush_page_range needs mfence
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: clflush_page_range needs mfence

clflush is an unordered operation with respect to other memory
traffic, including other CLFLUSH instructions. This needs proper
fencing with mfence.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
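
To make the fencing requirement concrete, here is a minimal, illustrative sketch
(not the kernel's exact code) of a range flush bracketed by mfence; the 64-byte
line size is an assumption, the kernel derives it from CPUID:

/*
 * Illustrative sketch only: flush the cache lines covering
 * [vaddr, vaddr + size) and fence the unordered clflushes with mfence.
 * The 64-byte line size is assumed; the kernel reads it from CPUID.
 */
static void flush_cache_range_sketch(void *vaddr, unsigned int size)
{
	const unsigned long line = 64;		/* assumed cache-line size */
	char *p = (char *)((unsigned long)vaddr & ~(line - 1));
	char *end = (char *)vaddr + size;

	asm volatile("mfence" ::: "memory");	/* order earlier stores before the flushes */
	for (; p < end; p += line)
		asm volatile("clflush %0" : "+m" (*p));
	asm volatile("mfence" ::: "memory");	/* order the flushes before later accesses */
}
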
16 years ago x86: cpa: rename global_flush_tlb() to cpa_flush_all()
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa: rename global_flush_tlb() to cpa_flush_all()

The function name global_flush_tlb() suggests something different from
what the function really does. Rename it to cpa_flush_all(), which is an
understandable counterpart to cpa_flush_range().

no global visibility of the old API anymore.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: cpa: implement clflush optimization
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa: implement clflush optimization

Use clflush on CPUs which support this.

clflush is only used when the page attribute operation has been
successful. On CPUs which do not support clflush, and in the case of an
error, the old-fashioned global_flush_tlb() is called.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
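
A rough sketch of the flush selection described above; the cpa_flush_all() and
cpa_flush_range() names come from this series, but the prototypes below are
assumed for illustration:

/* assumed prototypes; the names follow the commits in this series */
void cpa_flush_all(void);
void cpa_flush_range(unsigned long addr, int numpages);

/*
 * Sketch: only when the attribute change succeeded and the CPU has
 * clflush do we flush just the affected range; otherwise fall back to
 * the heavyweight cpa_flush_all() (WBINVD plus a full TLB flush).
 */
static void cpa_flush_sketch(unsigned long addr, int numpages,
			     int error, int have_clflush)
{
	if (error || !have_clflush)
		cpa_flush_all();
	else
		cpa_flush_range(addr, numpages);
}
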
16 years ago x86: cpa use the new set_clr function
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa use the new set_clr function

Convert cpa_set and cpa_clear to call the new set_clr function.
Separate out the debug helpers.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: cpa create set_and_clr function
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa create set_and_clr function

Create a set_and_clr function to avoid the duplicate loops. It also
allows combined set/clear operations as an optimization.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
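
A minimal sketch of the combined set/clear idea, with hypothetical helper names
and a flat prot_t standing in for the kernel's pgprot_t and page-table walk:

#define PAGE_SIZE	4096UL

typedef unsigned long prot_t;		/* stand-in for pgprot_t */

/* hypothetical helpers standing in for the page-table walk */
prot_t read_prot(unsigned long addr);
void write_prot(unsigned long addr, prot_t prot);

/*
 * One loop handles both directions: clear mask_clr, then set mask_set.
 * The set-only and clear-only entry points become thin wrappers instead
 * of two nearly identical loops.
 */
static int change_attr_set_clr(unsigned long addr, int numpages,
			       prot_t mask_set, prot_t mask_clr)
{
	int i;

	for (i = 0; i < numpages; i++) {
		unsigned long a = addr + i * PAGE_SIZE;
		prot_t prot = read_prot(a);

		prot = (prot & ~mask_clr) | mask_set;
		write_prot(a, prot);
	}
	return 0;
}

static int change_attr_set(unsigned long addr, int numpages, prot_t mask)
{
	return change_attr_set_clr(addr, numpages, mask, 0);
}

static int change_attr_clear(unsigned long addr, int numpages, prot_t mask)
{
	return change_attr_set_clr(addr, numpages, 0, mask);
}
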
16 years ago x86: cpa move the flush into set and clear functions
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa move the flush into set and clear functions

To avoid the modification of the flush code for the clflush
implementation, move the flush into the set and clear functions and
provide helper functions for the debugging code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: add testcases for RODATA and NX protections/attributes
Arjan van de Ven [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: add testcases for RODATA and NX protections/attributes

Latest update; I now have 4 NX tests, but 2 fail so they're #if 0'd.
I also cleaned up the NX test code quite a bit, and got rid of the ugly
exception table sorting stuff.

From: Arjan van de Ven <arjan@linux.intel.com>

This patch adds testcases for the CONFIG_DEBUG_RODATA configuration option
as well as the NX CPU feature/mappings. Both testcases can move to tests/
once that patch gets merged into mainline.
(I'm half considering moving the rodata test into mm/init.c but I'll
wait with that until init.c is unified)

As part of this I had to fix a not-quite-right alignment in vmlinux.lds.h
for the RODATA sections, which led to one page less being marked read-only.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: ioremap KERN_INFO
Ingo Molnar [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: ioremap KERN_INFO

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: clean up change_page_attr_set/clear()
Thomas Gleixner [Wed, 30 Jan 2008 12:34:08 +0000 (13:34 +0100)]
x86: cpa: clean up change_page_attr_set/clear()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: fix loop
Ingo Molnar [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: cpa: fix loop

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: fix split thinko
Thomas Gleixner [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: cpa: fix split thinko

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: make sure initmem is writable
Arjan van de Ven [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: make sure initmem is writable

When we free initmem, various rodata and CPA checks may have left
memory read-only. This patch ensures that the memory is writable
before we free it.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix pageattr-selftest
Arjan van de Ven [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: fix pageattr-selftest

In Ingo's testing, he found a bug in the CPA selftest code. What would
happen is that the test would call change_page_attr_addr on a range of
memory, part of which was read-only and part of which was writable. The
only thing the test wanted to change was the global bit...

What actually happened was that the selftest would take the permissions
of the first page, and the change_page_attr_addr call would then set the
permissions of the entire range to those of that first page. In the
rodata section case, this resulted in pages after .rodata becoming
read-only... which made the kernel rather unhappy in many interesting
ways.

This is just another example of how dangerous the cpa API is (was); this
patch changes the test to use the incremental clear/set APIs
instead, and it changes the clear/set implementation to work on one
page at a time.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: remove flush_agp_mappings()
Ingo Molnar [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: remove flush_agp_mappings()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: move flush to cpa
Thomas Gleixner [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: cpa: move flush to cpa

The set_memory_* and set_pages_* family of APIs currently requires the
callers to do a global tlb flush after the function call; forgetting this is
a very nasty deathtrap. This patch moves the global tlb flush into
each of the callers.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
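
A hedged before/after illustration of the convention change (kernel context
assumed; set_pages_ro() is part of the API added in this series, while
change_page_attr()/global_flush_tlb() are the pre-existing calls being phased
out):

#include <linux/mm.h>
#include <asm/cacheflush.h>	/* set_pages_* / set_memory_* declarations */

/* Before: every caller had to remember the global TLB flush itself. */
static void make_pages_ro_old(struct page *pg, int numpages)
{
	change_page_attr(pg, numpages, PAGE_KERNEL_RO);
	global_flush_tlb();		/* forgetting this was the deathtrap */
}

/* After: the flush happens inside the set_pages_ro() call itself. */
static void make_pages_ro_new(struct page *pg, int numpages)
{
	set_pages_ro(pg, numpages);
}
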
16 years ago x86: make various pageattr.c functions static
Arjan van de Ven [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: make various pageattr.c functions static

change_page_attr_add is only used in pageattr.c now, so we can
make this function static.
change_page_attr() isn't used anywhere at all anymore; this function
is a really bad API anyway, so just remove the bloat entirely.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: set_memory_notpresent()
Ingo Molnar [Wed, 30 Jan 2008 12:34:07 +0000 (13:34 +0100)]
x86: cpa: set_memory_notpresent()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: convert ioremap to new API
Thomas Gleixner [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: cpa: convert ioremap to new API

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix ioremap API
Thomas Gleixner [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: fix ioremap API

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix ioremap RAM check
Thomas Gleixner [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: fix ioremap RAM check

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix the missing BIOS area check in page_is_ram
Thomas Gleixner [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: fix the missing BIOS area check in page_is_ram

page_is_ram has had a FIXME for ages, reminding us to sanity-check the
BIOS area between 640k and 1M, which is sometimes falsely reported as
RAM in the e820 tables.

Implement the sanity check. Move the BIOS range defines from
pageattr.c into e820.h to avoid duplicate defines.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
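
A standalone sketch of the check described above; the 640k-1M constants follow
the commit text, and e820_covers_page() is a hypothetical stand-in for the
existing e820-map walk:

#define BIOS_BEGIN	0x000a0000UL	/* 640k */
#define BIOS_END	0x00100000UL	/* 1M */
#define PAGE_SHIFT	12

/* hypothetical stand-in for the existing e820-map walk */
int e820_covers_page(unsigned long pagenr);

static int page_is_ram_sketch(unsigned long pagenr)
{
	/* never treat the legacy BIOS window as RAM, even if e820 says so */
	if (pagenr >= (BIOS_BEGIN >> PAGE_SHIFT) &&
	    pagenr <  (BIOS_END >> PAGE_SHIFT))
		return 0;

	return e820_covers_page(pagenr);
}
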
16 years ago x86: move page_is_ram() function
Thomas Gleixner [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: move page_is_ram() function

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: deprecate change_page_attr() for drivers
Arjan van de Ven [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: deprecate change_page_attr() for drivers

With the introduction of the new API, no driver or non-archcore code needs
to use c-p-a anymore, so this patch also deprecates the EXPORT_SYMBOL of CPA
(it's a horrible API after all).

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: convert CPA users to the new set_page_ API
Arjan van de Ven [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: convert CPA users to the new set_page_ API

This patch converts various users of change_page_attr() to the new,
more intent-driven set_page_*/set_memory_* API set.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: a new API for drivers/etc to control cache and other page attributes
Arjan van de Ven [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: a new API for drivers/etc to control cache and other page attributes

Right now, if drivers or other code want to change, say, a cache attribute of a
page, the only API they have is change_page_attr(). c-p-a is a really bad API
for this, because it forces the caller to know *ALL* the attributes it wants
for the page, not just the one thing it wants to change. So code that wants to
set a page uncachable needs to be aware of the NX status as well, etc.

This patch introduces a set of new APIs for this, set_pages_<attr> and
set_memory_<attr>, that offer a logical change to the user, and leave all
attributes not implied by the requested logical change alone.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
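
A hedged usage example of the new API (kernel context; set_memory_uc() and
set_memory_wb() are among the helpers this series adds, declared in
asm/cacheflush.h at the time):

#include <asm/cacheflush.h>	/* where the new helpers were declared at the time */

static void remap_example(unsigned long vaddr, int numpages)
{
	set_memory_uc(vaddr, numpages);	/* intent: make this range uncachable */

	/* ... use the range for device-style accesses ... */

	set_memory_wb(vaddr, numpages);	/* intent: restore write-back caching */
}

Attributes not implied by the request, NX or RW for example, are left exactly
as they were, which is the point of the intent-driven interface.
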
16 years ago x86: cpa: move clflush_cache_range()
Ingo Molnar [Wed, 30 Jan 2008 12:34:06 +0000 (13:34 +0100)]
x86: cpa: move clflush_cache_range()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: unify ioremap_32 and _64
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: unify ioremap_32 and _64

Unify the now identical ioremap_32.c and ioremap_64.c into the
same ioremap.c file. No code changed.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: unify ioremap
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: unify ioremap

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: use remove_vm_area in ioremap_32 error path
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: use remove_vm_area in ioremap_32 error path

When ioremap_page_range fails, we can safely use remove_vm_area instead
of vunmap.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: __iomem annotations
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: __iomem annotations

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: switch to change_page_attr_addr in ioremap_32.c
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: switch to change_page_attr_addr in ioremap_32.c

Use change_page_attr_addr() instead of change_page_attr(), which
simplifies the code significantly and matches the 64bit
implementation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: make c_p_a unconditional in ioremap
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: make c_p_a unconditional in ioremap

Make c_p_a unconditional for ioremap and iounmap. This ensures
complete consistency between the flags which are handed to
ioremap_page_range and the real flags in the mappings.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: introduce max_pfn_mapped
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: introduce max_pfn_mapped

64bit uses end_pfn_map and 32bit uses max_low_pfn. There are several
files which have #ifdef'ed defines which map either to end_pfn_map or
max_low_pfn. Replace this by a universal define and clean up all the
other instances.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: cleanup ioremap includes
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: cleanup ioremap includes

Get rid of the duplicate define of ISA_START/END_ADDRESS and use the
same headers in 32 and 64 bit code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: style cleanup of ioremap code
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: style cleanup of ioremap code

Fix the coding style before going further.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: fix ioremap pgprot inconsistency
Thomas Gleixner [Wed, 30 Jan 2008 12:34:05 +0000 (13:34 +0100)]
x86: fix ioremap pgprot inconsistency

The pgprot flags which are handed into ioremap_page_range() are
different from those which are set in change_page_attr(). The
ioremap_page_range flags are executable, while the c_p_a flags are
not.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: fix ioremap pgprot inconsistency
Thomas Gleixner [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: fix ioremap pgprot inconsistency

The pgprot flags which are handed into ioremap_page_range() are
different from those which are set in change_page_attr(). The
ioremap_page_range flags are executable, while the c_p_a flags are
not. Also make the mappings global (which is currently a NOP on 32bit;
CPUs from PPRO+ onwards support it, but that's a separate fix).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
16 years ago x86: turn the check_exec function into function that
Arjan van de Ven [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: turn the check_exec function into function that

What the check_exec() function is really trying to do is enforce certain
bits in the pgprot that are required by the x86 architecture, but that
callers might not be aware of (such as NX bit exclusion of the BIOS
area for BIOS-based PCI access; it's not uncommon to ioremap the BIOS
region for various purposes, and normally ioremap() memory has the NX bit
set).

This patch turns the check_exec() function into static_protections(),
which is now also used to make sure the kernel text area remains non-NX
and that the .rodata section remains read-only. If the architecture
ends up requiring more such mandatory prot settings for specific areas,
this is now a reasonable place to add them.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
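
A self-contained sketch of the static_protections() idea described here; the
linker-symbol stand-ins are hypothetical, and the real kernel function operates
on pgprot_t:

#include <stdint.h>

#define _PAGE_RW	((uint64_t)1 << 1)
#define _PAGE_NX	((uint64_t)1 << 63)

/* hypothetical stand-ins for the kernel's _text/_etext and
   __start_rodata/__end_rodata linker symbols */
extern unsigned long kernel_text_start, kernel_text_end;
extern unsigned long rodata_start, rodata_end;

static int within(unsigned long x, unsigned long lo, unsigned long hi)
{
	return x >= lo && x < hi;
}

/* Strip bits the architecture must never see for these regions from the
   protections a caller asked for. */
static uint64_t static_protections_sketch(uint64_t prot, unsigned long vaddr,
					  unsigned long paddr)
{
	uint64_t forbidden = 0;

	if (within(paddr, 0x000a0000UL, 0x00100000UL))	/* BIOS area stays executable */
		forbidden |= _PAGE_NX;
	if (within(vaddr, kernel_text_start, kernel_text_end))	/* kernel text stays executable */
		forbidden |= _PAGE_NX;
	if (within(vaddr, rodata_start, rodata_end))	/* .rodata stays read-only */
		forbidden |= _PAGE_RW;

	return prot & ~forbidden;
}
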
16 years ago x86: cpa: make self-test depend on DEBUG_KERNEL
Ingo Molnar [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: cpa: make self-test depend on DEBUG_KERNEL

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: ioremap_nocache fix
Huang, Ying [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: ioremap_nocache fix

This patch fixes a bug in ioremap_nocache. ioremap_nocache() calls
__ioremap() with flags != 0 to do the real work, which calls
change_page_attr_addr() if phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT).
But some pages between 0 and end_pfn_map << PAGE_SHIFT are not covered by the
identity mapping, which makes change_page_attr_addr() fail.

This patch is based on latest x86 git and has been tested on x86_64 platform.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: add PAGE_KERNEL_EXEC_NOCACHE
Ingo Molnar [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: add PAGE_KERNEL_EXEC_NOCACHE

add PAGE_KERNEL_EXEC_NOCACHE.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix NX bit handling in change_page_attr()
Huang, Ying [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: fix NX bit handling in change_page_attr()

This patch fixes a bug in change_page_attr/change_page_attr_addr on
Intel i386/x86_64 CPUs.  After changing a page's attributes to be
executable with these functions, the page remained non-executable on
Intel i386/x86_64 CPUs.  On these CPUs, with PAE enabled, a page is
executable only if the "NX" bits of all three levels of page tables are
cleared (refer to section 4.13.2 of the Intel 64 and IA-32 Architectures
Software Developer's Manual).  So the bug is fixed by clearing the "NX"
bit of the PMD when splitting the huge PMD.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
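
A small standalone illustration of the rule the fix relies on (assuming
NX-capable paging): the NX bits of every level are effectively OR-ed, so the
new PMD produced by a huge-page split must have NX cleared as well:

#include <stdbool.h>
#include <stdint.h>

#define NX_BIT	((uint64_t)1 << 63)	/* execute-disable bit in PAE/64-bit entries */

/* A page is executable only if NX is clear in every paging-structure
   entry used to map it; the effective NX is the OR of all levels. */
static bool mapping_is_executable(uint64_t pmd_entry, uint64_t pte_entry)
{
	return ((pmd_entry | pte_entry) & NX_BIT) == 0;
}
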
16 years ago x86: change cpa to pfn based
Ingo Molnar [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: change cpa to pfn based

change CPA to pfn based.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: keep the BIOS area executable
Ingo Molnar [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: keep the BIOS area executable

keep the BIOS area executable.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: add PG_LEVEL enum
Thomas Gleixner [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: add PG_LEVEL enum

this way PG_LEVEL_1GB will be an easy change.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: clean up lookup_address() declarations
Thomas Gleixner [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: clean up lookup_address() declarations

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: clean up arch/x86/mm/pageattr.c
Ingo Molnar [Wed, 30 Jan 2008 12:34:04 +0000 (13:34 +0100)]
x86: clean up arch/x86/mm/pageattr.c

do some leftover cleanups in the now unified arch/x86/mm/pageattr.c
file.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: re-add clflush_cache_range()
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: re-add clflush_cache_range()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: unify pageattr_32.c and pageattr_64.c
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: unify pageattr_32.c and pageattr_64.c

unify the now perfectly identical pageattr_32/64.c files - no code changed.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: prepare for pageattr.c unification
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: prepare for pageattr.c unification

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: backmerge 64-bit details into 32-bit pageattr.c
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: backmerge 64-bit details into 32-bit pageattr.c

backmerge 64-bit details into 32-bit pageattr.c.

the pageattr_32.c and pageattr_64.c files are now identical.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: enable DEBUG_PAGEALLOC on 64-bit
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: enable DEBUG_PAGEALLOC on 64-bit

enable CONFIG_DEBUG_PAGEALLOC=y on 64-bit kernels too.

preliminary testing shows that it's working fine.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: add kernel_map_pages() to 64-bit
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: add kernel_map_pages() to 64-bit

needed for DEBUG_PAGEALLOC support and for unification.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: return -EINVAL in __change_page_attr(), instead of 0
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: return -EINVAL in __change_page_attr(), instead of 0

careful: might change driver behavior - but this is the right
return value.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: clean up differences between 64-bit and 32-bit
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: clean up differences between 64-bit and 32-bit

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: 64-bit, add the new split_large_page() function
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: 64-bit, add the new split_large_page() function

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: 64-bit pageattr.c, prepare for unification
Ingo Molnar [Wed, 30 Jan 2008 12:34:03 +0000 (13:34 +0100)]
x86: 64-bit pageattr.c, prepare for unification

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: change 64-bit pageattr to use set_pte_atomic()
Ingo Molnar [Wed, 30 Jan 2008 12:34:02 +0000 (13:34 +0100)]
x86: change 64-bit pageattr to use set_pte_atomic()

NOP change - same as set_pte().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: change 64-bit __change_page_attr() to struct page
Ingo Molnar [Wed, 30 Jan 2008 12:34:02 +0000 (13:34 +0100)]
x86: change 64-bit __change_page_attr() to struct page

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: simplify __change_page_attr()
Ingo Molnar [Wed, 30 Jan 2008 12:34:01 +0000 (13:34 +0100)]
x86: simplify __change_page_attr()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: introduce native_set_pte_atomic() on 64-bit too
Ingo Molnar [Wed, 30 Jan 2008 12:34:01 +0000 (13:34 +0100)]
x86: introduce native_set_pte_atomic() on 64-bit too

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: clean up and simplify 64-bit split_large_page()
Ingo Molnar [Wed, 30 Jan 2008 12:34:00 +0000 (13:34 +0100)]
x86: clean up and simplify 64-bit split_large_page()

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: unify header part of pageattr_64.c
Ingo Molnar [Wed, 30 Jan 2008 12:34:00 +0000 (13:34 +0100)]
x86: unify header part of pageattr_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: simplify pageattr_64.c
Ingo Molnar [Wed, 30 Jan 2008 12:33:59 +0000 (13:33 +0100)]
x86: simplify pageattr_64.c

simplify pageattr_64.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: prepare for the unification of the cpa code
Ingo Molnar [Wed, 30 Jan 2008 12:33:59 +0000 (13:33 +0100)]
x86: prepare for the unification of the cpa code

prepare for the unification of the cpa code, by unifying the
lookup_address() logic between 32-bit and 64-bit.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: prepare for the unification of the cpa code
Ingo Molnar [Wed, 30 Jan 2008 12:33:59 +0000 (13:33 +0100)]
x86: prepare for the unification of the cpa code

prepare for the unification of the cpa code, by unifying the
lookup_address() logic between 32-bit and 64-bit.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa self-test, WARN_ON()
Ingo Molnar [Wed, 30 Jan 2008 12:33:58 +0000 (13:33 +0100)]
x86: cpa self-test, WARN_ON()

add a WARN_ON() to the cpa-self-test failure branch.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: do not PSE on CONFIG_DEBUG_PAGEALLOC=y
Ingo Molnar [Wed, 30 Jan 2008 12:33:58 +0000 (13:33 +0100)]
x86: do not PSE on CONFIG_DEBUG_PAGEALLOC=y

get more testing of the c_p_a() code done by not turning off
PSE on DEBUG_PAGEALLOC.

this simplifies the early pagetable setup code, and tests
the largepage-splitup code quite heavily.

In the end, all the largepages will be split up pretty quickly,
so there's no difference to how DEBUG_PAGEALLOC worked before.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: simplify locking
Ingo Molnar [Wed, 30 Jan 2008 12:33:57 +0000 (13:33 +0100)]
x86: cpa: simplify locking

further simplify cpa locking: since the largepage-split is a
slowpath, use the pgd_lock for the whole operation, instead
of the mmap_sem.

This also makes it suitable for DEBUG_PAGEALLOC purposes again.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: simplify cpa largepage split, #3
Ingo Molnar [Wed, 30 Jan 2008 12:33:57 +0000 (13:33 +0100)]
x86: simplify cpa largepage split, #3

simplify cpa largepage split: push the reference protection bits
into the largepage-splitting function.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa self-test fixes
Ingo Molnar [Wed, 30 Jan 2008 12:33:56 +0000 (13:33 +0100)]
x86: cpa self-test fixes

cpa self-test fixes. change_page_attr_addr() was buggy, it
passed in a virtual address as a physical one.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: further cpa largepage-split cleanups
Ingo Molnar [Wed, 30 Jan 2008 12:33:56 +0000 (13:33 +0100)]
x86: further cpa largepage-split cleanups

further cpa largepage-split cleanups: make the split-up an isolated piece
of functionality, without leaking details back into __change_page_attr().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: simplify 32-bit cpa largepage splitting
Ingo Molnar [Wed, 30 Jan 2008 12:33:55 +0000 (13:33 +0100)]
x86: simplify 32-bit cpa largepage splitting

simplify 32-bit cpa largepage splitting: do a pure split and repeat
the pte lookup to get the new pte modified.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: simplify the 32-bit cpa code
Ingo Molnar [Wed, 30 Jan 2008 12:33:55 +0000 (13:33 +0100)]
x86: simplify the 32-bit cpa code

simplify the 32-bit cpa code.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix some bugs about EFI runtime code mapping
Huang, Ying [Wed, 30 Jan 2008 12:33:55 +0000 (13:33 +0100)]
x86: fix some bugs about EFI runtime code mapping

This patch fixes some bugs in making EFI runtime code executable.

- Use change_page_attr in i386 too, because the runtime code may be
  mapped through something other than ioremap.

- If there is no _PAGE_NX in __supported_pte_mask, change_page_attr
  is not called.

- Make efi_ioremap map pages as PAGE_KERNEL_EXEC_NOCACHE, because EFI runtime
  code may be mapped through efi_ioremap.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix more non-global TLB flushes
Ingo Molnar [Wed, 30 Jan 2008 12:33:54 +0000 (13:33 +0100)]
x86: fix more non-global TLB flushes

fix more __flush_tlb() instances, out of caution.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix early_ioremap() on 64-bit
Andi Kleen [Wed, 30 Jan 2008 12:33:54 +0000 (13:33 +0100)]
x86: fix early_ioremap() on 64-bit

Fix early_ioremap() on x86-64

I had ACPI failures on several machines for a few days. The symptom
was NUMA nodes not getting detected or, worse, cores not getting detected.
They all came from ACPI not being able to read various of its tables. I finally
bisected it down to Jeremy's "put _PAGE_GLOBAL into PAGE_KERNEL" change.
With that, the fix was fairly obvious. The problem was that early_ioremap()
didn't use an "_all" flush that would affect the global PTEs too. So
with global bits now being used everywhere, an early_ioremap would
not actually flush a mapping if something else had been mapped previously
in that slot (which can happen with early_iounmap in between).

This patch changes all flushes in init_64.c to be __flush_tlb_all()
and fixes the problem here.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
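
For context, a hedged sketch of why a plain CR3-reload flush is not enough once
PAGE_KERNEL carries _PAGE_GLOBAL; toggling CR4.PGE is the classic way a global
flush is done (illustrative, not the kernel's exact helpers):

#define X86_CR4_PGE	(1UL << 7)	/* CR4.PGE: global-page enable */

static inline unsigned long read_cr4_sketch(void)
{
	unsigned long cr4;

	asm volatile("mov %%cr4, %0" : "=r" (cr4));
	return cr4;
}

static inline void write_cr4_sketch(unsigned long cr4)
{
	asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
}

/* A CR3 reload leaves _PAGE_GLOBAL TLB entries in place; toggling
   CR4.PGE flushes those too, which is what the "_all" variant is for. */
static inline void flush_tlb_all_sketch(void)
{
	unsigned long cr4 = read_cr4_sketch();

	write_cr4_sketch(cr4 & ~X86_CR4_PGE);
	write_cr4_sketch(cr4);
}
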
16 years ago x86: remove set_kernel_exec()
Andi Kleen [Wed, 30 Jan 2008 12:33:53 +0000 (13:33 +0100)]
x86: remove set_kernel_exec()

The SMP trampoline always runs in real mode, so making it executable
in the page tables doesn't make much sense because it executes
before page tables are set up. That was the only user of
set_kernel_exec(). Remove set_kernel_exec().

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: introduce canon_pgprot()
Andi Kleen [Wed, 30 Jan 2008 12:33:53 +0000 (13:33 +0100)]
x86: introduce canon_pgprot()

Introduce canon_pgprot()

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: c_p_a() make it more robust against use of PAT bits
Andi Kleen [Wed, 30 Jan 2008 12:33:52 +0000 (13:33 +0100)]
x86: c_p_a() make it more robust against use of PAT bits

Use the page table level instead of the PSE bit to check if the PTE
is for a 4K page or not. This makes the code more robust when the PAT
bit is changed because the PAT bit on 4K pages is in the same position
as the PSE bit.

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
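
A hedged sketch of the level-based check (kernel context; the
lookup_address()/PG_LEVEL interface is the one added elsewhere in this series,
header and exact signature assumed):

#include <asm/pgtable.h>	/* assumed location of lookup_address()/PG_LEVEL_* */

static int needs_split(unsigned long address)
{
	unsigned int level;
	pte_t *kpte = lookup_address(address, &level);

	/* bit 7 is PSE in a PMD/PUD entry but PAT in a 4K PTE, so trust
	   the level reported by the walk rather than testing the bit */
	return kpte && level != PG_LEVEL_4K;
}
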
16 years ago x86: c_p_a() fix: reorder TLB / cache flushes to follow Intel recommendation
Andi Kleen [Wed, 30 Jan 2008 12:33:52 +0000 (13:33 +0100)]
x86: c_p_a() fix: reorder TLB / cache flushes to follow Intel recommendation

Intel recommends first flushing the TLBs and then the caches
on caching attribute changes. c_p_a() previously did it the
other way round. Reorder that.

The procedure is still not fully compliant with the Intel documentation,
because Intel recommends an all-CPU synchronization step between
the TLB flushes and the cache flushes.

However, on all new Intel CPUs this is now meaningless anyway,
because they support Self-Snoop and can skip the cache flush
step.

[ mingo@elte.hu: decoupled from clflush and ported it to x86.git ]

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix c_p_a() boot crash
Andi Kleen [Wed, 30 Jan 2008 12:33:52 +0000 (13:33 +0100)]
x86: fix c_p_a() boot crash

fix:

> hm, i just found a failing 64-bit .config while testing your CPA
> patchset:
>
>  [    1.916541] CPA mapping 4k 0 large 2048 gb 0 x 0[0-0] miss 0
>  [    1.919874] Unable to handle kernel paging request at 000000000335aea8 RIP:
>  [    1.919874]  [<ffffffff8021d2d3>] change_page_attr+0x3/0x61
>  [    1.919874] PGD 0
>  [    1.919874] Oops: 0000 [1]
>  [    1.919874] CPU 0

This handles addresses which don't have a mem_map entry.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: don't drop NX bit in pte modifier functions on 32-bit
Andi Kleen [Wed, 30 Jan 2008 12:33:51 +0000 (13:33 +0100)]
x86: don't drop NX bit in pte modifier functions on 32-bit

The pte_* modifier functions that cleared bits dropped the NX bit on 32bit
PAE because they only worked in int, but NX is in bit 63. Fix that
by adding appropriate casts so that the arithmetic happens as long long
on PAE kernels.

I decided to just use 64bit arithmetic instead of open coding like
pte_modify() because gcc should generate good enough code for that now.

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
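
A standalone user-space demonstration (not kernel code) of the truncation this
describes: clearing a flag held in a 32-bit type silently wipes the upper half
of a 64-bit pte value, including NX in bit 63:

#include <stdint.h>
#include <stdio.h>

#define NX	((uint64_t)1 << 63)

int main(void)
{
	uint64_t pte = NX | 0x1063;		/* NX plus some low flag/frame bits */
	uint32_t rw = 0x2;			/* a flag held in a 32-bit type */

	uint64_t buggy = pte & ~rw;		/* ~rw is only 32 bits wide */
	uint64_t fixed = pte & ~(uint64_t)rw;	/* widen first, as the patch does */

	printf("buggy: NX %s\n", (buggy & NX) ? "kept" : "lost");	/* prints "lost" */
	printf("fixed: NX %s\n", (fixed & NX) ? "kept" : "lost");	/* prints "kept" */
	return 0;
}
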
16 years ago x86: add pte_pgprot to 32-bit
Andi Kleen [Wed, 30 Jan 2008 12:33:51 +0000 (13:33 +0100)]
x86: add pte_pgprot to 32-bit

64bit already had it.

Needed for later patches.

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: shrink __PAGE_KERNEL/__PAGE_KERNEL_EXEC on non PAE kernels
Andi Kleen [Wed, 30 Jan 2008 12:33:50 +0000 (13:33 +0100)]
x86: shrink __PAGE_KERNEL/__PAGE_KERNEL_EXEC on non PAE kernels

No need to make it 64bit there.

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: remove unnecessary masking of address
Andi Kleen [Wed, 30 Jan 2008 12:33:50 +0000 (13:33 +0100)]
x86: cpa: remove unnecessary masking of address

virt_to_page does not care about the bits below the page granularity.
So don't mask them.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: cpa: use wbinvd() macro instead of inline assembly in 64bit c_p_a()
Andi Kleen [Wed, 30 Jan 2008 12:33:50 +0000 (13:33 +0100)]
x86: cpa: use wbinvd() macro instead of inline assembly in 64bit c_p_a()

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: early_ioremap_init(), enhance warnings
Ingo Molnar [Wed, 30 Jan 2008 12:33:49 +0000 (13:33 +0100)]
x86: early_ioremap_init(), enhance warnings

enhance the debug warning in early_ioremap_init().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix EISA ioremap
Ingo Molnar [Wed, 30 Jan 2008 12:33:49 +0000 (13:33 +0100)]
x86: fix EISA ioremap

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: fix early_ioremap()/btmap
Ingo Molnar [Wed, 30 Jan 2008 12:33:48 +0000 (13:33 +0100)]
x86: fix early_ioremap()/btmap

fix a long-standing weakness of the early-ioremap allocator: it
uses a single pgd entry for the boot mappings, and was not properly
protecting itself against crossing a 2MB (4MB) boundary.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
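
A hedged sketch of the kind of guard this implies; the constant and the helper
are illustrative, not the kernel's actual fixmap code:

#define PMD_SHIFT	21	/* 2MB units; 22 for the non-PAE 4MB case */

/* The temporary boot mappings all live under the single pmd that
   early_ioremap() installs, so the first and last fixmap slot must not
   straddle a pmd boundary. */
static int btmap_crosses_pmd(unsigned long begin_vaddr, unsigned long end_vaddr)
{
	return (begin_vaddr >> PMD_SHIFT) != (end_vaddr >> PMD_SHIFT);
}
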
16 years ago x86: add early_ioremap() leak detection
Ingo Molnar [Wed, 30 Jan 2008 12:33:47 +0000 (13:33 +0100)]
x86: add early_ioremap() leak detection

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: make early_ioremap_debug early_param
Huang, Ying [Wed, 30 Jan 2008 12:33:45 +0000 (13:33 +0100)]
x86: make early_ioremap_debug early_param

This patch makes "early_ioremap_debug" an early parameter, because
early_ioremap()/early_iounmap() are only used during the early boot stage.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago x86: early_ioremap(), debugging
Ingo Molnar [Wed, 30 Jan 2008 12:33:45 +0000 (13:33 +0100)]
x86: early_ioremap(), debugging

add early_ioremap() debug printouts via the early_ioremap_debug
boot option.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>