From: Nicolas Pitre
Date: Fri, 12 Jun 2009 02:09:29 +0000 (+0100)
Subject: [ARM] 5545/2: add flush_kernel_dcache_page() for ARM
X-Git-Tag: MMI-PSA29.97-13-9~27926^2
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=73be1591579084a8103a7005dd3172f3e9dd7362;p=GitHub%2FMotorolaMobilityLLC%2Fkernel-slsi.git

[ARM] 5545/2: add flush_kernel_dcache_page() for ARM

Without this, the default implementation is a no-op, which is completely
wrong with a VIVT cache, and usage of sg_copy_buffer() produces
unpredictable results.

Tested-by: Sebastian Andrzej Siewior
CC: stable@kernel.org
Signed-off-by: Nicolas Pitre
Signed-off-by: Russell King
---

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index bb7d695f3900..1a711ea8418b 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -429,6 +429,14 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 	__flush_anon_page(vma, page, vmaddr);
 }
 
+#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
+static inline void flush_kernel_dcache_page(struct page *page)
+{
+	/* highmem pages are always flushed upon kunmap already */
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()) && !PageHighMem(page))
+		__cpuc_flush_dcache_page(page_address(page));
+}
+
 #define flush_dcache_mmap_lock(mapping) \
 	spin_lock_irq(&(mapping)->tree_lock)
 #define flush_dcache_mmap_unlock(mapping) \
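
For context, a minimal usage sketch of the caller side, not part of the patch:
the helper name fill_page_from_buf() is hypothetical, but the pattern mirrors
what sg_copy_buffer() does internally: write into a page through its kernel
mapping, then call flush_kernel_dcache_page() before unmapping so a VIVT or
aliasing-VIPT D-cache does not hide the new data from other mappings of the
same page.

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical driver helper: copy a buffer into a page and make the
 * write visible past the kernel's D-cache alias. */
static void fill_page_from_buf(struct page *page, const void *buf, size_t len)
{
	void *vaddr = kmap(page);		/* kernel virtual mapping of the page */

	memcpy(vaddr, buf, len);		/* modify the page through that mapping */
	flush_kernel_dcache_page(page);		/* flush the kernel-side D-cache lines */
	kunmap(page);
}

With the previous no-op default, the flush_kernel_dcache_page() call above did
nothing on ARM, so the memcpy() result could sit only in the cache lines of the
kernel mapping and never reach the lines seen through a user mapping on VIVT
hardware; the added inline makes the call actually flush those lines (lowmem
only, since highmem pages are flushed when they are kunmapped).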