omap4: l2x0: enable instruction and data prefetching
Enabling L2 prefetching gives a measurable performance improvement, as
shown by the memory test results below from a PandaBoard ES2.1. It is
worth enabling even though it costs a small amount of write bandwidth.
(rebased to k.org)
Prefetching is normally enabled at both the L1 and L2 levels together.
The CP15 prefetch engines, however, are under secure-side control and
cannot be enabled on GP devices such as the PandaBoard. Enabling only
the PL310 prefetch still provides a clear performance improvement, as
shown in the data below (collected on Ubuntu), and is worth pulling in.
The prefetch option enables automatic next-line prefetching: whenever
the PL310 receives a cacheable read request, it also prefetches the
following cache line.
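For reference, the relevant knobs are the two prefetch-enable bits in
the PL310 auxiliary control register (bit 28 for data, bit 29 for
instruction). Below is a minimal sketch, not the actual omap4-common.c
change, of feeding them through the common l2x0_init() interface. The
L2X0_AUX_CTRL_*_PREFETCH_SHIFT names are assumed to come from
<asm/hardware/cache-l2x0.h>, the init function name is made up for
illustration, and the OMAP4-specific parts of the real init (way size,
associativity, secure ROM calls on GP devices) are left out.

/*
 * Sketch only: enable PL310 instruction and data prefetch
 * (bits 29 and 28 of the auxiliary control register) via the
 * common l2x0_init() interface.
 */
#include <linux/init.h>
#include <linux/io.h>
#include <asm/hardware/cache-l2x0.h>

static void __iomem *l2cache_base;	/* assumed mapped elsewhere */

static int __init l2_prefetch_sketch_init(void)
{
	u32 aux_val, aux_mask;

	/*
	 * Force instruction and data prefetch on; keep every other
	 * auxiliary control bit at its current hardware value
	 * (l2x0_init() applies aux = (aux & aux_mask) | aux_val).
	 */
	aux_val  = (1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) |
		   (1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT);
	aux_mask = ~aux_val;

	l2x0_init(l2cache_base, aux_val, aux_mask);

	return 0;
}
early_initcall(l2_prefetch_sketch_init);

Using aux_mask = ~aux_val preserves every other auxiliary control bit
at its boot-time value, so only the prefetch behaviour changes.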
Measurement Data
================
STOCK 10.10 WITHOUT PATCH
========================
~# ./memspeed
size
8388608 8192k 8M
offset
8388608, 0
buffers 0x2aaad000 0x2b2ad000
copy libc 133 MB/s
copy Android v5 273 MB/s
copy Android NEON 235 MB/s
copy INT32 116 MB/s
copy ASM ARM 187 MB/s
copy ASM VLDM 64 204 MB/s
copy ASM VLDM 128 173 MB/s
copy ASM VLD1 216 MB/s
read ASM ARM 286 MB/s
read ASM VLDM 242 MB/s
read ASM VLD1 286 MB/s
write libc 1947 MB/s
write ASM ARM 1943 MB/s
write ASM VSTM 1942 MB/s
write ASM VST1 1935 MB/s
10.10 + PATCH
=============
~# ./memspeed
size
8388608 8192k 8M
offset
8388608, 0
buffers 0x2ab17000 0x2b317000
copy libc 129 MB/s
copy Android v5 256 MB/s
copy Android NEON 356 MB/s
copy INT32 127 MB/s
copy ASM ARM 321 MB/s
copy ASM VLDM 64 337 MB/s
copy ASM VLDM 128 321 MB/s
copy ASM VLD1 350 MB/s
read ASM ARM 496 MB/s
read ASM VLDM 470 MB/s
read ASM VLD1 488 MB/s
write libc 1701 MB/s
write ASM ARM 1682 MB/s
write ASM VSTM 1693 MB/s
write ASM VST1 1681 MB/s
Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>