arm64/lib: copy_page: use consistent prefetch stride
author		Ard Biesheuvel <ard.biesheuvel@linaro.org>
		Wed, 12 Jul 2017 14:44:14 +0000 (15:44 +0100)
committer	Will Deacon <will.deacon@arm.com>
		Tue, 25 Jul 2017 09:04:42 +0000 (10:04 +0100)
The optional prefetch instructions in the copy_page() routine are
inconsistent: at the start of the function, two cache lines are
prefetched beyond the one being loaded in the first iteration, but
in the copy loop, the prefetch runs one line further ahead, i.e.,
three lines beyond the one currently being loaded. This appears to
be unintentional, so let's fix it by making the preamble prefetch
three lines ahead as well.
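To illustrate, a minimal sketch of the mismatch (assuming the
128-byte cache line size implied by the prefetch stride, which
matches the 128 bytes copied per loop iteration):

	// preamble: loads cover [x1, #0] through [x1, #112]
	prfm	pldl1strm, [x1, #128]	// one line ahead
	prfm	pldl1strm, [x1, #256]	// two lines ahead
	// loop: x1 points at the chunk being loaded this iteration
	prfm	pldl1strm, [x1, #384]	// three lines ahead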

While at it, fix the comment style and white space.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/lib/copy_page.S

index c3cd65e3181414bcaf3ec5b50937f95bf895570b..076c43715e64afe7fcc12eabd753d355ca94558f 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
  */
 ENTRY(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-       # Prefetch two cache lines ahead.
-       prfm    pldl1strm, [x1, #128]
-       prfm    pldl1strm, [x1, #256]
+       // Prefetch three cache lines ahead.
+       prfm    pldl1strm, [x1, #128]
+       prfm    pldl1strm, [x1, #256]
+       prfm    pldl1strm, [x1, #384]
 alternative_else_nop_endif
 
        ldp     x2, x3, [x1]
@@ -50,7 +51,7 @@ alternative_else_nop_endif
        subs    x18, x18, #128
 
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-       prfm    pldl1strm, [x1, #384]
+       prfm    pldl1strm, [x1, #384]
 alternative_else_nop_endif
 
        stnp    x2, x3, [x0]
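
For context, an abridged sketch of the routine after the patch (not
the file verbatim: most of the ldp/stnp pairs and the final stores
after the loop are elided), showing the preamble and the loop now
agreeing on a three-line prefetch distance:

	ENTRY(copy_page)
	alternative_if ARM64_HAS_NO_HW_PREFETCH
		// Prefetch three cache lines ahead.
		prfm	pldl1strm, [x1, #128]
		prfm	pldl1strm, [x1, #256]
		prfm	pldl1strm, [x1, #384]
	alternative_else_nop_endif

		ldp	x2, x3, [x1]		// first chunk: loads from
		...				// [x1, #0] up to [x1, #112]
		mov	x18, #(PAGE_SIZE - 128)	// bytes remaining
		add	x1, x1, #128		// x1 -> next chunk to load
	1:
		subs	x18, x18, #128
	alternative_if ARM64_HAS_NO_HW_PREFETCH
		prfm	pldl1strm, [x1, #384]	// three lines beyond the
	alternative_else_nop_endif		// chunk loaded just below
		stnp	x2, x3, [x0]		// store previous chunk
		ldp	x2, x3, [x1]		// ...and load the next one
		...
		add	x0, x0, #128
		add	x1, x1, #128
		b.gt	1b
		...				// store the last chunk
		ret
	ENDPROC(copy_page)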