arm64: Remove redundant mov from LL/SC cmpxchg
author Robin Murphy <robin.murphy@arm.com>
Fri, 12 May 2017 12:48:41 +0000 (13:48 +0100)
committer Catalin Marinas <catalin.marinas@arm.com>
Mon, 15 May 2017 17:30:10 +0000 (18:30 +0100)
commit 8df728e1ae614f592961e51f65d3e3212ede5a75
tree df8284b2f7d13200752c5afbbbd98474d256af8c
parent 2ea659a9ef488125eb46da6eb571de5eae5c43f6
The cmpxchg implementation introduced by commit c342f78217e8 ("arm64:
cmpxchg: patch in lse instructions when supported by the CPU") performs
an apparently redundant register move of [old] to [oldval] in the
success case - it always uses the same register width as [oldval] was
originally loaded with, and is only executed when [old] and [oldval] are
known to be equal anyway.
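The data flow can be sketched in plain C (a simplified model of an LL/SC compare-and-swap loop, not the actual arm64 assembly; the function name cmpxchg_model and the collapsing of load-linked/store-conditional into ordinary accesses are illustrative assumptions):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified C model of an LL/SC cmpxchg (a sketch only: the real
 * implementation uses ldxr/stxr in a retry loop, which this model
 * collapses into plain memory accesses).
 */
static uint64_t cmpxchg_model(uint64_t *ptr, uint64_t old, uint64_t new)
{
	uint64_t oldval;

	oldval = *ptr;              /* "load-linked" the current value */
	if (oldval != old)
		return oldval;      /* mismatch: fail, return what was seen */
	*ptr = new;                 /* "store-conditional" (assumed to succeed) */

	/*
	 * The removed instruction corresponded to "oldval = old;" at this
	 * point: the comparison above has already established that
	 * oldval == old on this path, so the move changes nothing.
	 */
	return oldval;
}
```

On the success path the return value is identical with or without the move, which is why dropping it is purely a code-size win.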

The only effect it seemingly does have is to take up a surprising amount
of space in the kernel text, as removing it reveals:

   text    data     bss     dec     hex filename
12426658 1348614 4499749 18275021 116dacd vmlinux.o.new
12429238 1348614 4499749 18277601 116e4e1 vmlinux.o.old

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/include/asm/atomic_ll_sc.h