bpf, arm64: fix stack_depth tracking in combination with tail calls
author	Daniel Borkmann <daniel@iogearbox.net>
	Sun, 28 Jan 2018 23:36:47 +0000 (00:36 +0100)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Wed, 31 Jan 2018 13:03:50 +0000 (14:03 +0100)
commit	c43db1a3c7caf82a6d59401e617fd5c6fc0bf40d
tree	40dfbb1ebcae912747f090e2c93dac724917d369
parent	a17536742bb9a5df561cce54c7cc3cd1e2cd480d

[ upstream commit a2284d912bfc865cdca4c00488e08a3550f9a405 ]

Using dynamic stack_depth tracking in the arm64 JIT is currently broken
in combination with tail calls. In the prologue, we cache ctx->stack_size
and adjust the SP register to set up the function call stack, and tear it
down again in the epilogue. The problem is that a tail call jumps into a
different program whose ctx->stack_size may not match the caller's cached
value, so the stack is set up and torn down with mismatched sizes.

One way to fix the problem with minimal overhead is to re-adjust SP in
emit_bpf_tail_call() by the current program's ctx->stack_size before
jumping into the next program. Tested on Cavium ThunderX ARMv8.

Fixes: f1c9eed7f437 ("bpf, arm64: take advantage of stack_depth tracking")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm64/net/bpf_jit_comp.c