bpf: Remove recursion prevention from rcu free callback
author Thomas Gleixner <tglx@linutronix.de>
Mon, 24 Feb 2020 14:01:39 +0000 (15:01 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 1 Oct 2020 18:40:08 +0000 (20:40 +0200)
[ Upstream commit 8a37963c7ac9ecb7f86f8ebda020e3f8d6d7b8a0 ]

If an element is freed via RCU, recursion into BPF instrumentation
functions is not a concern. The element is already detached from the map,
and the RCU callback does not hold any locks on which a kprobe-, perf-event-
or tracepoint-attached BPF program could deadlock.
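
For context, the guard removed below pairs with the recursion check on the
BPF tracing entry path: that path bumps bpf_prog_active and bails out if a
program is already active on the CPU. A simplified, illustrative sketch of
that entry-side check (names abbreviated, not the exact kernel source):

	preempt_disable();
	if (__this_cpu_inc_return(bpf_prog_active) != 1) {
		/* Another BPF program is already running on this CPU,
		 * e.g. a kprobe fired inside the allocator while that
		 * program's context frees memory; skip the nested run so
		 * it cannot deadlock on locks the outer context holds.
		 */
		goto out;
	}
	ret = bpf_prog_run(prog, ctx);	/* invoke the attached program */
out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();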

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200224145643.259118710@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/bpf/hashtab.c

index 8648d7d29708138326f1582df5ae2dd37d95c5ec..1253261fdb3ba936a70daf85760642b5ab7f5619 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -427,15 +427,7 @@ static void htab_elem_free_rcu(struct rcu_head *head)
        struct htab_elem *l = container_of(head, struct htab_elem, rcu);
        struct bpf_htab *htab = l->htab;
 
-       /* must increment bpf_prog_active to avoid kprobe+bpf triggering while
-        * we're calling kfree, otherwise deadlock is possible if kprobes
-        * are placed somewhere inside of slub
-        */
-       preempt_disable();
-       __this_cpu_inc(bpf_prog_active);
        htab_elem_free(htab, l);
-       __this_cpu_dec(bpf_prog_active);
-       preempt_enable();
 }
 
 static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
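
Reconstructed from the hunk above, the callback after this change reduces to
a plain free of the already-detached element:

static void htab_elem_free_rcu(struct rcu_head *head)
{
	struct htab_elem *l = container_of(head, struct htab_elem, rcu);
	struct bpf_htab *htab = l->htab;

	/* The element was unlinked from the map before call_rcu(), and no
	 * locks are held here that an attached BPF program could deadlock
	 * on, so no recursion guard is needed around the free.
	 */
	htab_elem_free(htab, l);
}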