rhashtable: Fix potential crash on destroy in rhashtable_shrink
author    Herbert Xu <herbert@gondor.apana.org.au>
          Tue, 3 Feb 2015 20:33:22 +0000 (07:33 +1100)
committer David S. Miller <davem@davemloft.net>
          Thu, 5 Feb 2015 04:34:52 +0000 (20:34 -0800)
The existing being_destroyed check in rhashtable_expand is not
enough: if a shrink is started after all elements in the table
have been freed, that will also crash.

This patch adds a being_destroyed check to the deferred worker
thread so that we bail out as soon as we take the lock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
lib/rhashtable.c

index c41e21096373156ab78cb5279280234e2dcb656d..904b419b72f513564d020cdc02c39a6b2110b321 100644 (file)
@@ -487,6 +487,9 @@ static void rht_deferred_worker(struct work_struct *work)
 
        ht = container_of(work, struct rhashtable, run_work);
        mutex_lock(&ht->mutex);
+       if (ht->being_destroyed)
+               goto unlock;
+
        tbl = rht_dereference(ht->tbl, ht);
 
        if (ht->p.grow_decision && ht->p.grow_decision(ht, tbl->size))
@@ -494,6 +497,7 @@ static void rht_deferred_worker(struct work_struct *work)
        else if (ht->p.shrink_decision && ht->p.shrink_decision(ht, tbl->size))
                rhashtable_shrink(ht);
 
+unlock:
        mutex_unlock(&ht->mutex);
 }
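
For reference, a minimal sketch of what rht_deferred_worker() looks like
with this check applied, reconstructed from the hunks above. The local
variable declarations and the rhashtable_expand() call between the two
hunks are assumed from the surrounding (unshown) context and are not part
of this diff.

static void rht_deferred_worker(struct work_struct *work)
{
	struct rhashtable *ht;		/* assumed declaration */
	struct bucket_table *tbl;	/* assumed declaration */

	ht = container_of(work, struct rhashtable, run_work);
	mutex_lock(&ht->mutex);

	/* Bail out as soon as we take the lock if the table is being
	 * torn down, so we never expand or shrink a table whose
	 * elements have already been freed.
	 */
	if (ht->being_destroyed)
		goto unlock;

	tbl = rht_dereference(ht->tbl, ht);

	if (ht->p.grow_decision && ht->p.grow_decision(ht, tbl->size))
		rhashtable_expand(ht);	/* assumed from context */
	else if (ht->p.shrink_decision && ht->p.shrink_decision(ht, tbl->size))
		rhashtable_shrink(ht);

unlock:
	mutex_unlock(&ht->mutex);
}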