nfsd regression since delayed fput()
author	Al Viro <viro@zeniv.linux.org.uk>
	Sun, 20 Oct 2013 12:44:39 +0000 (08:44 -0400)
committer	Al Viro <viro@zeniv.linux.org.uk>
	Sun, 20 Oct 2013 12:44:39 +0000 (08:44 -0400)
Background: nfsd v[23] has had a throughput regression since delayed fput
went in; every read or write ends up doing fput() and we get a pair
of extra context switches out of that (plus quite a bit of work
in queue_work itself, apparently).  Use of schedule_delayed_work()
gives it a chance to accumulate a bit before we do __fput() on all
of them.  I'm not too happy about that solution, but... on at least
one real-world setup it recovers about 10% of the throughput loss we got
from the switch to delayed fput.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
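
For context (not part of the hunks below), the consumer side of this batching is
the delayed_fput() worker, which drains delayed_fput_list in one pass and runs
__fput() on every file that accumulated while the delayed work was pending.  A
minimal sketch, approximating what fs/file_table.c looks like around this change;
exact details may differ:

static LLIST_HEAD(delayed_fput_list);

static void delayed_fput(struct work_struct *unused)
{
	/* Grab everything queued so far in one atomic llist operation. */
	struct llist_node *node = llist_del_all(&delayed_fput_list);
	struct llist_node *next;

	for (; node; node = next) {
		next = llist_next(node);
		/* Do the real teardown for each batched struct file. */
		__fput(llist_entry(node, struct file, f_u.fu_llist));
	}
}

The single-jiffy delay passed to schedule_delayed_work() is what creates the
batching window: several fput() calls can land on the llist before the worker
fires, so the pair of extra context switches is paid once per batch rather than
once per read or write.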
fs/file_table.c

index abdd15ad13c9c52bf672465787e3811c06ef7224..e900ca518635bcecb1cfdbe9d54cab640ec49333 100644
@@ -297,7 +297,7 @@ void flush_delayed_fput(void)
        delayed_fput(NULL);
 }
 
-static DECLARE_WORK(delayed_fput_work, delayed_fput);
+static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput);
 
 void fput(struct file *file)
 {
@@ -317,7 +317,7 @@ void fput(struct file *file)
                }
 
                if (llist_add(&file->f_u.fu_llist, &delayed_fput_list))
-                       schedule_work(&delayed_fput_work);
+                       schedule_delayed_work(&delayed_fput_work, 1);
        }
 }