From e22553e2a25ed3f2a9c874088e0f20cdcd97c7b0 Mon Sep 17 00:00:00 2001
From: Chris Mason
Date: Tue, 17 Feb 2015 13:46:07 -0800
Subject: [PATCH] eventfd: don't take the spinlock in eventfd_poll

The spinlock in eventfd_poll is trying to protect the count of events so
it can decide if it should return POLLIN, POLLERR, or POLLOUT.  But,
because of the way we drop the lock after calling poll_wait, and drop it
again before returning, we have the same pile of races with the lock as
we do with a single read of ctx->count().

This replaces the lock with a read barrier and single read.

eventfd_write does a single bump of ctx->count, so this should not add
new races with adding events.  eventfd_read is similar, it will do a
single decrement with the lock held, and so we're making the race with
concurrent readers slightly larger.

This spinlock is the top CPU user in kernel code during one of our
workloads.  Removing it gives us a ~2% boost.

[arnd@arndb.de: avoid unused variable warning]
[dan.carpenter@oracle.com: type bug in eventfd_poll()]
Signed-off-by: Chris Mason
Cc: Davide Libenzi
Signed-off-by: Arnd Bergmann
Signed-off-by: Dan Carpenter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 fs/eventfd.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/eventfd.c b/fs/eventfd.c
index 4b0a226024fa..8d0c0df01854 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -118,18 +118,18 @@ static unsigned int eventfd_poll(struct file *file, poll_table *wait)
 {
 	struct eventfd_ctx *ctx = file->private_data;
 	unsigned int events = 0;
-	unsigned long flags;
+	u64 count;
 
 	poll_wait(file, &ctx->wqh, wait);
+	smp_rmb();
+	count = ctx->count;
 
-	spin_lock_irqsave(&ctx->wqh.lock, flags);
-	if (ctx->count > 0)
+	if (count > 0)
 		events |= POLLIN;
-	if (ctx->count == ULLONG_MAX)
+	if (count == ULLONG_MAX)
 		events |= POLLERR;
-	if (ULLONG_MAX - 1 > ctx->count)
+	if (ULLONG_MAX - 1 > count)
 		events |= POLLOUT;
-	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
 
 	return events;
 }
-- 
2.20.1
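
Not part of the patch above; a minimal userspace sketch, assuming a Linux
system with eventfd(2) and poll(2), that exercises the eventfd_poll() path
changed by this patch.  It shows which revents poll() should report for a
counter of 0 (writable only) and 1 (readable and writable); the expected
values in the comments follow from the POLLIN/POLLOUT conditions in the
rewritten function, not from captured output.

/* Illustrative demo only, not part of the kernel change. */
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
	int efd = eventfd(0, 0);	/* counter starts at 0 */
	struct pollfd pfd = { .fd = efd, .events = POLLIN | POLLOUT };
	uint64_t val = 1;

	if (efd < 0)
		return 1;

	poll(&pfd, 1, 0);
	printf("count=0: revents=0x%x\n", pfd.revents);	/* expect POLLOUT only */

	if (write(efd, &val, sizeof(val)) != sizeof(val))	/* bump the counter */
		return 1;

	poll(&pfd, 1, 0);
	printf("count=1: revents=0x%x\n", pfd.revents);	/* expect POLLIN | POLLOUT */

	close(efd);
	return 0;
}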