From: Linus Torvalds
Date: Wed, 4 Sep 2013 18:55:10 +0000 (-0700)
Subject: Merge branch 'x86-spinlocks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=816434ec4a67;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

Merge branch 'x86-spinlocks-for-linus' of git://git./linux/kernel/git/tip/tip

Pull x86 spinlock changes from Ingo Molnar:
 "The biggest change here are paravirtualized ticket spinlocks (PV
  spinlocks), which bring a nice speedup on various benchmarks.

  The KVM host side will come to you via the KVM tree"

* 'x86-spinlocks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/kvm/guest: Fix sparse warning: "symbol 'klock_waiting' was not declared as static"
  kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor
  kvm guest: Add configuration support to enable debug information for KVM Guests
  kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi
  xen, pvticketlock: Allow interrupts to be enabled while blocking
  x86, ticketlock: Add slowpath logic
  jump_label: Split jumplabel ratelimit
  x86, pvticketlock: When paravirtualizing ticket locks, increment by 2
  x86, pvticketlock: Use callee-save for lock_spinning
  xen, pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
  xen, pvticketlock: Xen implementation for PV ticket locks
  xen: Defer spinlock setup until boot CPU setup
  x86, ticketlock: Collapse a layer of functions
  x86, ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
  x86, spinlock: Replace pv spinlocks with pv ticketlocks
---

816434ec4a674fcdb3c2221a6dffdc8f34020550
diff --cc arch/x86/include/asm/spinlock.h
index e0e668422c75,d68883dd133c..bf156ded74b5
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@@ -34,11 -37,31 +37,36 @@@
  # define UNLOCK_LOCK_PREFIX
  #endif
  
+ /* How long a lock should spin before we consider blocking */
+ #define SPIN_THRESHOLD	(1 << 15)
+ 
+ extern struct static_key paravirt_ticketlocks_enabled;
+ static __always_inline bool static_key_false(struct static_key *key);
+ 
+ #ifdef CONFIG_PARAVIRT_SPINLOCKS
+ 
+ static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
+ {
+ 	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
+ }
+ 
+ #else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+ static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock,
+ 						   __ticket_t ticket)
+ {
+ }
+ static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
+ 					__ticket_t ticket)
+ {
+ }
+ 
+ #endif /* CONFIG_PARAVIRT_SPINLOCKS */
+ 
 +static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 +{
 +	return lock.tickets.head == lock.tickets.tail;
 +}
 +
  /*
   * Ticket locks are conceptually two parts, one indicating the current head of
   * the queue, and the other indicating the current tail.  The lock is acquired