We noticed a drop in network performance because irq_desc is cacheline
aligned rather than internode aligned. When two e1000 NICs local to two
different nodes are allocated consecutive IRQ descriptors, false sharing
between the nodes cuts throughput to roughly 50% of the expected performance.
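As a rough illustration of the layout problem (a userspace sketch, not the
kernel code; the struct, field names, and the 64-byte/4096-byte sizes below
are assumptions in the spirit of vSMP-style systems):

/*
 * Hypothetical sketch of the false-sharing layout: with only L1-cacheline
 * alignment, consecutive descriptors fall inside the same internode cache
 * block, so writes from CPUs on two different nodes keep bouncing it.
 */
#include <stdio.h>

#define L1_CACHE_BYTES          64      /* assumed L1 line size */
#define INTERNODE_CACHE_BYTES   4096    /* assumed internode block size */

struct fake_irq_desc {
        unsigned int status;
        unsigned int depth;
        /* remaining descriptor fields omitted in this sketch */
} __attribute__((__aligned__(L1_CACHE_BYTES)));

static struct fake_irq_desc descs[4];

int main(void)
{
        unsigned long b0 = (unsigned long)&descs[0] / INTERNODE_CACHE_BYTES;
        unsigned long b1 = (unsigned long)&descs[1] / INTERNODE_CACHE_BYTES;

        /* Adjacent descriptors typically report the same internode block. */
        printf("desc[0] in block %lu, desc[1] in block %lu\n", b0, b1);
        return 0;
}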
Note that this patch does away with cacheline padding in the UP case, where
it does not seem useful.
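For reference, the alignment helpers involved are defined roughly along these
lines in include/linux/cache.h (a sketch, not the exact source; the values are
arch-dependent and INTERNODE_CACHE_SHIFT only exceeds L1_CACHE_SHIFT on
vSMP-style configurations):

#ifdef CONFIG_SMP
#define __cacheline_aligned_in_smp      __cacheline_aligned
#define ____cacheline_internodealigned_in_smp \
        __attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
#else
/* On UP the _in_smp variants expand to nothing, i.e. no padding at all. */
#define __cacheline_aligned_in_smp
#define ____cacheline_internodealigned_in_smp
#endif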
Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Shai Fultheim <shai@scalex86.org>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* @dir: /proc/irq/ procfs entry
* @affinity_entry: /proc/irq/smp_affinity procfs entry on SMP
* @name: flow handler name for /proc/interrupts output
- *
- * Pad this out to 32 bytes for cache and indexing reasons.
*/
struct irq_desc {
irq_flow_handler_t handle_irq;
struct proc_dir_entry *dir;
#endif
const char *name;
-} ____cacheline_aligned;
+} ____cacheline_internodealigned_in_smp;
extern struct irq_desc irq_desc[NR_IRQS];
*
* Controller mappings for all interrupt sources:
*/
-struct irq_desc irq_desc[NR_IRQS] __cacheline_aligned = {
+struct irq_desc irq_desc[NR_IRQS] __cacheline_aligned_in_smp = {
[0 ... NR_IRQS-1] = {
.status = IRQ_DISABLED,
.chip = &no_irq_chip,