Provide a means to trap usages of per_cpu map variables before
they are set up. Define CONFIG_DEBUG_PER_CPU_MAPS to activate.

This results in a large slowdown, but helps to find certain types
of memory corruption.

Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
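For readers unfamiliar with the pattern, here is a minimal standalone
sketch of the same "trap use-before-setup" idea in plain C. It is not
part of the patch; all names in it are made up for illustration (the
patch applies the idea to x86_cpu_to_node_map via an early pointer):

/* Sketch only: a map accessor that aborts if called before setup. */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

static unsigned short early_map[NR_CPUS] = { 0, 0, 1, 1 };
static unsigned short *map_early_ptr = early_map; /* non-NULL until setup */
static unsigned short per_cpu_map[NR_CPUS];

static int cpu_to_node(int cpu)
{
	/* Debug check: catch callers that run before setup_maps(). */
	if (map_early_ptr) {
		fprintf(stderr, "cpu_to_node(%d): usage too early!\n", cpu);
		abort();
	}
	return per_cpu_map[cpu];
}

static void setup_maps(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		per_cpu_map[cpu] = early_map[cpu];
	map_early_ptr = NULL;	/* from here on cpu_to_node() is safe */
}

int main(void)
{
	setup_maps();
	printf("cpu 2 -> node %d\n", cpu_to_node(2));
	return 0;
}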
+config DEBUG_PER_CPU_MAPS
+ bool "Debug access to per_cpu maps"
+ depends on DEBUG_KERNEL
+ depends on X86_64_SMP
+ default n
+ help
+ Say Y to verify that the per_cpu map being accessed has
+ been set up. Adds a fair amount of code to kernel memory
+ and decreases performance.
+
+ Say N if unsure.
+
config DEBUG_RODATA
bool "Write protect kernel read-only data structures"
depends on DEBUG_KERNEL
void *x86_cpu_to_node_map_early_ptr;
DEFINE_PER_CPU(u16, x86_cpu_to_node_map) = NUMA_NO_NODE;
EXPORT_PER_CPU_SYMBOL(x86_cpu_to_node_map);
+#ifdef CONFIG_DEBUG_PER_CPU_MAPS
+EXPORT_SYMBOL(x86_cpu_to_node_map_early_ptr);
+#endif
u16 apicid_to_node[MAX_LOCAL_APIC] __cpuinitdata = {
	[0 ... MAX_LOCAL_APIC-1] = NUMA_NO_NODE
};
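For context (not part of this hunk): the early pointer is expected to
be retired once the per_cpu copies have been populated during boot, at
which point the debug check below stops firing. A hedged sketch of that
step; the function name setup_per_cpu_maps() is an assumption used only
for illustration:

/* Sketch only: assumed boot-time step that fills the per_cpu copies
 * and retires the early pointer so cpu_to_node() stops trapping. */
static void __init setup_per_cpu_maps(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		if (per_cpu_offset(cpu))
			per_cpu(x86_cpu_to_node_map, cpu) =
				((u16 *)x86_cpu_to_node_map_early_ptr)[cpu];
	}

	/* From here on, readers must use the per_cpu copy. */
	x86_cpu_to_node_map_early_ptr = NULL;
}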
static inline int cpu_to_node(int cpu)
{
+#ifdef CONFIG_DEBUG_PER_CPU_MAPS
+	/* Boot-time map still live: the per_cpu copy is not set up yet. */
+	if (x86_cpu_to_node_map_early_ptr) {
+		printk(KERN_NOTICE "cpu_to_node(%d): usage too early!\n", cpu);
+		BUG();
+	}
+#endif
	if (per_cpu_offset(cpu))
		return per_cpu(x86_cpu_to_node_map, cpu);
	else
		return NUMA_NO_NODE;
}
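Callers that legitimately run before the per-CPU areas exist would need
to read the boot-time map directly rather than trip the check above. A
hedged sketch of such an accessor; the helper name early_cpu_to_node()
is an assumption made for illustration, not something this hunk adds:

/* Sketch only: an early accessor that consults the boot-time map
 * while it is still live, and falls back to cpu_to_node() later. */
static inline int early_cpu_to_node(int cpu)
{
	if (x86_cpu_to_node_map_early_ptr)
		return ((u16 *)x86_cpu_to_node_map_early_ptr)[cpu];
	return cpu_to_node(cpu);
}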