s390/numa: move initial setup of node_to_cpumask_map
Author:     Martin Schwidefsky <schwidefsky@de.ibm.com>
AuthorDate: Tue, 31 Jul 2018 14:14:18 +0000 (16:14 +0200)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 5 Sep 2018 07:20:10 +0000 (09:20 +0200)
commit fb7d7518b0d65955f91c7b875c36eae7694c69bd upstream.

The numa_init_early initcall sets node_to_cpumask_map[0] to the full
cpu_possible_mask. Unfortunately this early_initcall is too late: the
NUMA setup for numa=emu is done even earlier. The order of calls is
numa_setup() -> emu_update_cpu_topology(), then the early_initcalls(),
followed by sched_init_domains().
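
For illustration only, a minimal userspace sketch of that ordering
problem; the names, node count and CPU masks are simplified stand-ins
for the kernel structures, not actual kernel code:

	#include <stdio.h>

	#define MAX_NODES 2
	static unsigned long node_to_cpumask_map[MAX_NODES];
	static const unsigned long cpu_possible_mask = 0xfUL; /* CPUs 0-3 */

	/* numa=emu setup: distribute the CPUs over two emulated nodes */
	static void emu_update_cpu_topology(void)
	{
		node_to_cpumask_map[0] = 0x3UL;	/* CPUs 0,1 -> node 0 */
		node_to_cpumask_map[1] = 0xcUL;	/* CPUs 2,3 -> node 1 */
	}

	/* the removed early_initcall: runs after the emu setup and
	 * clobbers node 0 with all possible CPUs */
	static void numa_init_early(void)
	{
		node_to_cpumask_map[0] = cpu_possible_mask;
	}

	int main(void)
	{
		emu_update_cpu_topology();  /* numa_setup() path, early */
		numa_init_early();          /* early_initcall(), too late */
		/* CPUs 2,3 now appear in both nodes; this inconsistent
		 * map is what the scheduler domain setup trips over */
		printf("node 0: %#lx, node 1: %#lx\n",
		       node_to_cpumask_map[0], node_to_cpumask_map[1]);
		return 0;
	}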

Starting with commit 051f3ca02e46432c0965e8948f00c07d8a2f09c0
("sched/topology: Introduce NUMA identity node sched domain")
the incorrect node_to_cpumask_map[0] breaks the scheduler domain
setup and the kernel panics with an oops.

Cc: <stable@vger.kernel.org> # v4.15+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
index f576f1073378f2ac6d5c0f0913e3974d3c768621..0dac2640c3a72cd48ea58574be646e36cc047b9d 100644
--- a/arch/s390/numa/numa.c
+++ b/arch/s390/numa/numa.c
@@ -133,26 +133,14 @@ void __init numa_setup(void)
 {
        pr_info("NUMA mode: %s\n", mode->name);
        nodes_clear(node_possible_map);
+       /* Initially attach all possible CPUs to node 0. */
+       cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
        if (mode->setup)
                mode->setup();
        numa_setup_memory();
        memblock_dump_all();
 }
 
-/*
- * numa_init_early() - Initialization initcall
- *
- * This runs when only one CPU is online and before the first
- * topology update is called for by the scheduler.
- */
-static int __init numa_init_early(void)
-{
-       /* Attach all possible CPUs to node 0 for now. */
-       cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
-       return 0;
-}
-early_initcall(numa_init_early);
-
 /*
  * numa_init_late() - Initialization initcall
  *