mlx5e: don't assume anything on the irq affinity mappings of the device
authorSagi Grimberg <sagi@grimberg.me>
Thu, 13 Jul 2017 08:09:39 +0000 (11:09 +0300)
committerDoug Ledford <dledford@redhat.com>
Tue, 8 Aug 2017 18:53:05 +0000 (14:53 -0400)
mlx5e currently assumes that irq affinity spreads the first irq vectors
across the device's home node cpus. With the new generic affinity
mappings this is no longer the case, hence mlx5e should not rely on
this assumption anymore.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
drivers/net/ethernet/mellanox/mlx5/core/en_main.c

index d0e572df3a1b3d5deba0b7a2dddb63e58e8e16a0..2c4e41833e5512f481c61eeb1dd983959dbf88dd 100644 (file)
@@ -3793,18 +3793,8 @@ void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
                                   u32 *indirection_rqt, int len,
                                   int num_channels)
 {
-       int node = mdev->priv.numa_node;
-       int node_num_of_cores;
        int i;
 
-       if (node == -1)
-               node = first_online_node;
-
-       node_num_of_cores = cpumask_weight(cpumask_of_node(node));
-
-       if (node_num_of_cores)
-               num_channels = min_t(int, num_channels, node_num_of_cores);
-
        for (i = 0; i < len; i++)
                indirection_rqt[i] = i % num_channels;
 }
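
For illustration, the following is a minimal stand-alone sketch (not the kernel
code itself) of what the simplified helper now does: RQT entries are assigned
round-robin across all requested channels, with no clamping to the number of
cores on the device's home NUMA node. The function and variable names mirror
the kernel helper but the program is a hypothetical userspace example.

	#include <stdio.h>

	/* Stand-alone version of the simplified helper: the indirection
	 * table entries are spread round-robin over num_channels,
	 * regardless of which NUMA node the device sits on. */
	static void build_default_indir_rqt(unsigned int *indirection_rqt,
					    int len, int num_channels)
	{
		int i;

		for (i = 0; i < len; i++)
			indirection_rqt[i] = i % num_channels;
	}

	int main(void)
	{
		unsigned int rqt[8];
		int i;

		/* e.g. 8 RQT entries over 3 channels -> 0 1 2 0 1 2 0 1 */
		build_default_indir_rqt(rqt, 8, 3);

		for (i = 0; i < 8; i++)
			printf("%u ", rqt[i]);
		printf("\n");

		return 0;
	}

With the node-based clamp removed, channel selection is left entirely to the
caller, which matches the premise of the patch that the driver can no longer
infer anything from the generic irq affinity mappings.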