Fugang Duan [Tue, 16 Sep 2014 21:18:53 +0000 (05:18 +0800)]
net:fec: increase DMA queue number
When interrupt coalescing is enabled, 8 BDs are not enough.
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Tue, 16 Sep 2014 21:18:52 +0000 (05:18 +0800)]
net: fec: add interrupt coalescence feature support
i.MX6 SX supports the interrupt coalescing feature.
By default, initialize the interrupt coalescing frame count threshold and
timer threshold.
Supply the ethtool interfaces below for user tuning to improve
enet performance:
rx_max_coalesced_frames
rx_coalesce_usecs
tx_max_coalesced_frames
tx_coalesce_usecs
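For orientation, these map onto the standard fields of struct ethtool_coalesce
that a driver's get_coalesce/set_coalesce ethtool_ops fill in and apply. A
minimal sketch of the get side (the fec_enet_private fields named *_itr are
assumptions for illustration, not taken from the patch):
    static int fec_enet_get_coalesce(struct net_device *ndev,
                                     struct ethtool_coalesce *ec)
    {
            struct fec_enet_private *fep = netdev_priv(ndev);

            ec->rx_coalesce_usecs       = fep->rx_time_itr;  /* assumed field name */
            ec->rx_max_coalesced_frames = fep->rx_pkts_itr;  /* assumed field name */
            ec->tx_coalesce_usecs       = fep->tx_time_itr;  /* assumed field name */
            ec->tx_max_coalesced_frames = fep->tx_pkts_itr;  /* assumed field name */

            return 0;
    }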
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Tue, 16 Sep 2014 21:18:51 +0000 (05:18 +0800)]
net: fec: refine error handle of parser queue number from DT
Check the tx and rx queue counts separately.
Fix typos: "Invalidate" and "fail".
Change pr_err to pr_warn.
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Tue, 16 Sep 2014 19:35:35 +0000 (12:35 -0700)]
sparc: bpf_jit: add SKF_AD_PKTTYPE support to JIT
commit 233577a22089 ("net: filter: constify detection of pkt_type_offset")
allows us to implement simple PKTTYPE support in sparc JIT
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Tue, 16 Sep 2014 18:34:18 +0000 (02:34 +0800)]
net: fec: fix build error at m68k platform
reproduce:
wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout 4d494cdc92b3b9a0f5fb9e1560810fa27d5a0489
make.cross ARCH=m68k m5272c3_defconfig
make.cross ARCH=m68k
drivers/net/ethernet/freescale/fec.h:262:0: warning: "FEC_R_DES_START" redefined
#define FEC_R_DES_START(X) ((X == 1) ? FEC_R_DES_START_1 : \
^
drivers/net/ethernet/freescale/fec.h:158:0: note: this is the location of the previous definition
#define FEC_R_DES_START 0x3d0 /* Receive descriptor ring */
^
drivers/net/ethernet/freescale/fec.h:265:0: warning: "FEC_X_DES_START" redefined
#define FEC_X_DES_START(X) ((X == 1) ? FEC_X_DES_START_1 : \
...
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Tue, 16 Sep 2014 07:33:42 +0000 (00:33 -0700)]
net: sched: cls_cgroup need tcf_exts_init in all cases
This ensures tcf_exts_init() is called in all cases.
Fixes: 952313bd62589cae216a57 ("net: sched: cls_cgroup use RCU")
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 16 Sep 2014 20:21:48 +0000 (16:21 -0400)]
Merge branch 'net_next_ovs' of git://git./linux/kernel/git/pshelar/openvswitch
Pravin B Shelar says:
====================
Open vSwitch
The following patches add recirculation and hash actions to OVS.
The first patch removes a pointer to a stack object. The next three patches
do code restructuring which is required for the last patch.
The recirculation implementation is changed, according to comments from
David Miller, to avoid using recursive calls in OVS. It uses a
queue to record the recirc action, and the deferred recirc is executed at
the end of the current actions execution.
v1-v2:
Changed subsystem name in subject to openvswitch
v2-v3:
Added patch to remove pkt_key pointer from skb->cb.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Tue, 16 Sep 2014 06:31:42 +0000 (23:31 -0700)]
net: sched: cls_fw: add missing tcf_exts_init call in fw_change()
When allocating a new structure we also need to call tcf_exts_init
to initialize exts.
A follow up patch might be in order to remove some of this code
and do tcf_exts_assign(). With this we could remove the
tcf_exts_init/tcf_exts_change pattern for some of the classifiers.
As part of the future tcf_actions RCU series this will need to be
done. For now fix the call here.
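A minimal sketch of the kind of initialization the fix adds (the allocation
context and exact placement in fw_change() are assumed, not quoted from the
patch):
    /* sketch: allocate the new filter and initialize its extensions
     * before any tcf_exts_change()/validation runs against it */
    f = kzalloc(sizeof(*f), GFP_KERNEL);
    if (f == NULL)
            return -ENOBUFS;

    tcf_exts_init(&f->exts, TCA_FW_ACT, TCA_FW_POLICE);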
Fixes: e35a8ee5993ba81fd6c0 ("net: sched: fw use RCU")
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Tue, 16 Sep 2014 06:31:17 +0000 (23:31 -0700)]
net: sched: cls_cgroup fix possible memory leak of 'new'
tree: git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git master
head:   54996b529ab70ca1d6f40677cd2698c4f7127e87
commit: c7953ef23042b7c4fc2be5ecdd216aacff6df5eb [625/646] net: sched: cls_cgroup use RCU
net/sched/cls_cgroup.c:130 cls_cgroup_change() warn: possible memory leak of 'new'
net/sched/cls_cgroup.c:135 cls_cgroup_change() warn: possible memory leak of 'new'
net/sched/cls_cgroup.c:139 cls_cgroup_change() warn: possible memory leak of 'new'
Fixes: c7953ef23042b7c4fc2be5ecdd216aac ("net: sched: cls_cgroup use RCU")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Tue, 16 Sep 2014 06:30:49 +0000 (23:30 -0700)]
net: sched: cls_u32 add missing rcu_assign_pointer and annotation
Add the missing rcu_assign_pointer() and the missing __rcu annotation for ht_up
in cls_u32.c.
Caught by the kbuild bot:
>> net/sched/cls_u32.c:378:36: sparse: incorrect type in initializer (different address spaces)
net/sched/cls_u32.c:378:36: expected struct tc_u_hnode *ht
net/sched/cls_u32.c:378:36: got struct tc_u_hnode [noderef] <asn:4>*ht_up
>> net/sched/cls_u32.c:610:54: sparse: incorrect type in argument 4 (different address spaces)
net/sched/cls_u32.c:610:54: expected struct tc_u_hnode *ht
net/sched/cls_u32.c:610:54: got struct tc_u_hnode [noderef] <asn:4>*ht_up
>> net/sched/cls_u32.c:684:18: sparse: incorrect type in assignment (different address spaces)
net/sched/cls_u32.c:684:18: expected struct tc_u_hnode [noderef] <asn:4>*ht_up
net/sched/cls_u32.c:684:18: got struct tc_u_hnode *[assigned] ht
>> net/sched/cls_u32.c:359:18: sparse: dereference of noderef expression
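For reference, the general shape of this kind of fix (the layout shown is a
sketch, not a quote of cls_u32.c):
    struct tc_u_knode {
            struct tc_u_hnode __rcu *ht_up;  /* __rcu annotation for sparse */
            /* ... */
    };

    /* writer side, under RTNL: publish with the required memory barrier */
    rcu_assign_pointer(n->ht_up, ht);

    /* reader side that holds RTNL rather than rcu_read_lock() */
    ht = rtnl_dereference(n->ht_up);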
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Tue, 16 Sep 2014 06:30:26 +0000 (23:30 -0700)]
net: sched: fix unused cpu variable
The kbuild test robot reported an unused variable 'cpu' in cls_u32.c
after the patch below. This happens when the PERF and MARK config
options are disabled.
Fix this by using separate variables for perf and mark
and defining the cpu variable inside the ifdef logic.
Fixes: 459d5f626da7 ("net: sched: make cls_u32 per cpu")
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 23:43:43 +0000 (16:43 -0700)]
net_sched: fix a null pointer dereference in tcindex_set_parms()
This patch fixes the following crash:
[ 42.199159] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
[ 42.200027] IP: [<ffffffff817e3fc4>] tcindex_set_parms+0x45c/0x526
[ 42.200027] PGD d2319067 PUD d4ffe067 PMD 0
[ 42.200027] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 42.200027] CPU: 0 PID: 541 Comm: tc Not tainted 3.17.0-rc4+ #603
[ 42.200027] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 42.200027] task: ffff8800d22d2670 ti: ffff8800ce790000 task.ti: ffff8800ce790000
[ 42.200027] RIP: 0010:[<ffffffff817e3fc4>]  [<ffffffff817e3fc4>] tcindex_set_parms+0x45c/0x526
[ 42.200027] RSP: 0018:ffff8800ce793898  EFLAGS: 00010202
[ 42.200027] RAX: 0000000000000001 RBX: ffff8800d1786498 RCX: 0000000000000000
[ 42.200027] RDX: ffffffff82114ec8 RSI: ffffffff82114ec8 RDI: ffffffff82114ec8
[ 42.200027] RBP: ffff8800ce793958 R08: 00000000000080d0 R09: 0000000000000001
[ 42.200027] R10: ffff8800ce7939a0 R11: 0000000000000246 R12: ffff8800d017d238
[ 42.200027] R13: 0000000000000018 R14: ffff8800d017c6a0 R15: ffff8800d1786620
[ 42.200027] FS:  00007f4e24539740(0000) GS:ffff88011a600000(0000) knlGS:0000000000000000
[ 42.200027] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 42.200027] CR2: 0000000000000018 CR3: 00000000cff38000 CR4: 00000000000006f0
[ 42.200027] Stack:
[ 42.200027]  ffff8800ce0949f0 0000000000000000 0000000200000003 ffff880000000000
[ 42.200027]  ffff8800ce7938b8 ffff8800ce7938b8 0000000600000007 0000000000000000
[ 42.200027]  ffff8800ce7938d8 ffff8800ce7938d8 0000000600000007 ffff8800ce0949f0
[ 42.200027] Call Trace:
[ 42.200027]  [<ffffffff817e4169>] tcindex_change+0xdb/0xee
[ 42.200027]  [<ffffffff817c16ca>] tc_ctl_tfilter+0x44d/0x63f
[ 42.200027]  [<ffffffff8179d161>] rtnetlink_rcv_msg+0x181/0x194
[ 42.200027]  [<ffffffff8179cf9d>] ? rtnl_lock+0x17/0x19
[ 42.200027]  [<ffffffff8179cfe0>] ? __rtnl_unlock+0x17/0x17
[ 42.200027]  [<ffffffff817ee296>] netlink_rcv_skb+0x49/0x8b
[ 43.462494]  [<ffffffff8179cfc2>] rtnetlink_rcv+0x23/0x2a
[ 43.462494]  [<ffffffff817ec8df>] netlink_unicast+0xc7/0x148
[ 43.462494]  [<ffffffff817ed413>] netlink_sendmsg+0x5cb/0x63d
[ 43.462494]  [<ffffffff810ad781>] ? mark_lock+0x2e/0x224
[ 43.462494]  [<ffffffff817757b8>] __sock_sendmsg_nosec+0x25/0x27
[ 43.462494]  [<ffffffff81778165>] sock_sendmsg+0x57/0x71
[ 43.462494]  [<ffffffff81152bbd>] ? might_fault+0x57/0xa4
[ 43.462494]  [<ffffffff81152c06>] ? might_fault+0xa0/0xa4
[ 43.462494]  [<ffffffff81152bbd>] ? might_fault+0x57/0xa4
[ 43.462494]  [<ffffffff817838fd>] ? verify_iovec+0x69/0xb7
[ 43.462494]  [<ffffffff817784f8>] ___sys_sendmsg+0x21d/0x2bb
[ 43.462494]  [<ffffffff81009db3>] ? native_sched_clock+0x35/0x37
[ 43.462494]  [<ffffffff8109ab53>] ? sched_clock_local+0x12/0x72
[ 43.462494]  [<ffffffff810ad781>] ? mark_lock+0x2e/0x224
[ 43.462494]  [<ffffffff8109ada4>] ? sched_clock_cpu+0xa0/0xb9
[ 43.462494]  [<ffffffff810aee37>] ? __lock_acquire+0x5fe/0xde4
[ 43.462494]  [<ffffffff8119f570>] ? rcu_read_lock_held+0x36/0x38
[ 43.462494]  [<ffffffff8119f75a>] ? __fcheck_files.isra.7+0x4b/0x57
[ 43.462494]  [<ffffffff8119fbf2>] ? __fget_light+0x30/0x54
[ 43.462494]  [<ffffffff81779012>] __sys_sendmsg+0x42/0x60
[ 43.462494]  [<ffffffff81779042>] SyS_sendmsg+0x12/0x1c
[ 43.462494]  [<ffffffff819d24d2>] system_call_fastpath+0x16/0x1b
'p->h' could be NULL while 'cp->h' is always up to date.
Fixes: 331b72922c5f58d48fd ("net: sched: RCU cls_tcindex")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-By: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 23:43:42 +0000 (16:43 -0700)]
net_sched: fix memory leak in cls_tcindex
Fixes: 331b72922c5f58d48fd ("net: sched: RCU cls_tcindex")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-By: John Fastabend <john.r.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Zhou [Tue, 16 Sep 2014 02:37:25 +0000 (19:37 -0700)]
openvswitch: Add recirc and hash action.
The recirc action allows a packet to re-enter openvswitch processing.
Currently openvswitch looks up a flow for each received packet and executes a
set of actions on that packet; with the help of the recirc action we can
process/modify the packet and recirculate it back into openvswitch
for another pass.
The OVS hash action calculates a 5-tuple hash and sets it in the flow-key
hash. This can be used along with recirculation for distributing
packets among different ports of bond devices.
For example:
OVS bonding can use following actions:
Match on: bond flow; Action: hash, recirc(id)
Match on: recirc-id == id and hash lower bits == a;
Action: output port_bond_a
Signed-off-by: Andy Zhou <azhou@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Andy Zhou [Tue, 16 Sep 2014 02:33:50 +0000 (19:33 -0700)]
openvswitch: simplify sample action implementation
The current sample() function implementation is more complicated
than necessary in handling single user space action optimization
and skb reference counting. There are no functional changes.
Signed-off-by: Andy Zhou <azhou@nicira.com>
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Pravin B Shelar [Tue, 16 Sep 2014 02:28:44 +0000 (19:28 -0700)]
openvswitch: Use tun_key only for egress tunnel path.
Currently tun_key is used for passing tunnel information
on both the ingress and egress paths, which causes confusion. The following
patch removes its use on the ingress path, making it an egress-only parameter.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Andy Zhou <azhou@nicira.com>
Pravin B Shelar [Tue, 16 Sep 2014 02:20:31 +0000 (19:20 -0700)]
openvswitch: refactor ovs flow extract API.
OVS flow extract is called on the packet receive and packet
execute code paths. The following patch defines a separate API
for extracting the flow key on the packet execute code path.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Acked-by: Andy Zhou <azhou@nicira.com>
Pravin B Shelar [Tue, 16 Sep 2014 02:15:28 +0000 (19:15 -0700)]
openvswitch: Remove pkt_key from OVS_CB
OVS keeps a pointer to the packet key in skb->cb, but the packet key is
stored on the stack. This makes the code a bit tricky, so it is better to
get rid of the pointer.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Florian Fainelli [Mon, 15 Sep 2014 21:48:08 +0000 (14:48 -0700)]
net: dsa: fix mii_bus to host_dev replacement
dsa_of_probe() still used cd->mii_bus instead of cd->host_dev when
building with CONFIG_OF=y. Fix this by making the replacement here as
well.
Fixes: b4d2394d01b ("dsa: Replace mii_bus with a generic host device")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 21:06:49 +0000 (14:06 -0700)]
net_sched: use tcindex_filter_result_init()
Fixes: 331b72922c5f58d48fd ("net: sched: RCU cls_tcindex")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 21:06:48 +0000 (14:06 -0700)]
net_sched: fix suspicious RCU usage in tcindex_classify()
This patch fixes the following kernel warning:
[ 44.805900] [ INFO: suspicious RCU usage. ]
[ 44.808946] 3.17.0-rc4+ #610 Not tainted
[ 44.811831] -------------------------------
[ 44.814873] net/sched/cls_tcindex.c:84 suspicious rcu_dereference_check() usage!
Fixes: 331b72922c5f58d48fd ("net: sched: RCU cls_tcindex")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 21:06:46 +0000 (14:06 -0700)]
net_sched: fix an allocation bug in tcindex_set_parms()
Fixes: 331b72922c5f58d48fd ("net: sched: RCU cls_tcindex")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Mon, 15 Sep 2014 21:21:50 +0000 (14:21 -0700)]
net_sched: fix suspicious RCU usage in cls_bpf_classify()
Fixes: 1f947bf151e90ec0baad2948 ("net: sched: rcu'ify cls_bpf")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 15 Sep 2014 21:24:29 +0000 (17:24 -0400)]
Merge branch 'dsa-next'
Alexander Duyck says:
====================
DSA Cleanups
This patch series does two things: first, it cleans up the tag_protocol and
protocol ops being configured separately; second, it addresses the desire
to split DSA away from relying on an MII bus.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Mon, 15 Sep 2014 17:00:27 +0000 (13:00 -0400)]
dsa: Replace mii_bus with a generic host device
This change makes it so that instead of passing and storing a mii_bus we
instead pass and store a host_dev. From there we can test to determine the
exact type of device, and can verify it is the correct device for our switch.
So for example it would be possible to pass a device pointer from a pci_dev
and instead of checking for a PHY ID we could check for a vendor and/or device
ID.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Mon, 15 Sep 2014 17:00:19 +0000 (13:00 -0400)]
dsa: Split ops up, and avoid assigning tag_protocol and receive separately
This change addresses several issues.
First, it was possible to set tag_protocol without setting the ops pointer.
To correct that I have reordered things so that rcv is now populated before
we set tag_protocol.
Second, it didn't make much sense to keep setting the device ops each time a
new slave was registered. So by moving the receive portion out into root
switch initialization that issue should be addressed.
Third, I wanted to avoid sending tags if the rcv pointer was not registered
so I changed the tag check to verify if the rcv function pointer is set on
the root tree. If it is then we start sending DSA tagged frames.
Finally I split the device ops pointer in the structures into two spots. I
placed the rcv function pointer in the root switch since this makes it
easiest to access from there, and I placed the xmit function pointer in the
slave for the same reason.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 15 Sep 2014 21:19:55 +0000 (17:19 -0400)]
Merge branch 'bonding-cleanups'
Nikolay Aleksandrov says:
====================
bonding: style, comment and assertion changes
This is a small and simple patch-set that doesn't introduce (hopefully) any
functional changes, but only stylistic and semantic ones.
Patch 01 simply uses the already provided __rlb_next_rx_slave function inside
rlb_next_rx_slave(), thus removing the duplication of code.
Patch 02 changes all comments that I could find to netdev style, removes
some outdated ones and fixes a few more small cosmetic issues (new line
after declaration, braces around if; else and such)
Patch 03 removes one extra ASSERT_RTNL() because we already have it in the
parent function and consolidates two other ASSERT_RTNL()s to the function
that is exported and supposed to be called with RTNL anyway.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 15 Sep 2014 15:19:35 +0000 (17:19 +0200)]
bonding: consolidate ASSERT_RTNL()s and remove the unnecessary
Consolidate the calls to ASSERT_RTNL() before bond_select_active_slave()
inside bond_select_active_slave() itself and remove the ASSERT_RTNL()
from bond_hw_addr_swap() as it's not exported and its only caller -
bond_change_active_slave() already has an ASSERT_RTNL().
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 15 Sep 2014 15:19:34 +0000 (17:19 +0200)]
bonding: trivial: style and comment fixes
First adjust a couple of locking comments that were left inaccurate,
then adjust comments to use the netdev styling and remove extra new
lines where necessary and add a couple of new lines between declarations
and code. These are all trivial styling changes, no functional change.
Also removed a couple of outdated or obvious comments.
This patch is by no means a complete fix of all netdev style violations
but it gets the bonding closer.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Mon, 15 Sep 2014 15:19:33 +0000 (17:19 +0200)]
bonding: consolidate the two rlb_next_rx_slave functions into one
__rlb_next_rx_slave() is a copy of rlb_next_rx_slave() with the
difference that it uses rcu primitives to walk the slave list. We don't
need the two functions and can make rlb_next_rx_slave() a wrapper for
callers which hold RTNL.
So add a comment and an ASSERT_RTNL() to make the intent clear.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 15 Sep 2014 18:41:12 +0000 (14:41 -0400)]
Merge branch 'tcpflags'
Eric Dumazet says:
====================
tcp: no longer keep around headers in input path
Looking at tcp_try_coalesce() I was wondering why I did :
if (tcp_hdr(from)->fin)
return false;
The answer would be to allow the aggregation if we simply OR the FIN and PSH
flags possibly present in @from into the @to packet. (Note a change is also
needed in skb_try_coalesce() to avoid calling skb_put() with a 0 len.)
Then, looking at tcp_recvmsg(), I realized we access tcp_hdr(skb)->syn
(and maybe tcp_hdr(skb)->fin) for every packet we process from socket
receive queue.
We have to understand TCP flags are cold in cpu caches most of the time
(assuming TCP timestamps, and that application calls recvmsg() a long
time after incoming packet was processed), and bringing a whole
cache line only to access one bit is not very nice.
It would make sense to use in TCP input path TCP_SKB_CB(skb)->tcp_flags
as we do in output path.
This saves one cache line miss, and TCP tcp_collapse() can avoid dealing
with the headers.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Sep 2014 11:19:53 +0000 (04:19 -0700)]
tcp: do not copy headers in tcp_collapse()
tcp_collapse() wants to shrink skb so that the overhead is minimal.
Now that we store the tcp flags in TCP_SKB_CB(skb)->tcp_flags, we no longer
need to keep the full headers around.
The whole available space is dedicated to the payload.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Sep 2014 11:19:52 +0000 (04:19 -0700)]
tcp: allow segment with FIN in tcp_try_coalesce()
We can allow a segment with FIN to be aggregated,
if we take care to add tcp flags,
and if skb_try_coalesce() takes care of zero sized skbs.
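A sketch of the flag propagation described above (the exact placement inside
tcp_try_coalesce() is assumed, not quoted from the patch):
    /* instead of:  if (tcp_hdr(from)->fin) return false; */
    TCP_SKB_CB(to)->tcp_flags |= TCP_SKB_CB(from)->tcp_flags &
                                 (TCPHDR_FIN | TCPHDR_PSH);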
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 15 Sep 2014 11:19:51 +0000 (04:19 -0700)]
tcp: use TCP_SKB_CB(skb)->tcp_flags in input path
The TCP input path does not currently use TCP_SKB_CB(skb)->tcp_flags,
which is only used in the output path.
tcp_recvmsg() looks at tcp_hdr(skb)->syn for every skb found in the receive queue,
and that is unfortunate because this bit is located in a cache line right before
the payload.
We can simplify TCP by copying the tcp flags into TCP_SKB_CB(skb)->tcp_flags.
This patch does so, and avoids the cache line miss in tcp_recvmsg()
Following patches will
- allow a segment with FIN being coalesced in tcp_try_coalesce()
- simplify tcp_collapse() by not copying the headers.
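The core of the idea, sketched (placement in the IPv4/IPv6 receive paths and
the tcp_recvmsg() hunk are assumptions based on the description above):
    /* in the receive path, while the TCP header is hot in cache */
    TCP_SKB_CB(skb)->tcp_flags = tcp_flag_byte(th);

    /* later, tcp_recvmsg() can test the cached copy instead of tcp_hdr(skb)->syn */
    if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)
            offset--;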
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rickard Strandqvist [Sun, 14 Sep 2014 17:34:47 +0000 (19:34 +0200)]
net: ethernet: neterion: vxge: vxge-main.c: Cleaning up missing null-terminate in conjunction with strncpy
Replace strncpy with strlcpy to avoid strings that lack NUL termination.
Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rickard Strandqvist [Sun, 14 Sep 2014 17:32:42 +0000 (19:32 +0200)]
net: ethernet: freescale: fec_main.c: Cleaning up missing null-terminate in conjunction with strncpy
Replace strncpy with strlcpy to avoid strings that lack NUL termination.
Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Sat, 13 Sep 2014 20:38:27 +0000 (22:38 +0200)]
bna: use container_of to resolve bufdesc_ex from bufdesc
Use container_of instead of casting first structure member.
Compiled but untested.
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Sat, 13 Sep 2014 20:38:26 +0000 (22:38 +0200)]
net: fec: use container_of to resolve bufdesc_ex from bufdesc
Use container_of instead of casting first structure member.
ARM cross-compiled but untested.
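A sketch of the pattern (the embedded member name "desc" is assumed from the
driver's layout, not quoted from the patch):
    /* before: relied on struct bufdesc being the first member
     *     ebdp = (struct bufdesc_ex *)bdp;
     * after: resolve the containing structure explicitly */
    struct bufdesc_ex *ebdp = container_of(bdp, struct bufdesc_ex, desc);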
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sasha Levin [Sat, 13 Sep 2014 04:06:30 +0000 (00:06 -0400)]
net: bpf: correctly handle errors in sk_attach_filter()
Commit "net: bpf: make eBPF interpreter images read-only" has changed bpf_prog
to be vmalloc()ed but never handled some of the error paths of the old code.
On error within sk_attach_filter (which userspace can easily trigger), we'd
kfree() the vmalloc()ed memory, and leak the internal bpf_work_struct.
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Sat, 13 Sep 2014 06:12:46 +0000 (23:12 -0700)]
netdevice: Support DSA tagging when DSA is built as a module
This change corrects an error seen when DSA tagging is built as a module.
Without this change it is not possible to get XDSA tagged frames as the
test for tagging is stripped by the #ifdef check.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bo Shen [Fri, 12 Sep 2014 23:57:49 +0000 (01:57 +0200)]
net/macb: Add hardware revision information during probe
Print the IP revision when probing.
Signed-off-by: Bo Shen <voice.shen@atmel.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 21:32:29 +0000 (17:32 -0400)]
Merge branch 'fec-next'
Frank Li says:
====================
net: fec: imx6sx multiqueue support
These patches enable i.MX6SX multi queue support.
i.MX6SX supports 3 queues and the AVB feature.
Change from v3 to v4
- use "unsigned int" instead of "unsigned"
Change from v2 to v3
- fixed alignment requirements for ARM and non-ARM platforms
Change from v1 to v2
- Change num_tx_queue to unsigned int
- Avoid blocking non-dt platforms
- remove call to netif_set_real_num_rx_queues
- separate the multiqueue patch into two parts: one is the tx and rx handling with fixed queue 0,
the other initializes the multiple queues
- use two different alignments for the tx and rx paths
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Fri, 12 Sep 2014 21:00:57 +0000 (05:00 +0800)]
ARM: dts: imx6sx: add multi-queue support enet
Enable 3-queue support for ethernet.
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Fri, 12 Sep 2014 21:00:56 +0000 (05:00 +0800)]
ARM: Documentation: Update fec dts binding doc
This patch updates the fec devicetree binding doc, adding the optional
properties "fsl,num-tx-queues" and "fsl,num-rx-queues".
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:55 +0000 (05:00 +0800)]
net: fec: init complete variable in early to avoid kernel dump
Software clears the MDIO interrupt before MDIO bus access, but
the MAC still generates an MDIO interrupt. The issue only happens on
the imx6slx chip.
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc1-00399-g0bcad17 #315
Backtrace:
[<800121fc>] (dump_backtrace) from [<800124e0>] (show_stack+0x18/0x1c)
 r6:8096e534 r5:8096e534 r4:00000000 r3:00000000
[<800124c8>] (show_stack) from [<806a4c60>] (dump_stack+0x8c/0xa4)
[<806a4bd4>] (dump_stack) from [<80060ab8>] (__lock_acquire+0x1814/0x1c40)
 r6:be078000 r5:be074000 r4:be03f6e4 r3:be078000
[<8005f2a4>] (__lock_acquire) from [<800616e0>] (lock_acquire+0x70/0x84)
 r10:809ada33 r9:be010600 r8:00000096 r7:00000001 r6:be074000 r5:00000000
 r4:60000193
[<80061670>] (lock_acquire) from [<806abb20>] (_raw_spin_lock_irqsave+0x40/0x54)
 r7:00000000 r6:8005a3f8 r5:00000193 r4:be03f6d4
[<806abae0>] (_raw_spin_lock_irqsave) from [<8005a3f8>] (complete+0x1c/0x4c)
 r6:80950904 r5:be03f6d0 r4:be03f6d4
[<8005a3dc>] (complete) from [<8041b4c0>] (fec_enet_interrupt+0x128/0x164)
 r6:80950904 r5:00800000 r4:be03f000 r3:00000000
[<8041b398>] (fec_enet_interrupt) from [<8006aeac>] (handle_irq_event_percpu+0x38/0x13c)
 r6:00000000 r5:be01065c r4:be399e00 r3:8041b398
[<8006ae74>] (handle_irq_event_percpu) from [<8006aff4>] (handle_irq_event+0x44/0x64)
 r10:be03f000 r9:80989fe0 r8:00000000 r7:00000096 r6:be399e00 r5:be01065c
 r4:be010600
[<8006afb0>] (handle_irq_event) from [<8006e3e8>] (handle_fasteoi_irq+0xc8/0x1bc)
 r6:8096e764 r5:be01065c r4:be010600 r3:00000000
[<8006e320>] (handle_fasteoi_irq) from [<8006a63c>] (generic_handle_irq+0x30/0x44)
 r6:be074010 r5:80945e4c r4:00000096 r3:8006e320
[<8006a60c>] (generic_handle_irq) from [<8000f218>] (handle_IRQ+0x54/0xbc)
 r4:80950d74 r3:00000180
[<8000f1c4>] (handle_IRQ) from [<800086cc>] (gic_handle_irq+0x30/0x68)
 r8:be3ab478 r7:c080e100 r6:be075bd8 r5:80950eec r4:c080e10c r3:000000a0
[<8000869c>] (gic_handle_irq) from [<80013064>] (__irq_svc+0x44/0x5c)
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:54 +0000 (05:00 +0800)]
net: fec: change FEC alignment according to i.mx6 sx requirement
i.MX6 SX changes the FEC alignment requirement.
i.MX6 SX changes the internal bus from AHB to AXI.
It requires RX buffers to be 64-byte aligned,
and removes the TX buffer alignment requirement.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:53 +0000 (05:00 +0800)]
net:fec: Add fsl,imx6sx-fec compatible strings
Add compatible string "fsl,imx6sx-fec" for i.MX6SX.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Fri, 12 Sep 2014 21:00:52 +0000 (05:00 +0800)]
net: fec: add enet-avb IP support
i.MX6SX Enet-AVB supports 3 tx queues and 3 rx queues.
For tx queues: ring 0 -> best effort
ring 1 -> Class A
ring 2 -> Class B
For rx queues:
ring 0 -> best effort
ring 1 -> receive VLAN packet with classification match
ring 2 -> receive VLAN packet with classification match
Add enet-avb IP multiqueue support for the driver.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:51 +0000 (05:00 +0800)]
net:fec: Disable enet-avb MAC instead of reset MAC
The i.MX6SX enet uses the AXI bus, and resetting the MAC will make the system bus hang
if the ENET-AXI bus has a pending access (the AHB bus should not have such an issue).
So, disable the enet-AVB MAC instead of resetting the MAC itself.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Li [Fri, 12 Sep 2014 21:00:50 +0000 (05:00 +0800)]
net: fec: init multi queue data structure
Initialize all queues according to the queue number obtained from the DT file.
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: Duan Fugang <B38611@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:49 +0000 (05:00 +0800)]
net: fec: parse max queue number from dt file
By default, the tx/rx queue number is 1; the user can configure the queue number
in the DTS file like this:
fsl,num-tx-queues = <3>;
fsl,num-rx-queues = <3>;
Since the i.MX6SX enet-AVB IP supports multiple queues, use the multiqueue
interface to allocate and set up the Ethernet device.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:48 +0000 (05:00 +0800)]
net: fec: change data structure to support multiqueue
This patch just changes the data structures to support multiqueue.
Only 1 queue is enabled.
The Ethernet multiqueue mechanism can improve performance on SMP systems.
For a single hw queue, multiqueue can balance cpu load.
For multiple hw queues, multiple cores can process network packets in parallel;
refer to this article for the detailed advantages of multiqueue:
http://vger.kernel.org/~davem/davem_nyc09.pdf
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <frank.li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:47 +0000 (05:00 +0800)]
net:fec: add enet AVB feature macro define for imx6sx
Add enet AVB feature macro define for imx6sx.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Fri, 12 Sep 2014 21:00:46 +0000 (05:00 +0800)]
net:fec: add enet reference clock for i.MX 6SX chip
i.MX6sx enet has the below clocks for user config:
clk_ipg: ipg_clk_s, ipg_clk_mac0_s, 66Mhz
clk_ahb: enet system clock; it is the enet AXI clock for imx6sx.
For imx6sx, it is also the clock source for interrupt coalescing.
The clock range: 200Mhz ~ 266Mhz.
clk_ref: reference clock for tx and rx. For imx6sx enet RGMII mode,
the reference clock is 125Mhz, coming from an internal PLL or externally.
On the i.MX6sx-arm2 board, the clock comes from the internal PLL.
clk_ref is optional, depending on the board.
clk_enet_out: The clock can be output from the internal PLL. It can supply a 50Mhz
clock for the phy. clk_enet_out is optional, depending on chip and board.
clk_ptp: 1588 ts clock. It is optional, depending on the chip.
The patch adds clk_ref to distinguish the different clocks.
Signed-off-by: Fugang Duan <B38611@freescale.com>
Signed-off-by: Frank Li <Frank.Li@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Fri, 12 Sep 2014 21:58:44 +0000 (23:58 +0200)]
net: DSA: Marvell mv88e6171 switch driver
This is the Marvell driver with some cleanups by Claudio Leite
and myself.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Cc: Claudio Leite <leitec@staticky.com>
Signed-off-by: Claudio Leite <leitec@staticky.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 21:12:25 +0000 (17:12 -0400)]
Merge branch 'be2net-next'
Sathya Perla says:
====================
be2net: patch set
Patch 1 fixes some minor issues with log messages in be2net.
Patch 2 replaces strcpy() calls with strlcpy() to avoid possible buffer
overflow.
Patch 3 improves the RX buffer posting scheme for jumbo frames.
Patch 4 replaces the use of v0 of SET_FLOW_CONTROL cmd with v1 to receive
a definitive completion status from FW.
Patch 5 adds support for ethtool "-m" ethtool option.
Patch 6 fixes port-type reporting via ethtool get_settings for QSFP/SFP+
interfaces.
Patch 7 fixes the usage of MODIFY_EQD FW cmd to target a max of 8 EQs on
Lancer chip.
Patch 8 enables PCIe error reporting even for VFs.
Pls consider applying this patch set to net-next. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Kalesh AP [Fri, 12 Sep 2014 12:09:21 +0000 (17:39 +0530)]
be2net: enable PCIe error reporting on VFs too
Currently PCIe error reporting is enabled only on PFs. This patch enables
this feature on VFs too as Lancer VFs support it.
Signed-off-by: Kalesh AP <kalesh.purayil@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kalesh AP [Fri, 12 Sep 2014 12:09:20 +0000 (17:39 +0530)]
be2net: send a max of 8 EQs to be_cmd_modify_eqd() on Lancer
The MODIFY_EQ_DELAY FW cmd on Lancer is supported for a max of 8 EQs per cmd.
Signed-off-by: Kalesh AP <kalesh.purayil@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ravikumar Nelavelli [Fri, 12 Sep 2014 12:09:19 +0000 (17:39 +0530)]
be2net: fix port-type reporting in get_settings
Report the ethtool port-type/supported/advertising values based on the
cable_type for QSFP and SFP+ interfaces. The cable_type is parsed from
the transceiver data fetched from the FW.
Signed-off-by: Ravikumar Nelavelli <ravikumar.nelavelli@emulex.com>
Signed-off-by: Suresh Reddy <Suresh.Reddy@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mark Leonard [Fri, 12 Sep 2014 12:09:18 +0000 (17:39 +0530)]
be2net: add ethtool "-m" option support
This patch adds support for the dump-module-eeprom and module-info
ethtool options.
Signed-off-by: Mark Leonard <mark.leonard@emulex.com>
Signed-off-by: Suresh Reddy <Suresh.Reddy@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Suresh Reddy [Fri, 12 Sep 2014 12:09:17 +0000 (17:39 +0530)]
be2net: use v1 of SET_FLOW_CONTROL command
In some configurations the FW doesn't allow changing flow control settings
of a link. Unless a v1 version of the SET_FLOW_CONTROL cmd is used, the FW
doesn't report an error to the driver.
Signed-off-by: Suresh Reddy <Suresh.Reddy@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ajit Khaparde [Fri, 12 Sep 2014 12:09:16 +0000 (17:39 +0530)]
be2net: fix RX fragment posting for jumbo frames
In the RX path, the driver currently consumes up to 64 (budget) packets in
one NAPI sweep. When the size of the received packet is larger than a
fragment size (2K), more than one fragment is consumed for each packet.
As the driver currently posts a max of 64 fragments, all the consumed
fragments may not be replenished. This can cause avoidable drops in the RX path.
This patch fixes this by posting max(consumed_frags, 64) frags. This is
done only when there are at least 64 free slots in the RXQ.
Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com>
Signed-off-by: Kalesh AP <kalesh.purayil@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Fri, 12 Sep 2014 12:09:15 +0000 (17:39 +0530)]
be2net: replace strcpy with strlcpy
Replace strcpy with strlcpy, as it avoids a possible buffer overflow.
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Fri, 12 Sep 2014 12:09:14 +0000 (17:39 +0530)]
be2net: fix some log messages
This patch fixes the following minor issues with log messages in be2net:
1) Period is not required at the end of log message.
2) Remove "Unknown grp5 event" logs to reduce noise. The driver can safely
ignore async events from FW it's not interested in.
3) Reword a log message for better readability to say that SRIOV
"is disabled" rather than "not supported".
Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hannes Frederic Sowa [Fri, 12 Sep 2014 12:04:43 +0000 (14:04 +0200)]
net: filter: constify detection of pkt_type_offset
Currently we have 2 pkt_type_offset functions doing the same thing and
spread across the architecture files. Remove those and replace them
with a PKT_TYPE_OFFSET macro helper which gets the constant value from a
zero sized sk_buff member right in front of the bitfield with offsetof.
This new offset marker does not change the size of struct sk_buff.
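The trick is easiest to see on a toy structure; the following self-contained
sketch (names are illustrative, not the kernel's) shows how a zero-sized member
yields the byte offset of the following bitfield via offsetof() without growing
the struct:
    #include <stdio.h>
    #include <stddef.h>

    struct toy_skb {
            void *head;
            unsigned char __pkt_type_marker[0];  /* zero-sized, adds no bytes */
            unsigned char pkt_type:3,
                          other_bits:5;
    };

    #define TOY_PKT_TYPE_OFFSET()  offsetof(struct toy_skb, __pkt_type_marker)

    int main(void)
    {
            printf("pkt_type byte offset = %zu, struct size = %zu\n",
                   TOY_PKT_TYPE_OFFSET(), sizeof(struct toy_skb));
            return 0;
    }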
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Fri, 12 Sep 2014 04:18:09 +0000 (21:18 -0700)]
net: dsa: change tag_protocol to an enum
Now that we introduced an additional multiplexing/demultiplexing layer
with commit 3e8a72d1dae37 ("net: dsa: reduce number of protocol hooks")
that lives within the DSA code, we no longer need to have a given switch
driver's tag_protocol be an actual ethertype value; instead, we can
replace it with an enum: dsa_tag_protocol.
Do this replacement in the drivers, which allows us to get rid of the
cpu_to_be16()/htons() dance, and remove ETH_P_BRCMTAG since we do not
need it anymore.
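The resulting enum looks roughly like this (constant names recalled from the
DSA headers, so treat the exact list as an approximation rather than a quote):
    enum dsa_tag_protocol {
            DSA_TAG_PROTO_NONE = 0,
            DSA_TAG_PROTO_DSA,
            DSA_TAG_PROTO_TRAILER,
            DSA_TAG_PROTO_EDSA,
            DSA_TAG_PROTO_BRCM,
    };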
Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
hayeswang [Fri, 12 Sep 2014 02:43:11 +0000 (10:43 +0800)]
r8152: support VLAN
Support hw VLAN for tx and rx, and enable both by default.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Thu, 11 Sep 2014 23:12:57 +0000 (07:12 +0800)]
net: stmmac: fix return value check in socfpga_dwmac_parse_data()
In case of error, the function devm_ioremap_resource() returns
ERR_PTR() and never returns NULL. The NULL test in the return
value check should be replaced with IS_ERR().
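The correct pattern looks like this (a generic sketch of the API usage, not the
socfpga code itself):
    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))               /* devm_ioremap_resource() never returns NULL */
            return PTR_ERR(base);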
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:07:16 +0000 (15:07 -0700)]
ipv6: exit early in addrconf_notify() if IPv6 is disabled
If IPv6 is explicitly disabled before the interface comes up,
it makes no sense to continue when it comes up, even just to
print a message.
(I am not sure about other cases though, so I prefer not to touch them.)
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 20:38:53 +0000 (16:38 -0400)]
Merge branch 'ipv6-cleanups'
Cong Wang says:
====================
ipv6: clean up locking code in anycast and mcast
This patchset cleans up the locking code in anycast.c and mcast.c
and makes the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
v1 -> v2:
* refactor some code and make it in a separated patch
* update comments
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:16 +0000 (15:35 -0700)]
ipv6: refactor ipv6_dev_mc_inc()
Refactor out allocation and initialization and make
the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:15 +0000 (15:35 -0700)]
ipv6: update the comment in mcast.c
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:14 +0000 (15:35 -0700)]
ipv6: drop some rcu_read_lock in mcast
Similarly the code is already protected by rtnl lock.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:13 +0000 (15:35 -0700)]
ipv6: drop ipv6_sk_mc_lock in mcast
Similarly the code is already protected by rtnl lock.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:12 +0000 (15:35 -0700)]
ipv6: refactor __ipv6_dev_ac_inc()
Refactor out allocation and initialization and make
the refcount code more readable.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:11 +0000 (15:35 -0700)]
ipv6: clean up ipv6_dev_ac_inc()
Make it accept inet6_dev, and rename it to __ipv6_dev_ac_inc()
to reflect this change.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:10 +0000 (15:35 -0700)]
ipv6: remove ipv6_sk_ac_lock
Just move rtnl lock up, so that the anycast list can be protected
by rtnl lock now.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 11 Sep 2014 22:35:09 +0000 (15:35 -0700)]
ipv6: drop useless rcu_read_lock() in anycast
This code is now protected by the rtnl lock; the rcu read lock
is useless now.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 20:29:57 +0000 (16:29 -0400)]
Merge branch 'bonding-next'
Nikolay Aleksandrov says:
====================
bonding: get rid of curr_slave_lock
This is the second patch-set dealing with bond locking and the purpose here
is to convert curr_slave_lock into a spinlock called "mode_lock" which can
be used in the various modes for their specific needs. The first three
patches cleanup the use of curr_slave_lock and prepare it for the
conversion which is done in patch 4 and then the modes that were using
their own locks are converted to use the new "mode_lock" giving us the
opportunity to remove their locks.
This patch-set has been tested in each mode by running enslave/release of
slaves in parallel with traffic transmission and miimon=1 i.e. running
all the time. In fact this led to the discovery of a subtle bug related to
RCU which will be fixed in -net.
Also did an allmodconfig test just in case :-)
v2: fix bond_3ad_state_machine_handler's use of mode_lock and
curr_slave_lock
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:28 +0000 (22:49 +0200)]
bonding: adjust locking comments
Now that locks have been removed, remove some unnecessary comments and
adjust others to reflect reality. Also add a comment to "mode_lock" to
describe its current users and give a brief summary why they need it.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:27 +0000 (22:49 +0200)]
bonding: 3ad: convert to bond->mode_lock
Now that we have bond->mode_lock, we can remove the state_machine_lock
and use it in its place. There're no fast paths requiring the per-port
spinlocks so it should be okay to consolidate them into mode_lock.
Also move it inside the unbinding function as we don't want to expose
mode_lock outside of the specific modes.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:26 +0000 (22:49 +0200)]
bonding: alb: convert to bond->mode_lock
The ALB/TLB specific spinlocks are no longer necessary as we now have
bond->mode_lock for this purpose, so convert them and remove them from
struct alb_bond_info.
Also remove the unneeded lock/unlock functions and use spin_lock/unlock
directly.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:25 +0000 (22:49 +0200)]
bonding: convert curr_slave_lock to a spinlock and rename it
curr_slave_lock is now a misleading name, a much better name is
mode_lock as it'll be used for each mode's purposes and it's no longer
necessary to use a rwlock, a simple spinlock is enough.
Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:24 +0000 (22:49 +0200)]
bonding: clean curr_slave_lock use
Mostly all users of curr_slave_lock already have RTNL as we've discussed
previously so there's no point in using it, the one case where the lock
must stay is the 3ad code, in fact it's the only one.
It's okay to remove it from bond_do_fail_over_mac() as it's called with
RTNL and drops the curr_slave_lock anyway.
bond_change_active_slave() is one of the main places where
curr_slave_lock was used, it's okay to remove it as all callers use RTNL
these days before calling it, that's why we move the ASSERT_RTNL() in
the beginning to catch any potential offenders to this rule.
The RTNL argument actually applies to all of the places where
curr_slave_lock has been removed from in this patch.
Also remove the unnecessary bond_deref_active_protected() macro and use
rtnl_dereference() instead.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:23 +0000 (22:49 +0200)]
bonding: alb: remove curr_slave_lock
First in rlb_teach_disabled_mac_on_primary() it's okay to remove
curr_slave_lock as all callers except bond_alb_monitor() already hold
RTNL, and in case bond_alb_monitor() is executing we can at most have a
period with bad throughput (very unlikely though).
In bond_alb_monitor() it's okay to remove the read_lock as the slave
list is walked with RCU and the worst that could happen is another
transmitter at the same time and thus for a period which currently is 10
seconds (bond_alb.h: BOND_ALB_LP_TICKS).
And bond_alb_handle_active_change() is okay because it's always called
with RTNL. Removed the ASSERT_RTNL() because it'll be inserted in the
parent function in a following patch.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov [Thu, 11 Sep 2014 20:49:22 +0000 (22:49 +0200)]
bonding: 3ad: clean up curr_slave_lock usage
Remove the read_lock in bond_3ad_lacpdu_recv() since when the slave is
being released its rx_handler is removed before 3ad unbind, so even if
packets arrive, they won't see the slave in an inconsistent state.
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:38 +0000 (10:17 +0930)]
virtio_ring: unify direct/indirect code paths.
virtqueue_add() populates the virtqueue descriptor table from the sgs
given. If it uses an indirect descriptor table, then it puts a single
descriptor in the descriptor table pointing to the kmalloc'ed indirect
table where the sg is populated.
Previously vring_add_indirect() did the allocation and the simple
linear layout. We replace that with alloc_indirect() which allocates
the indirect table then chains it like the normal descriptor table so
we can reuse the core logic.
This slows down pktgen by less than 1/2 a percent (which uses direct
descriptors), as well as vring_bench, but it's far neater.
vring_bench before:
1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
vring_bench after:
1125610268-1183528965(1.14172e+09+/-8e+06)ns
pktgen before:
787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
pktgen after:
779988-790404(786391+/-2.5e+03)pps 361-366(364.35+/-1.3)Mb/sec (361914432-366747456(3.64885e+08+/-1.2e+06)bps) errors: 0
Now, if we force indirect descriptors by turning off any_header_sg
in virtio_net.c:
pktgen before:
713773-721062(718374+/-2.1e+03)pps 331-334(332.95+/-0.92)Mb/sec (331190672-334572768(3.33325e+08+/-9.6e+05)bps) errors: 0
pktgen after:
710542-719195(714898+/-2.4e+03)pps 329-333(331.15+/-1.1)Mb/sec (329691488-333706480(3.31713e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:37 +0000 (10:17 +0930)]
virtio_ring: assume sgs are always well-formed.
We used to have several callers which just used arrays. They're
gone, so we can use sg_next() everywhere, simplifying the code.
On my laptop, this slowed down vring_bench by 15%:
vring_bench before:
936153354-967745359(9.44739e+08+/-6.1e+06)ns
vring_bench after:
1061485790-1104800648(1.08254e+09+/-6.6e+06)ns
However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw
a few percent improvement:
pktgen before:
767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
pktgen after:
787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rusty Russell [Thu, 11 Sep 2014 00:47:36 +0000 (10:17 +0930)]
virtio_net: pass well-formed sgs to virtqueue_add_*()
This is the only driver which doesn't hand virtqueue_add_inbuf and
virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it,
so we can make virtio_add_* simpler.
pktgen results:
modprobe pktgen
echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
echo nowait 1 > /proc/net/pktgen/eth0
echo count 1000000 > /proc/net/pktgen/eth0
echo clone_skb 100000 > /proc/net/pktgen/eth0
echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
echo dst 192.168.1.2 > /proc/net/pktgen/eth0
for i in `seq 20`; do echo start > /proc/net/pktgen/pgctrl; tail -n1 /proc/net/pktgen/eth0; done
Before:
746547-793084(786421+/-9.6e+03)pps 346-367(364.4+/-4.4)Mb/sec (346397808-367990976(3.649e+08+/-4.5e+06)bps) errors: 0
After:
767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 16:43:24 +0000 (12:43 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/jkirsher/net-next
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2014-09-12
This series contains updates to e1000, ixgbe and ixgbevf.
Mark provides two fixes to reduce compile warnings produced by ixgbe
and ixgbevf.
Alex provides two patches for ixgbe. The first removes the receive buffer
allocation at the end of ixgbe_clean_rx_irq(); the reason for
removing this is to avoid the extra latency introduced by the MMIO write.
The second patch addresses several issues in the current ixgbe implementation
of busy poll sockets. It was possible for frames to be delivered out of
order if they were held in GRO, so address this by flushing the GRO
buffers before releasing the q_vector back to the idle state. Also, we
were having to take a spinlock on changing the state to and from idle,
so to resolve this, replace the state value with an atomic and use
atomic_cmpxchg to change the value from idle, and a simple atomic set
to restore it back to idle after we have acquired it. This allows us
to only use a locked operation on acquiring the vector, without needing
a locked operation to release it.
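A hedged sketch of that acquire/release scheme (state values and helper names
are illustrative, not the driver's):
    /* acquiring the vector is the only place that needs a locked op */
    static inline bool qv_try_lock(struct my_q_vector *qv)
    {
            return atomic_cmpxchg(&qv->state, QV_IDLE, QV_BUSY) == QV_IDLE;
    }

    /* releasing it is a plain atomic store back to idle */
    static inline void qv_unlock(struct my_q_vector *qv)
    {
            atomic_set(&qv->state, QV_IDLE);
    }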
Florian Westphal provides several patches for e1000 that do some
cleanup and updating of the driver. He moved e1000_tbi_adjust_stats()
so that he could make the function static, added a helper function
to deal with the tbi workaround that was located in 2 different
Rx clean functions, added an e1000_rx_buffer struct for use on receive
since transmit and receive have different requirements, and updated
e1000 to use the napi_gro_frags API.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Sep 2014 16:30:33 +0000 (12:30 -0400)]
Merge branch 'sched_rcu'
John Fastabend says:
====================
net/sched rcu classifiers and tcf
This series converts the tcf_proto usage to RCU.
This requires updating each classifier individually to handle the
new copy/update requirement and also to update the core list
traversals. This makes the assumption that updates to the tables
are infrequent in comparison to the packets per second being
classified. On a 10Gbps link running near line rate we can easily
produce 12+ million packets per second, so IMO this is a reasonable
assumption. The updates are serialized by RTNL.
I have done some basic testing on this series and do not see any
immediate splats or issues. The patch series has been running
on my dev systems for a month or so now and I've not seen any
issues. Although my configurations are not overly complicated.
My test cases at this point cover all the filters with a
tight loop to add/remove filters. Some basic estimator tests
where I add an estimator to the qdisc and verify the statistics are
accurate using pktgen. And finally I have a small script to
exercise the 'tc actions' interface. Feel free to send me more
tests off list and I can run them.
This is prep work to drop the qdisc lock with the first
target being the ingress qdisc. To be done is making the
tc actions RCU safe and statistics per cpu. These patches
are in the works.
Comments:
- Checkpatch is still giving errors on some >80 char lines; I know
about this. IMO the way to fix this is to restructure the sched
code to avoid being so heavily indented. But doing this here
bloats the patchset and anyways there are already lots of >80
chars in these files. I would prefer to keep the patches as is
but let me know if others think I should fix these and I will.
A follow up patch set could restructure the code and fix this
throughout the code blocks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:10:24 +0000 (20:10 -0700)]
net: sched: rcu'ify cls_bpf
This patch makes the cls_bpf classifier RCU safe. The tcf_lock
was being used to protect a list of cls_bpf_prog; now this list
is RCU safe and updates occur with rcu_replace.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:09:49 +0000 (20:09 -0700)]
net: sched: rcu'ify cls_rsvp
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:09:16 +0000 (20:09 -0700)]
net: sched: make cls_u32 lockless
Make the cls_u32 classifier safe to run without holding a lock. This patch
converts statistics that are kept in the u32_classify() read section into
per cpu counters.
This patch was tested with a tight u32 filter add/delete loop while
generating traffic with pktgen. By running pktgen on vlan devices
created on top of a physical device we can hit the qdisc layer
correctly. For ingress qdisc's a loopback cable was used.
for i in {1..100}; do
q=`echo $i%8|bc`;
echo -n "u32 tos: iteration $i on queue $q";
tc filter add dev p3p2 parent $p prio $i u32 match ip tos 0x10 0xff \
action skbedit queue_mapping $q;
sleep 1;
tc filter del dev p3p2 prio $i;
echo -n "u32 tos hash table: iteration $i on queue $q";
tc filter add dev p3p2 parent $p protocol ip prio $i handle 628: u32 divisor 1
tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
match ip protocol 17 0xff link 628: offset at 0 mask 0xf00 shift 6 plus 0
tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
ht 628:0 match ip tos 0x10 0xff action skbedit queue_mapping $q
sleep 2;
tc filter del dev p3p2 prio $i
sleep 1;
done
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:08:47 +0000 (20:08 -0700)]
net: sched: make cls_u32 per cpu
This uses per cpu counters in cls_u32 in preparation
to convert over to rcu.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:08:20 +0000 (20:08 -0700)]
net: sched: RCU cls_tcindex
Make cls_tcindex RCU safe.
This patch adds a new RCU routine, rcu_dereference_bh_rtnl(), to check that the
caller holds either the rcu read lock or RTNL. This is needed to
handle the case where tcindex_lookup() is called in both contexts.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:07:50 +0000 (20:07 -0700)]
net: sched: RCU cls_route
RCUify the route classifier. For now, however, spinlocks are used to
protect the fastmap cache.
The issue here is the fastmap may be read by one CPU while the
cache is being updated by another. An array of pointers could be
one possible solution.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:07:22 +0000 (20:07 -0700)]
net: sched: fw use RCU
RCU'ify fw classifier.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:06:55 +0000 (20:06 -0700)]
net: sched: cls_flow use RCU
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Sat, 13 Sep 2014 03:06:26 +0000 (20:06 -0700)]
net: sched: cls_cgroup use RCU
Make cgroup classifier safe for RCU.
Also drops the calls in the classify routine that were doing a
rcu_read_lock()/rcu_read_unlock(). If the rcu_read_lock() isn't held
entering this routine we have issues with deleting the classifier
chain so remove the unnecessary rcu_read_lock()/rcu_read_unlock()
pair noting all paths AFAIK hold rcu_read_lock.
If there is a case where classify is called without the rcu read lock
then an rcu splat will occur and we can correct it.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>