GitHub: moto-9609/android_kernel_motorola_exynos9610.git
8 years ago batman-adv: Convert batadv_tt_orig_list_entry to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:50 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tt_orig_list_entry to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.
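
A minimal sketch of the conversion pattern, assuming an illustrative type
name (this is not the actual batman-adv diff):

    #include <linux/kref.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct batadv_example {
            struct kref refcount;
            struct rcu_head rcu;
    };

    static void batadv_example_release(struct kref *ref)
    {
            struct batadv_example *e;

            e = container_of(ref, struct batadv_example, refcount);
            kfree_rcu(e, rcu);
    }

    /* was: atomic_inc(&e->refcount); */
    kref_get(&e->refcount);
    /* was: if (atomic_dec_and_test(&e->refcount)) kfree_rcu(e, rcu); */
    kref_put(&e->refcount, batadv_example_release);

With kref, the release callback is named at every put site, which is what
makes the reference counting visible to the reader.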

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_tvlv_handler to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:49 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tvlv_handler to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_tvlv_container to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:48 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tvlv_container to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_dat_entry to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:47 +0000 (10:29 +0100)]
batman-adv: Convert batadv_dat_entry to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_nc_path to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:46 +0000 (10:29 +0100)]
batman-adv: Convert batadv_nc_path to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_nc_node to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:45 +0000 (10:29 +0100)]
batman-adv: Convert batadv_nc_node to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_bla_claim to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:44 +0000 (10:29 +0100)]
batman-adv: Convert batadv_bla_claim to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_bla_backbone_gw to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:43 +0000 (10:29 +0100)]
batman-adv: Convert batadv_bla_backbone_gw to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_softif_vlan to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:42 +0000 (10:29 +0100)]
batman-adv: Convert batadv_softif_vlan to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_gw_node to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:41 +0000 (10:29 +0100)]
batman-adv: Convert batadv_gw_node to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Convert batadv_hardif_neigh_node to kref
Sven Eckelmann [Sat, 16 Jan 2016 09:29:40 +0000 (10:29 +0100)]
batman-adv: Convert batadv_hardif_neigh_node to kref

batman-adv uses a self-written reference counting implementation which is
just based on atomic_t. This is less obvious when reading the code than kref
and therefore increases the chance that the reference counting will be missed.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Add lockdep assert for container_list_lock
Sven Eckelmann [Sun, 20 Dec 2015 08:04:03 +0000 (09:04 +0100)]
batman-adv: Add lockdep assert for container_list_lock

The batadv_tvlv_container* functions state in their kernel-doc that they
require tvlv.container_list_lock. Add an assert to automatically detect
when this might have been ignored by the caller.
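
A sketch of the pattern being added (function body elided; the signature is
an approximation of the batman-adv code):

    #include <linux/lockdep.h>

    static struct batadv_tvlv_container *
    batadv_tvlv_container_get(struct batadv_priv *bat_priv, u8 type, u8 version)
    {
            lockdep_assert_held(&bat_priv->tvlv.container_list_lock);

            /* ... walk bat_priv->tvlv.container_list as before ... */
            return NULL;
    }

lockdep_assert_held() compiles away on non-lockdep builds and splats at
runtime on lockdep builds whenever a caller forgets the lock.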

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: add seqno maximum age and protection start flag parameters
Simon Wunderlich [Mon, 23 Nov 2015 18:57:22 +0000 (19:57 +0100)]
batman-adv: add seqno maximum age and protection start flag parameters

To allow future use of the window-protection function with different
maximum sequence numbers, add a parameter to set this value, which
was previously hardcoded. Another parameter added for future use is a
flag returning whether the protection window has started.

While at it, also fix the kerneldoc.

Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago batman-adv: Drop reference to netdevice on last reference
Sven Eckelmann [Tue, 5 Jan 2016 11:06:26 +0000 (12:06 +0100)]
batman-adv: Drop reference to netdevice on last reference

The references to the network device should be dropped inside the release
function for batadv_hard_iface, similar to what is done with the batman-adv
internal data structures.

Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <a@unstable.cc>
8 years ago sxgbe: remove unused code
Jean Sacren [Wed, 10 Feb 2016 03:47:17 +0000 (20:47 -0700)]
sxgbe: remove unused code

Remove the unused code of sxgbe_xpcs.

Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Suggested-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Cc: Byungho An <bh74.an@samsung.com>
Cc: Girish K S <ks.giri@samsung.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1601191918470.2531@hadrien
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'renesas-bit-twiddling'
David S. Miller [Wed, 10 Feb 2016 10:38:19 +0000 (05:38 -0500)]
Merge branch 'renesas-bit-twiddling'

Sergei Shtylyov says:

====================
Factor out register bit twiddling in the Renesas Ethernet drivers

   Here's a set of 2 patches against DaveM's 'net-next.git' repo. We factor out
the often-repeated pattern of reading a register, AND'ing and/or OR'ing some
bits, and then writing the value back.

[1/2] ravb: factor out register bit twiddling code
[2/2] sh_eth: factor out register bit twiddling code
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago sh_eth: factor out register bit twiddling code
Sergei Shtylyov [Tue, 9 Feb 2016 22:38:28 +0000 (01:38 +0300)]
sh_eth: factor out register bit twiddling code

The driver has an often-repeated pattern of reading a register, AND'ing and/or
OR'ing some bits, and writing the value back. Factor the pattern out into
sh_eth_modify() -- this saves 84 bytes of code with ARM gcc 4.7.3.

While at it, update Cogent Embedded's copyright.
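
The factored-out helper is essentially a masked read-modify-write; a sketch
built on the driver's existing sh_eth_read()/sh_eth_write() accessors (the
exact signature in the patch may differ):

    static void sh_eth_modify(struct net_device *ndev, int reg,
                              u32 clear, u32 set)
    {
            sh_eth_write(ndev, (sh_eth_read(ndev, reg) & ~clear) | set, reg);
    }

Call sites then collapse from read/modify/write triples to single lines such
as sh_eth_modify(ndev, ECMR, ECMR_RE | ECMR_TE, 0).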

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ravb: factor out register bit twiddling code
Sergei Shtylyov [Tue, 9 Feb 2016 22:37:44 +0000 (01:37 +0300)]
ravb: factor out register bit twiddling code

The driver has an often-repeated pattern of reading a register, AND'ing and/or
OR'ing some bits, and writing the value back. Factor the pattern out into
ravb_modify() -- this saves 260 bytes of code with ARM gcc 4.7.3.

While at it, update Cogent Embedded's copyrights.

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'tpacket-gso-csum-offload'
David S. Miller [Tue, 9 Feb 2016 11:43:59 +0000 (06:43 -0500)]
Merge branch 'tpacket-gso-csum-offload'

Willem de Bruijn says:

====================
packet: tpacket gso and csum offload

Extend PACKET_VNET_HDR socket option support to packet sockets with
memory mapped rings.

Patches 2 and 4 add support to tpacket_rcv and tpacket_snd.

Patch 1 prepares for this by moving the relevant virtio_net_hdr
logic out of packet_snd and packet_rcv into helper functions.

GSO transmission requires all headers in the skb linear section.
Patch 3 moves parsing of tx_ring slot headers before skb allocation
to enable allocation with sufficient linear size.

Changes
  v1->v2:
    - fix bounds checks:
      - subtract sizeof(vnet_hdr) before comparing tp_len to size_max
      - compare tp_len to size_max also with GSO, just do not truncate to MTU
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago packet: tpacket_snd gso and checksum offload
Willem de Bruijn [Wed, 3 Feb 2016 23:02:17 +0000 (18:02 -0500)]
packet: tpacket_snd gso and checksum offload

Support socket option PACKET_VNET_HDR together with PACKET_TX_RING.

When enabled, a struct virtio_net_hdr is expected to precede the data
in the ring. The vnet option must be set before the ring is created.

The implementation reuses the existing skb_copy_bits code that is used
when dev->hard_header_len is non-zero. Move this ll_header check to
before the skb alloc and combine it with a test for vnet_hdr->hdr_len.
Allocate and copy the max of the two.

Verified with test program at
github.com/wdebruij/kerneltools/blob/master/tests/psock_txring_vnet.c

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago packet: parse tpacket header before skb alloc
Willem de Bruijn [Wed, 3 Feb 2016 23:02:16 +0000 (18:02 -0500)]
packet: parse tpacket header before skb alloc

GSO packet headers must be stored in the linear skb segment.
Move tpacket header parsing before sock_alloc_send_skb. The GSO
follow-on patch will later increase the skb linear argument to
sock_alloc_send_skb if needed for large packets.

The header parsing code does not require an allocated skb, so it is
safe to move. The computed data start and length are then passed to
tpacket_fill_skb().

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago packet: vnet_hdr support for tpacket_rcv
Willem de Bruijn [Wed, 3 Feb 2016 23:02:15 +0000 (18:02 -0500)]
packet: vnet_hdr support for tpacket_rcv

Support socket option PACKET_VNET_HDR together with PACKET_RX_RING.
When enabled, a struct virtio_net_hdr will precede the data in the
packet ring slots.

Verified with test program at
github.com/wdebruij/kerneltools/blob/master/tests/psock_rxring_vnet.c

  pkt: 1454269209.798420 len=5066
  vnet: gso_type=tcpv4 gso_size=1448 hlen=66 ecn=off
  csum: start=34 off=16
  eth: proto=0x800
  ip: src=<masked> dst=<masked> proto=6 len=5052

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago packet: move vnet_hdr code to helper functions
Willem de Bruijn [Wed, 3 Feb 2016 23:02:14 +0000 (18:02 -0500)]
packet: move vnet_hdr code to helper functions

packet_snd and packet_rcv support virtio net headers for GSO.
Move this logic into helper functions to be able to reuse it in
tpacket_snd and tpacket_rcv.

This is a straightforward code move with one exception. Instead of
creating and passing a separate gso_type variable, reuse
vnet_hdr.gso_type after conversion from virtio to kernel gso type.

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago bonding: 3ad: apply ad_actor settings changes immediately
Nikolay Aleksandrov [Wed, 3 Feb 2016 12:17:01 +0000 (13:17 +0100)]
bonding: 3ad: apply ad_actor settings changes immediately

Currently bonding allows setting ad_actor_system and prio while the
bond device is down, but these are actually applied only if there aren't
any slaves yet (applied to the bond device when the first slave shows up, and
to slaves at 3ad bind time). After this patch, changes are applied immediately
and the new values can be used/seen after the bond is upped, so it is no longer
necessary to release all slaves and enslave them again to see the changes.

CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <gospo@cumulusnetworks.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Jay Vosburgh <jay.vosburgh@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'bridge-mdb-entry-offload-flag'
David S. Miller [Tue, 9 Feb 2016 09:42:55 +0000 (04:42 -0500)]
Merge branch 'bridge-mdb-entry-offload-flag'

Jiri Pirko says:

====================
bridge: mdb: flag offloaded mdb entries

This patchset extends uapi to let the user know if an mdb entry is offloaded.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago bridge: mdb: Passing the port-group pointer to br_mdb module
Elad Raz [Wed, 3 Feb 2016 08:57:06 +0000 (09:57 +0100)]
bridge: mdb: Passing the port-group pointer to br_mdb module

Pass the port-group pointer to br_mdb in order to allow direct access to the
structure. br_mdb will later use the structure to reflect the HW offload
status via the "state" variable.

Signed-off-by: Elad Raz <eladr@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago bridge: mdb: Separate br_mdb_entry->state from net_bridge_port_group->state
Elad Raz [Wed, 3 Feb 2016 08:57:05 +0000 (09:57 +0100)]
bridge: mdb: Separate br_mdb_entry->state from net_bridge_port_group->state

Change the net_bridge_port_group 'state' member to 'flags' and define a new
set of flags internal to the kernel.

Signed-off-by: Elad Raz <eladr@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago bridge: mdb: add support for offloaded mdb entries
Elad Raz [Wed, 3 Feb 2016 08:57:04 +0000 (09:57 +0100)]
bridge: mdb: add support for offloaded mdb entries

Add a new bitmask member 'flags' to the br_mdb_entry structure, with an
MDB_FLAGS_OFFLOAD bit which indicates that an MDB entry is offloaded to hardware.
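
The uapi addition amounts to roughly this (a sketch; surrounding members of
struct br_mdb_entry are abbreviated):

    #define MDB_FLAGS_OFFLOAD       (1 << 0)

    struct br_mdb_entry {
            __u32 ifindex;
            __u8 state;
            __u8 flags;     /* new member, carries MDB_FLAGS_OFFLOAD */
            __u16 vid;
            /* ... multicast address ... */
    };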

Signed-off-by: Elad Raz <eladr@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago bonding: trivial: style fixes
Zhang Shengju [Wed, 3 Feb 2016 02:02:32 +0000 (02:02 +0000)]
bonding: trivial: style fixes

Remove some redundant brackets and use sizeof(*x) instead of sizeof(struct x).

Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: Fix syncookies sysctl default.
David S. Miller [Mon, 8 Feb 2016 09:24:33 +0000 (04:24 -0500)]
tcp: Fix syncookies sysctl default.

The default was unintentionally changed to zero; fix
that.

Fixes: 12ed8244ed ("ipv4: Namespaceify tcp syncookies sysctl knob")
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'ns-tcp-sysctls'
David S. Miller [Sun, 7 Feb 2016 19:36:21 +0000 (14:36 -0500)]
Merge branch 'ns-tcp-sysctls'

Nikolay Borisov says:

====================
Namespaceify more of the tcp sysctl knobs

This patch series continues making more of the tcp-related
sysctl knobs per net-namespace. Most of these apply per
socket and have global defaults, so the change should be safe
and I don't expect any breakages.

Having those per net-namespace is useful when multiple
containers are hosted and it is required to tune the
tcp settings for each independently of the host node.

I've split the patches to be per-sysctl, but if the review
outcome is positive I'm happy to send them as one big blob
instead.
====================
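
Each patch in the series follows the same pattern; a sketch using the syn
retries knob (field placement in struct netns_ipv4 is illustrative):

    /* include/net/netns/ipv4.h */
    struct netns_ipv4 {
            /* ... */
            int sysctl_tcp_syn_retries;
    };

    /* readers switch from the global sysctl_tcp_syn_retries to: */
    int retries = sock_net(sk)->ipv4.sysctl_tcp_syn_retries;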

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp_notsent_lowat sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:57 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp_notsent_lowat sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp_fin_timeout sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:56 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp_fin_timeout sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp_orphan_retries sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:55 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp_orphan_retries sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp_retries2 sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:54 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp_retries2 sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp_retries1 sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:53 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp_retries1 sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp reordering sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:52 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp reordering sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp syncookies sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:51 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp syncookies sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp synack retries sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:50 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp synack retries sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ipv4: Namespaceify tcp syn retries sysctl knob
Nikolay Borisov [Wed, 3 Feb 2016 07:46:49 +0000 (09:46 +0200)]
ipv4: Namespaceify tcp syn retries sysctl knob

Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge
David S. Miller [Sun, 7 Feb 2016 19:32:50 +0000 (14:32 -0500)]
Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge

Antonio Quartulli says:

====================
This batch of patches includes a number of corrections and
improvements for our kernel-doc. These changes also make sure
that our doc is now properly processed by the kernel-doc
parsing tool.

Other than that you have a patch updating all the copyright
lines to 2016 and another patch switching the URLs in our
readme, Kconfig and MAINTAINERS file from "http" to "https".
Both by Sven Eckelmann.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'virtio_net_ethtool_settings'
David S. Miller [Sun, 7 Feb 2016 19:30:55 +0000 (14:30 -0500)]
Merge branch 'virtio_net_ethtool_settings'

Nikolay Aleksandrov says:

====================
virtio_net: add ethtool get/set settings support

Patch 1 adds ethtool speed/duplex validation functions which check if the
value is defined. Patch 2 adds support for ethtool (get|set)_settings and
uses the validation functions to check the user-supplied values.

v2: split in 2 patches to allow everyone to make use of the validation
functions and allow virtio_net devices to be half duplex
v3: added a check to return error if the user tries to change anything else
besides duplex/speed as per Michael's comment
v4: Set port type to PORT_OTHER
v5: clear diff1.port (ignore port) when checking for changes since we set
it now and ethtool uses it in the set request

Sorry about the pointless iterations; it should all be covered now.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago virtio_net: add ethtool support for set and get of settings
Nikolay Aleksandrov [Wed, 3 Feb 2016 03:04:37 +0000 (04:04 +0100)]
virtio_net: add ethtool support for set and get of settings

This patch allows the user to set and retrieve speed and duplex of the
virtio_net device via ethtool. Having this functionality is very helpful
for simulating different environments and also enables the virtio_net
device to participate in operations where proper speed and duplex are
required (e.g. currently bonding lacp mode requires full duplex). Custom
speed and duplex are not allowed, the user-supplied settings are validated
before applying.

Example:
$ ethtool eth1
Settings for eth1:
...
Speed: Unknown!
Duplex: Unknown! (255)
$ ethtool -s eth1 speed 1000 duplex full
$ ethtool eth1
Settings for eth1:
...
Speed: 1000Mb/s
Duplex: Full

Based on a patch by Roopa Prabhu.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago ethtool: add speed/duplex validation functions
Nikolay Aleksandrov [Wed, 3 Feb 2016 03:04:36 +0000 (04:04 +0100)]
ethtool: add speed/duplex validation functions

Add functions which check if the speed/duplex are defined.
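
For instance, the duplex check can be a simple switch over the defined
values (a sketch; the speed variant enumerates every defined SPEED_* value):

    static inline int ethtool_validate_duplex(__u8 duplex)
    {
            switch (duplex) {
            case DUPLEX_HALF:
            case DUPLEX_FULL:
            case DUPLEX_UNKNOWN:
                    return 1;
            }

            return 0;
    }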

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'sunvnet-tracepoints'
David S. Miller [Sun, 7 Feb 2016 19:13:12 +0000 (14:13 -0500)]
Merge branch 'sunvnet-tracepoints'

Sowmini Varadhan says:

====================
sunvnet: perf tracepoint hooks

Added some perf tracepoints to help track and debug sunvnet
descriptor state at run-time.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago sunvnet: perf tracepoint invocations to trace LDC state machine
Sowmini Varadhan [Tue, 2 Feb 2016 18:41:56 +0000 (10:41 -0800)]
sunvnet: perf tracepoint invocations to trace LDC state machine

Use sunvnet perf trace macros to monitor LDC message exchange state.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago sunvnet: Add support for perf LDC event tracing
Sowmini Varadhan [Tue, 2 Feb 2016 18:41:55 +0000 (10:41 -0800)]
sunvnet: Add support for perf LDC event tracing

Add perf event macros to support tracing and instrumentation
of the LDC state machine.

Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'tcp_cong_ctrl_refactoring'
David S. Miller [Sun, 7 Feb 2016 19:09:59 +0000 (14:09 -0500)]
Merge branch 'tcp_cong_ctrl_refactoring'

Yuchung Cheng says:

====================
tcp: congestion control refactoring

This patch set refactors the sequence of congestion control,
loss recovery, and transmission logic in TCP ack processing.

The design goal is to decouple and sequence them in the following order:

  0. ACK accounting: free or tag sent packets [unchanged]

  1. loss recovery: identify lost/ecn packets and update congestion state

  2. congestion control: up/down cwnd and pacing rate based on (1)

  3. transmission: send new or retransmit old based on (1) and (2)

This refactoring makes the cwnd changes clearer because they are done
in one place. The packet accounting is also more robust, especially
for connections that do not support SACK. Patches 1-4 and 6 are
refactoring and patch 5 improves TCP performance under reordering.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: tcp_cong_control helper
Yuchung Cheng [Tue, 2 Feb 2016 18:33:09 +0000 (10:33 -0800)]
tcp: tcp_cong_control helper

Refactor and consolidate cwnd and rate updates into a new function
tcp_cong_control().
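
The consolidated helper ends up with roughly this shape (a sketch, not a
verbatim copy of the patch):

    static void tcp_cong_control(struct sock *sk, u32 ack,
                                 u32 acked_sacked, int flag)
    {
            if (tcp_in_cwnd_reduction(sk)) {
                    /* Reduce cwnd if state mandates */
                    tcp_cwnd_reduction(sk, acked_sacked, flag);
            } else if (tcp_may_raise_cwnd(sk, flag)) {
                    /* Advance cwnd if state allows */
                    tcp_cong_avoid(sk, ack, acked_sacked);
            }
            tcp_update_pacing_rate(sk);
    }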

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: make congestion control more robust against reordering
Yuchung Cheng [Tue, 2 Feb 2016 18:33:08 +0000 (10:33 -0800)]
tcp: make congestion control more robust against reordering

This change enables congestion control to update cwnd based not only
on packets cumulatively acked but also on packets delivered
out-of-order. This makes congestion control robust against packet
reordering because it may raise cwnd as long as packets are being
delivered once reordering has been detected (i.e., it only cares about
the amount of packets delivered, not the ordering among them).

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: refactor pkts acked accounting
Yuchung Cheng [Tue, 2 Feb 2016 18:33:07 +0000 (10:33 -0800)]
tcp: refactor pkts acked accounting

A small refactoring that gets the number of packets cumulatively acked
from tcp_clean_rtx_queue() directly.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: new delivery accounting
Yuchung Cheng [Tue, 2 Feb 2016 18:33:06 +0000 (10:33 -0800)]
tcp: new delivery accounting

This patch changes the accounting of how many packets are
newly acked or sacked when the sender receives an ACK.

The current approach basically computes

   newly_acked_sacked = (prior_packets - prior_sacked) -
                        (tp->packets_out - tp->sacked_out)

   where prior_packets and prior_sacked are snapshots taken
   at the beginning of the ACK processing.

The new approach tracks the delivery information via a new
TCP state variable "delivered" which monotonically increases
as new packets are delivered, in order or out-of-order.

The reason for this change is that the current approach is
brittle and can produce negative or inaccurate estimates.

   1) For non-SACK connections, an ACK that advances the SND.UNA
   could reset the DUPACK counters (tp->sacked_out) in
   tcp_process_loss() or tcp_fastretrans_alert(). This inflates
   the inflight suddenly and causes under-estimate or even
   negative estimate. Here is a real example:

                   before   after (processing ACK)
   packets_out     75       73
   sacked_out      23        0
   ca state        Loss     Open

   The old approach computes (75-23) - (73 - 0) = -21 delivered
   while the new approach computes 1 delivered, since it
   considers the 2nd-24th packets to have been delivered out-of-order.

   2) MSS change would re-count packets_out and sacked_out so
   the estimate is inaccurate and can even become negative.
   E.g., the inflight is doubled when MSS is halved.

   3) Spurious retransmission signaled by DSACK is not accounted for.

The new approach is simpler and more robust. For SACK connections,
tp->delivered increments as packets are being acked or sacked in
SACK and ACK processing.

For non-sack connections, it's done in tcp_remove_reno_sacks() and
tcp_add_reno_sack(). When an ACK advances the SND.UNA, tp->delivered
is incremented by the number of packets ACKed (less the current
number of DUPACKs received plus one packet hole).  Upon receiving
a DUPACK, tp->delivered is incremented assuming one out-of-order
packet is delivered.

Upon receiving a DSACK, tp->delivered is incremented assuming one
retransmission is delivered in tcp_sacktag_write_queue().
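
With the monotonic counter, per-ACK accounting reduces to a simple delta
(sketch):

    u32 prior_delivered = tp->delivered;

    /* ... ACK/SACK processing increments tp->delivered ... */

    u32 newly_acked_sacked = tp->delivered - prior_delivered;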

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: move cwnd reduction after recovery state processing
Yuchung Cheng [Tue, 2 Feb 2016 18:33:05 +0000 (10:33 -0800)]
tcp: move cwnd reduction after recovery state processing

Currently the cwnd is reduced and increased in various different
places. The reduction happens in various places in the recovery
state processing (tcp_fastretrans_alert) while the increase
happens afterward.

A better sequence is to identify lost packets and update
the congestion control state (icsk_ca_state) first. Then, based
on the new state, raise or lower the cwnd in one central place. This
makes it clearer to reason about cwnd changes.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: retransmit after recovery processing and congestion control
Yuchung Cheng [Tue, 2 Feb 2016 18:33:04 +0000 (10:33 -0800)]
tcp: retransmit after recovery processing and congestion control

The retransmission and F-RTO transmission currently happen inside
recovery state processing (tcp_fastretrans_alert) but before
congestion control. This refactoring moves the logic after both,
so that we can determine how much to send (cwnd) before deciding what
to send.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago net: drop write-only stack variable
David Herrmann [Tue, 2 Feb 2016 17:17:54 +0000 (18:17 +0100)]
net: drop write-only stack variable

Remove a write-only stack variable from unix_attach_fds(). This is a
left-over from the security fix in:

    commit 712f4aad406bb1ed67f3f98d04c044191f0ff593
    Author: willy tarreau <w@1wt.eu>
    Date:   Sun Jan 10 07:54:56 2016 +0100

        unix: properly account for FDs passed over unix sockets

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago net: Add support for fill_slave_info to VRF device
David Ahern [Tue, 2 Feb 2016 15:43:45 +0000 (07:43 -0800)]
net: Add support for fill_slave_info to VRF device

Allow userspace direct access to the VRF table association instead of
looking up the master device and its table.

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago xen-netback: implement dynamic multicast control
Paul Durrant [Tue, 2 Feb 2016 11:55:05 +0000 (11:55 +0000)]
xen-netback: implement dynamic multicast control

My recent patch to the Xen Project documents a protocol for 'dynamic
multicast control' in netif.h. This extends the previous multicast control
protocol to not require a shared ring reconnection to turn the feature off.
Instead the backend watches the "request-multicast-control" key in xenstore
and turns the feature off if the key value is written to zero.

This patch adds support for dynamic multicast control in xen-netback.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'be2net-non-critical-fixes'
David S. Miller [Sun, 7 Feb 2016 18:55:29 +0000 (13:55 -0500)]
Merge branch 'be2net-non-critical-fixes'

Sriharsha Basavapatna says:

====================
be2net patch-set

v2 changes:
Patch-4: Changed a tab to space in be.h
Patches-6,7,8: Updated commit log summary line: benet --> be2net

Hi David,

The following patch set contains a few non-critical bug fixes. Please
consider applying this to the net-next tree. Thanks.

Patch-1 fixes be_set_phys_id() ethtool function to return an error code.
Patch-2 fixes a warning when some commands fail for VFs.
Patch-3 fixes be_vlan_rem_vid() to verify vlan being removed is in the list.
Patch-4 improves SRIOV queue distribution logic.
Patch-5 avoids running self test on VFs.
Patch-6 fixes error recovery in Lancer to clean up after moving to ready state.
Patch-7 adds retry logic to error recovery in case of recovery failures
Patch-8 fixes time interval used in eq delay computation routine
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: Fix interval calculation in interrupt moderation
Padmanabh Ratnakar [Wed, 3 Feb 2016 04:19:23 +0000 (09:49 +0530)]
be2net: Fix interval calculation in interrupt moderation

Interrupt moderation parameters need to be recalculated only
after a time interval of 1 ms. The interval calculation is wrong
when jiffies rolls over. Use the recommended jiffies-based way of
calculating the interval to fix this.
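
A rollover-safe version of the check looks like this (a sketch; the aic
field names are assumptions):

    unsigned long now = jiffies;

    /* recompute moderation at most once per millisecond, rollover-safe */
    if (time_before(now, aic->jiffies) ||
        jiffies_to_msecs(now - aic->jiffies) < 1)
            return;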

Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: Add retry in case of error recovery failure
Padmanabh Ratnakar [Wed, 3 Feb 2016 04:19:22 +0000 (09:49 +0530)]
be2net: Add retry in case of error recovery failure

Retry error recovery up to MAX_ERR_RECOVERY_RETRY_COUNT times in case of
failure during error recovery.

Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: Fix Lancer error recovery
Padmanabh Ratnakar [Wed, 3 Feb 2016 04:19:21 +0000 (09:49 +0530)]
be2net: Fix Lancer error recovery

After an error is detected, wait for the adapter to move to the ready state
before destroying queues and cleaning up other resources. Also
skip performing any cleanup for non-Lancer chips and move debug
messages to the correct routine.

Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: Don't run ethtool self-tests for VFs
Somnath Kotur [Wed, 3 Feb 2016 04:19:20 +0000 (09:49 +0530)]
be2net: Don't run ethtool self-tests for VFs

The CMD_SUBSYSTEM_LOWLEVEL cmds need the DEV_CFG privilege to run,
which VFs don't have by default.
Self-tests need to be issued only for PFs.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: SRIOV Queue distribution should factor in EQ-count of VFs
Sriharsha Basavapatna [Wed, 3 Feb 2016 04:19:19 +0000 (09:49 +0530)]
be2net: SRIOV Queue distribution should factor in EQ-count of VFs

The SRIOV resource distribution logic for RX/TX queue counts is not optimal
when a small number of VFs are enabled. It does not take into account the
VF's EQ count while computing the queue counts. Because of this, the VF
gets a large number of queues, though it doesn't have sufficient EQs,
resulting in wasted queue resources. And the PF gets a smaller share of
queues though it has more EQs. Fix this by capping the VF queue count at
its EQ count.

Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: Fix be_vlan_rem_vid() to check vlan id being removed
Sriharsha Basavapatna [Wed, 3 Feb 2016 04:19:18 +0000 (09:49 +0530)]
be2net: Fix be_vlan_rem_vid() to check vlan id being removed

The driver decrements its vlan count without checking whether the vlan is
really present in its list. This results in an invalid vlan count and impacts
subsequent vlan add/rem ops. The function be_vlan_rem_vid() should be
updated to fix this.

Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: check for INSUFFICIENT_PRIVILEGES error
Suresh Reddy [Wed, 3 Feb 2016 04:19:17 +0000 (09:49 +0530)]
be2net: check for INSUFFICIENT_PRIVILEGES error

The driver currently logs the message "VF is not privileged to issue
opcode" by checking only the base_status field for UNAUTHORIZED_REQUEST.
Add a check for INSUFFICIENT_PRIVILEGES in the additional status
field as well, since not all cmds fail with that base status.

Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago be2net: return error status from be_set_phys_id()
Suresh Reddy [Wed, 3 Feb 2016 04:19:16 +0000 (09:49 +0530)]
be2net: return error status from be_set_phys_id()

be_set_phys_id() returns 0 to ethtool when the command fails in the FW.

This patch fixes be_set_phys_id() to return -EIO in case the FW cmd fails.

Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'vxlan-headers-and-tx-cleanups'
David S. Miller [Sun, 7 Feb 2016 18:51:07 +0000 (13:51 -0500)]
Merge branch 'vxlan-headers-and-tx-cleanups'

Jiri Benc says:

====================
vxlan: cleanup headers and tx path

This is the first part of the vxlan cleanup. It makes vxlan.h better organized
and removes code duplication in the tx path. There's no change in functionality.

This is in preparation for VXLAN-GPE support. Other sets will follow with
cleanup of rx path, modifications of vxlan interface setup and
cleanup/modification of fdb handling. The goal of these patchsets is to
consolidate VXLAN extension handling (right now it's scattered all around
the code) and allow vxlan to operate in L3 mode (i.e. without inner Ethernet
headers).

The full picture (all 30 patches) can be seen at:
https://github.com/jbenc/linux-vxlan/commits/master

Changelog:
v2: added patch 5/6, fixing the udp sum problem spotted by Paolo Abeni
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: consolidate vxlan_xmit_skb and vxlan6_xmit_skb
Jiri Benc [Tue, 2 Feb 2016 17:09:16 +0000 (18:09 +0100)]
vxlan: consolidate vxlan_xmit_skb and vxlan6_xmit_skb

There's a lot of code duplication. Factor out the duplicate code to a new
function shared between the IPv4 and IPv6 xmit paths.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: consolidate csum flag handling
Jiri Benc [Tue, 2 Feb 2016 17:09:15 +0000 (18:09 +0100)]
vxlan: consolidate csum flag handling

The flag for tx checksumming for tunneling over IPv4 and IPv6 is different.
Decide whether to do tx checksumming in vxlan_xmit_one and pass it on as
a separate flag. This will allow for tx path consolidation in the next
patch.

Unfortunately, gcc is not clever enough to see that udp_sum is always
initialized and gives an uninitialized variable warning. Set it to false to
silence the warning.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: consolidate output route calculation
Jiri Benc [Tue, 2 Feb 2016 17:09:14 +0000 (18:09 +0100)]
vxlan: consolidate output route calculation

The code for output route lookup is duplicated for ndo_start_xmit and
ndo_fill_metadata_dst. Move it to a common function.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: restructure vxlan.h definitions
Jiri Benc [Tue, 2 Feb 2016 17:09:13 +0000 (18:09 +0100)]
vxlan: restructure vxlan.h definitions

RCO and GBP are VXLAN extensions, not specified in RFC 7348. Because of
that, they need to be explicitly enabled when creating a vxlan interface. By
default, those extensions are not used and a plain VXLAN header is sent and
received.

Reflect this in vxlan.h: first, the plain VXLAN header is defined. Following
it, RCO is documented and defined, and likewise for GBP.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: remove duplicated macros
Jiri Benc [Tue, 2 Feb 2016 17:09:12 +0000 (18:09 +0100)]
vxlan: remove duplicated macros

VNI_HASH_BITS and VNI_HASH_SIZE are defined twice. Remove the extra
definitions.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago vxlan: cleanup types
Jiri Benc [Tue, 2 Feb 2016 17:09:11 +0000 (18:09 +0100)]
vxlan: cleanup types

include/net/vxlan.h is a kernel header; there is no need to prefix fixed-size
types with a double underscore.

Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tcp: fastopen: call tcp_fin() if FIN present in SYNACK
Eric Dumazet [Sat, 6 Feb 2016 19:16:28 +0000 (11:16 -0800)]
tcp: fastopen: call tcp_fin() if FIN present in SYNACK

When we acknowledge a FIN, it is not enough to ack the sequence number
and queue the skb into the receive queue. We also have to call tcp_fin()
to properly update socket state and send proper poll() notifications.
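
The fix amounts to a hunk of this shape where the SYNACK's payload skb is
queued (a sketch consistent with the description above):

    if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
            tcp_fin(sk);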

It seems we would also have had the problem if we received a SYN packet
with the FIN flag set, but it does not seem an urgent issue, as no known
implementation can do that.

Fixes: 61d2bcae99f6 ("tcp: fastopen: accept data/FIN present in SYNACK message")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago Merge branch 'tipc-topology-updates'
David S. Miller [Sat, 6 Feb 2016 08:42:09 +0000 (03:42 -0500)]
Merge branch 'tipc-topology-updates'

Parthasarathy Bhuvaragan says:

====================
tipc: cleanups, fixes & improvements for topology server

This series contains topology server cleanups, fixes and improvements.

Cleanups in #1-#4:
We remove duplicate data structures and align the rest of the code accordingly.

Fixes in #5-#8:
The fixed bugs are race conditions that occur either during configuration
or while running on SMP targets, popping up under different situations.

Improvements in #9-#10:
Updates to decrease timer usage and improve readability.

v2: Updated commit message in patch 6 based on feedback from
    Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: use alloc_ordered_workqueue() instead of WQ_UNBOUND w/ max_active = 1
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:17 +0000 (10:52 +0100)]
tipc: use alloc_ordered_workqueue() instead of WQ_UNBOUND w/ max_active = 1

Until now, the tipc_rcv and tipc_send workqueues in the server are allocated
with the parameters WQ_UNBOUND & max_active = 1.
These parameters make the workqueue equivalent to one created with
alloc_ordered_workqueue(). The latter form is more explicit and
can inherit future ordered-workqueue changes.

In this commit we replace alloc_workqueue() with the more readable
alloc_ordered_workqueue().
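
The change per call site is mechanical (a sketch; the struct field name is
an assumption):

    /* before */
    s->rcv_wq = alloc_workqueue("tipc_rcv", WQ_UNBOUND, 1);

    /* after: equivalent, but explicit about ordering */
    s->rcv_wq = alloc_ordered_workqueue("tipc_rcv", 0);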

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: do not create timers if subscription timeout = TIPC_WAIT_FOREVER
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:16 +0000 (10:52 +0100)]
tipc: do not create timers if subscription timeout = TIPC_WAIT_FOREVER

Until now, we create timers even for the subscription requests
with timeout = TIPC_WAIT_FOREVER.
This can be improved by avoiding timer creation when the timeout
is set to TIPC_WAIT_FOREVER.

In this commit, we introduce a check so that timers are created only
when timeout != TIPC_WAIT_FOREVER.

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: protect tipc_subscrb_get() with subscriber spin lock
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:15 +0000 (10:52 +0100)]
tipc: protect tipc_subscrb_get() with subscriber spin lock

Until now, during subscription creation mod_timer() &
tipc_subscrb_get() are called after releasing the subscriber
spin lock.

In an SMP system, when performing a subscription creation, if the
subscription timeout occurs simultaneously (the timer is
scheduled to run on another CPU) then the timer thread
might decrement the subscriber's refcount before the create
thread increments the refcount.

This can be simulated by creating a subscription with timeout=0;
sometimes the timeout occurs before the create request is complete.
This leads to the following message:
[30.702949] BUG: spinlock bad magic on CPU#1, kworker/u8:3/87
[30.703834] general protection fault: 0000 [#1] SMP
[30.704826] CPU: 1 PID: 87 Comm: kworker/u8:3 Not tainted 4.4.0-rc8+ #18
[30.704826] Workqueue: tipc_rcv tipc_recv_work [tipc]
[30.704826] task: ffff88003f878600 ti: ffff88003fae0000 task.ti: ffff88003fae0000
[30.704826] RIP: 0010:[<ffffffff8109196c>]  [<ffffffff8109196c>] spin_dump+0x5c/0xe0
[...]
[30.704826] Call Trace:
[30.704826]  [<ffffffff81091a16>] spin_bug+0x26/0x30
[30.704826]  [<ffffffff81091b75>] do_raw_spin_lock+0xe5/0x120
[30.704826]  [<ffffffff81684439>] _raw_spin_lock_bh+0x19/0x20
[30.704826]  [<ffffffffa0096f10>] tipc_subscrb_rcv_cb+0x1d0/0x330 [tipc]
[30.704826]  [<ffffffffa00a37b1>] tipc_receive_from_sock+0xc1/0x150 [tipc]
[30.704826]  [<ffffffffa00a31df>] tipc_recv_work+0x3f/0x80 [tipc]
[30.704826]  [<ffffffff8106a739>] process_one_work+0x149/0x3c0
[30.704826]  [<ffffffff8106aa16>] worker_thread+0x66/0x460
[30.704826]  [<ffffffff8106a9b0>] ? process_one_work+0x3c0/0x3c0
[30.704826]  [<ffffffff8106a9b0>] ? process_one_work+0x3c0/0x3c0
[30.704826]  [<ffffffff8107029d>] kthread+0xed/0x110
[30.704826]  [<ffffffff810701b0>] ? kthread_create_on_node+0x190/0x190
[30.704826]  [<ffffffff81684bdf>] ret_from_fork+0x3f/0x70

In this commit,
1. we remove the check of the return code of mod_timer()
2. we protect tipc_subscrb_get() using the subscriber spin lock.
   We increment the subscriber's refcount as soon as we add the
   subscription to the subscriber's subscription list.

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: hold subscriber->lock for tipc_nametbl_subscribe()
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:14 +0000 (10:52 +0100)]
tipc: hold subscriber->lock for tipc_nametbl_subscribe()

Until now, while creating a subscription the subscriber lock
protects only the subscriber's subscription list and not the
nametable. The call to tipc_nametbl_subscribe() is outside
the lock. However, at subscription timeout and cancel both
the subscriber's subscription list and the nametable are
protected by the subscriber lock.

This asymmetric locking mechanism leads to the following problem:
In an SMP system, the timer can fire on another core before
the create request is complete.
When the timer thread calls tipc_nametbl_unsubscribe() before the create
thread calls tipc_nametbl_subscribe(), we get a NULL pointer dereference.

This can be simulated by creating a subscription with timeout=0;
sometimes the timeout occurs before the create request is complete.

The following is the oops:
[57.569661] BUG: unable to handle kernel NULL pointer dereference at (null)
[57.577498] IP: [<ffffffffa02135aa>] tipc_nametbl_unsubscribe+0x8a/0x120 [tipc]
[57.584820] PGD 0
[57.586834] Oops: 0002 [#1] SMP
[57.685506] CPU: 14 PID: 10077 Comm: kworker/u40:1 Tainted: P OENX 3.12.48-52.27.1.9688.1.PTF-default #1
[57.703637] Workqueue: tipc_rcv tipc_recv_work [tipc]
[57.708697] task: ffff88064c7f00c0 ti: ffff880629ef4000 task.ti: ffff880629ef4000
[57.716181] RIP: 0010:[<ffffffffa02135aa>]  [<ffffffffa02135aa>] tipc_nametbl_unsubscribe+0x8a/0x120 [tipc]
[...]
[57.812327] Call Trace:
[57.814806]  [<ffffffffa0211c77>] tipc_subscrp_delete+0x37/0x90 [tipc]
[57.821357]  [<ffffffffa0211e2f>] tipc_subscrp_timeout+0x3f/0x70 [tipc]
[57.827982]  [<ffffffff810618c1>] call_timer_fn+0x31/0x100
[57.833490]  [<ffffffff81062709>] run_timer_softirq+0x1f9/0x2b0
[57.839414]  [<ffffffff8105a795>] __do_softirq+0xe5/0x230
[57.844827]  [<ffffffff81520d1c>] call_softirq+0x1c/0x30
[57.850150]  [<ffffffff81004665>] do_softirq+0x55/0x90
[57.855285]  [<ffffffff8105aa35>] irq_exit+0x95/0xa0
[57.860290]  [<ffffffff815215b5>] smp_apic_timer_interrupt+0x45/0x60
[57.866644]  [<ffffffff8152005d>] apic_timer_interrupt+0x6d/0x80
[57.872686]  [<ffffffffa02121c5>] tipc_subscrb_rcv_cb+0x2a5/0x3f0 [tipc]
[57.879425]  [<ffffffffa021c65f>] tipc_receive_from_sock+0x9f/0x100 [tipc]
[57.886324]  [<ffffffffa021c826>] tipc_recv_work+0x26/0x60 [tipc]
[57.892463]  [<ffffffff8106fb22>] process_one_work+0x172/0x420
[57.898309]  [<ffffffff8107079a>] worker_thread+0x11a/0x3c0
[57.903871]  [<ffffffff81077114>] kthread+0xb4/0xc0
[57.908751]  [<ffffffff8151f318>] ret_from_fork+0x58/0x90

In this commit, we do the following at subscription creation:
1. set the subscription's subscriber pointer before performing
   tipc_nametbl_subscribe(), as this value is required further in
   the call chain, e.g. by tipc_subscrp_send_event().
2. move tipc_nametbl_subscribe() under the scope of the subscriber lock

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: fix connection abort when receiving invalid cancel request
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:13 +0000 (10:52 +0100)]
tipc: fix connection abort when receiving invalid cancel request

Until now, the subscriber's endianness for a subscription
create/cancel request is determined as:
    swap = !(s->filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE))
The checks are performed only for port/service subscriptions.

The swap calculation is incorrect if the filter in the subscription
cancellation request is set to TIPC_SUB_CANCEL (it's a malformed
cancel request, as the corresponding subscription create filter
is missing).
Thus, the check whether the request is a cancellation fails and the
request is treated as a subscription create request. The
subscription creation fails as the request is illegal, which
terminates this connection.

In this commit we determine the endianness by including
TIPC_SUB_CANCEL in the filter mask, which sets swap correctly, and the
request is processed as a cancellation request.
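
As a sketch consistent with the description, the mask gains one bit:

    /* before */
    swap = !(s->filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE));

    /* after: a valid cancel request no longer looks endian-swapped */
    swap = !(s->filter & (TIPC_SUB_PORTS | TIPC_SUB_SERVICE |
                          TIPC_SUB_CANCEL));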

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: fix connection abort during subscription cancellation
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:12 +0000 (10:52 +0100)]
tipc: fix connection abort during subscription cancellation

In 'commit 7fe8097cef5f ("tipc: fix nullpointer bug when subscribing
to events")', we terminate the connection if the subscription
creation fails.
In the same commit, the subscription creation result was based on
the value of the subscription pointer (set in the function) instead of
the return code.

Unfortunately, the same function also handles subscription
cancellation requests. For a subscription cancellation request,
the subscription pointer cannot be set. Thus the connection is
terminated during a cancellation request.

In this commit, we move the subscription cancel check outside
of tipc_subscrp_create(). Hence,
- tipc_subscrp_create() will create a subscription
- tipc_subscrb_rcv_cb() will subscribe or cancel a subscription.

Fixes: 7fe8097cef5f ("tipc: fix nullpointer bug when subscribing to events")

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: introduce tipc_subscrb_subscribe() routine
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:11 +0000 (10:52 +0100)]
tipc: introduce tipc_subscrb_subscribe() routine

In this commit, we split tipc_subscrp_create() into two:
1. tipc_subscrp_create() creates a subscription
2. A new function tipc_subscrb_subscribe() adds the
   subscription to the subscriber's subscription list,
   activates the subscription timer and subscribes to
   nametable updates.

In future commits, the purpose of tipc_subscrb_rcv_cb() will
be to either subscribe or cancel a subscription.

There is no functional change in this commit.

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: remove struct tipc_name_seq from struct tipc_subscription
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:10 +0000 (10:52 +0100)]
tipc: remove struct tipc_name_seq from struct tipc_subscription

Until now, struct tipc_subscription has duplicate fields for
type, upper and lower (as members of struct tipc_name_seq) at:
1. as member seq in struct tipc_subscription
2. as member seq in struct tipc_subscr, which is contained
   in struct tipc_event
The former structure contains the type, upper and lower
values in network byte order and the latter contains the
intact copy of the request.
The struct tipc_subscription contains a field swap to
determine if the request needs network byte order conversion.
Thus by using swap, we can convert the request when
required instead of duplicating it.

In this commit,
1. we remove the references to these elements as members of
   struct tipc_subscription and replace them with elements
   from struct tipc_subscr.
2. provide new functions to convert the user request into
   network byte order.

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years ago tipc: remove filter and timeout elements from struct tipc_subscription
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:09 +0000 (10:52 +0100)]
tipc: remove filter and timeout elements from struct tipc_subscription

Until now, struct tipc_subscription has duplicate timeout and filter
attributes present:
1. directly as members of struct tipc_subscription
2. in struct tipc_subscr, which is contained in struct tipc_event

In this commit, we remove the references to these elements as
members of struct tipc_subscription and replace them with elements
from struct tipc_subscr.

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: remove incorrect check for subscription timeout value
Parthasarathy Bhuvaragan [Tue, 2 Feb 2016 09:52:08 +0000 (10:52 +0100)]
tipc: remove incorrect check for subscription timeout value

Until now, during subscription creation we set sub->timeout by
converting the timeout request value in milliseconds to jiffies.
This is followed by setting the timeout value in the timer if
sub->timeout != TIPC_WAIT_FOREVER.

For a subscription create request with a timeout value of
TIPC_WAIT_FOREVER, msecs_to_jiffies(TIPC_WAIT_FOREVER)
returns MAX_JIFFY_OFFSET (0xfffffffe). This is not equal to
TIPC_WAIT_FOREVER (0xffffffff).

In this commit, we remove this check, since the intended
TIPC_WAIT_FOREVER case can never match (illustrated below).
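
An illustrative reconstruction of the dead check; the timer-arming
detail is assumed:

  sub->timeout = msecs_to_jiffies(timeout);  /* TIPC_WAIT_FOREVER (0xffffffff)
                                              * becomes MAX_JIFFY_OFFSET (0xfffffffe) */
  if (sub->timeout != TIPC_WAIT_FOREVER)     /* always true, so always armed */
          mod_timer(&sub->timer, jiffies + sub->timeout);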

Acked-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobonding: add slave device name for debug
Zhang Shengju [Tue, 2 Feb 2016 08:32:55 +0000 (08:32 +0000)]
bonding: add slave device name for debug

netdev_dbg() already adds the bond device name; it is helpful to
also print the slave device name.
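
An illustrative call site (the message text is an assumption;
netdev_dbg() and the bonding fields shown are real):

  /* message now identifies both the bond and the affected slave */
  netdev_dbg(bond->dev, "enslaving %s\n", slave->dev->name);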

Signed-off-by: Zhang Shengju <zhangshengju@cmss.chinamobile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobgmac: add helper checking for BCM4707 / BCM53018 chip id
Rafał Miłecki [Tue, 2 Feb 2016 06:47:14 +0000 (07:47 +0100)]
bgmac: add helper checking for BCM4707 / BCM53018 chip id

Chipsets with a BCM4707 / BCM53018 ID require special handling in a
few places in the code, and more IDs will likely need checking in the
future. To simplify this, add a trivial helper (sketched below).
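
A sketch of such a helper, using the two chip IDs named above; the
helper name and the exact ID list are assumptions:

  static bool bgmac_is_bcm4707_family(struct bgmac *bgmac)
  {
          switch (bgmac->core->bus->chipinfo.id) {
          case BCMA_CHIP_ID_BCM4707:
          case BCMA_CHIP_ID_BCM53018:
                  return true;   /* needs the special handling */
          default:
                  return false;
          }
  }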

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'bpf-per-cpu-maps'
David S. Miller [Sat, 6 Feb 2016 08:34:46 +0000 (03:34 -0500)]
Merge branch 'bpf-per-cpu-maps'

Alexei Starovoitov says:

====================
bpf: introduce per-cpu maps

We've started to use bpf to trace every packet, and the atomic add
instruction (even when JITed) started to show up in perf profiles.
The solution is to use per-cpu counters.
For PERCPU_(HASH|ARRAY) maps the existing bpf_map_lookup() helper
returns a per-cpu area which bpf programs can use to store and
increment the counters. The BPF_MAP_LOOKUP_ELEM syscall command
returns the areas from all cpus, and the user process aggregates
the counters. The usage example is in patch 6. The api turned out
to be very easy to use from bpf programs and from user space.
Long term, we have been discussing adding a 'bounded loop'
instruction, so bpf programs can do the aggregation within the
program, which may help some use cases. Right now, user-space
aggregation of per-cpu counters fits best.

This patch set is a new approach to per-cpu hash and array maps.
I've reused the map tests written by Martin and Ming, but the
implementation and api are new. Old discussion here:
http://thread.gmane.org/gmane.linux.kernel/2123800/focus=2126435
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosamples/bpf: update tracex[23] examples to use per-cpu maps
Alexei Starovoitov [Tue, 2 Feb 2016 06:39:58 +0000 (22:39 -0800)]
samples/bpf: update tracex[23] examples to use per-cpu maps

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosamples/bpf: unit test for BPF_MAP_TYPE_PERCPU_ARRAY
tom.leiming@gmail.com [Tue, 2 Feb 2016 06:39:57 +0000 (22:39 -0800)]
samples/bpf: unit test for BPF_MAP_TYPE_PERCPU_ARRAY

A sanity test for BPF_MAP_TYPE_PERCPU_ARRAY

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosamples/bpf: unit test for BPF_MAP_TYPE_PERCPU_HASH
Martin KaFai Lau [Tue, 2 Feb 2016 06:39:56 +0000 (22:39 -0800)]
samples/bpf: unit test for BPF_MAP_TYPE_PERCPU_HASH

A sanity test for BPF_MAP_TYPE_PERCPU_HASH.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf: add lookup/update support for per-cpu hash and array maps
Alexei Starovoitov [Tue, 2 Feb 2016 06:39:55 +0000 (22:39 -0800)]
bpf: add lookup/update support for per-cpu hash and array maps

The functions bpf_map_lookup_elem(map, key, value) and
bpf_map_update_elem(map, key, value, flags) need to get/set
values from all cpus for per-cpu hash and array maps,
so that user space can aggregate/update them as necessary.

Example of single counter aggregation in user space:
  unsigned int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
  long values[nr_cpus];
  long value = 0;
  unsigned int i;

  bpf_lookup_elem(fd, key, values);
  for (i = 0; i < nr_cpus; i++)
    value += values[i];

User space must provide an array of round_up(value_size, 8) *
nr_cpus bytes to get/set values (see the sizing sketch below),
since the kernel uses a 'long' copy of the per-cpu values to try
to copy the counters atomically. This is best-effort, since bpf
programs and user space race to access the same memory.
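
A hedged user-space sketch of that sizing rule; fd, key and
value_size are assumed to come from the surrounding program:

  size_t slot = (value_size + 7) & ~(size_t)7;  /* round_up(value_size, 8) */
  void *values = calloc(nr_cpus, slot);         /* one slot per possible cpu */

  if (values)
          bpf_lookup_elem(fd, key, values);     /* kernel fills every cpu's slot */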

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map
Alexei Starovoitov [Tue, 2 Feb 2016 06:39:54 +0000 (22:39 -0800)]
bpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map

The primary use case is a latency histogram, where a bpf program
computes the latency of block requests or other events and stores a
histogram of latencies into an array of 64 elements (a sketch
follows). All cpus are constantly running, so a normal increment is
not accurate, and bpf_xadd causes cache ping-pong; this per-cpu
approach allows the fastest collision-free counters.
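
An illustrative bpf program fragment in the spirit of the tracex3
sample; lat_map, the 64-slot sizing and the log2l() helper are
assumptions:

  u32 slot = log2l(latency_ns);   /* bucket by latency magnitude */
  if (slot >= 64)
          slot = 63;              /* clamp into the 64-element array */
  long *cnt = bpf_map_lookup_elem(&lat_map, &slot);
  if (cnt)
          (*cnt)++;               /* this_cpu slot: no cross-cpu races */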

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf: introduce BPF_MAP_TYPE_PERCPU_HASH map
Alexei Starovoitov [Tue, 2 Feb 2016 06:39:53 +0000 (22:39 -0800)]
bpf: introduce BPF_MAP_TYPE_PERCPU_HASH map

Introduce the BPF_MAP_TYPE_PERCPU_HASH map type, which is used to
keep accurate counters without the BPF_XADD instruction, which
turned out to be too costly for high-performance network monitoring.
In the typical use case the 'key' is a flow tuple or another
long-lived object that sees a lot of events per second.

bpf_map_lookup_elem() returns a per-cpu area. Example:
struct {
  u32 packets;
  u32 bytes;
} *ptr = bpf_map_lookup_elem(&map, &key);

if (ptr) {
  /* ptr points to the this_cpu area of the value, so the
   * following increments will not collide with other cpus
   */
  ptr->packets++;
  ptr->bytes += skb->len;
}

bpf_update_elem() atomically creates a new element where all
per-cpu values are zero initialized and the this_cpu value is
populated with the given 'value'.
Note that a non-per-cpu hash map always allocates a new element
and then deletes the old one after an rcu grace period to maintain
the atomicity of the update. A per-cpu hash map updates element
values in place.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoethtool: Declare netdev_rss_key as __read_mostly.
Kim Jones [Tue, 2 Feb 2016 03:51:16 +0000 (03:51 +0000)]
ethtool: Declare netdev_rss_key as __read_mostly.

netdev_rss_key is written to once and thereafter is read by
drivers when they are initialising. The fact that it is mostly
read and not written to makes it a candidate for a __read_mostly
declaration.
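
The change itself should amount to a one-line annotation on the
definition (reconstructed from the description; __read_mostly and
NETDEV_RSS_KEY_LEN are existing kernel symbols):

  u8 netdev_rss_key[NETDEV_RSS_KEY_LEN] __read_mostly;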

Signed-off-by: Kim Jones <kim-marie.jones@intel.com>
Signed-off-by: Alan Carey <alan.carey@intel.com>
Acked-by: Rami Rosen <rami.rosen@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'tcp_fast_open_synack_fin'
David S. Miller [Sat, 6 Feb 2016 08:12:10 +0000 (03:12 -0500)]
Merge branch 'tcp_fast_open_synack_fin'

Eric Dumazet says:

====================
tcp: fastopen: accept data/FIN present in SYNACK

Implements RFC 7413 (TCP Fast Open) 4.2.2: accept payload and/or FIN
in SYNACK messages, and prepare the removal of the SYN flag test in
tcp_recvmsg()
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotcp: do not enqueue skb with SYN flag
Eric Dumazet [Tue, 2 Feb 2016 05:03:08 +0000 (21:03 -0800)]
tcp: do not enqueue skb with SYN flag

If we remove the SYN flag from the skbs that tcp_fastopen_add_skb()
places in the socket receive queue, then we can remove the test that
tcp_recvmsg() has to perform in the fast path.

All we have to do is adjust SEQ in the slow path, as sketched below.

For the moment, we place an unlikely() and output a message
if we find an skb with the SYN flag set.
The goal is to get rid of the test completely.
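
A sketch of the slow-path adjustment, reconstructed from the
description (the exact message text is assumed):

  offset = *seq - TCP_SKB_CB(skb)->seq;
  if (unlikely(TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN)) {
          pr_err_once("%s: found a SYN, please report !\n", __func__);
          offset--;       /* the SYN consumes one sequence number */
  }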

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotcp: fastopen: accept data/FIN present in SYNACK message
Eric Dumazet [Tue, 2 Feb 2016 05:03:07 +0000 (21:03 -0800)]
tcp: fastopen: accept data/FIN present in SYNACK message

RFC 7413 (TCP Fast Open) 4.2.2 states that the SYNACK message
MAY include data and/or a FIN.

This patch adds support for the client side:

If we receive a SYNACK with payload or FIN, queue the skb instead
of ignoring it.

Since we already support the same for SYN, we refactor the existing
code and reuse it. Note we need to clone the skb, so this operation
might fail under memory pressure.

Sara Dickinson pointed out that the FreeBSD server Fast Open
implementation was planned to generate such SYNACKs in the future.

The server side might be implemented on Linux later.

Reported-by: Sara Dickinson <sara@sinodun.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'rx_nohandler'
David S. Miller [Sat, 6 Feb 2016 07:59:58 +0000 (02:59 -0500)]
Merge branch 'rx_nohandler'

Jarod Wilson says:

====================
net: add and use rx_nohandler stat counter

The network core tries to keep track of dropped packets, but some
packets you wouldn't really call dropped, so much as intentionally
ignored under certain circumstances. One such case is that of bonding
and team device slaves that are currently inactive. Their respective
rx_handler functions return RX_HANDLER_EXACT (the only places in the
kernel that return it), which ends up in the network core's
__netif_receive_skb_core() drop path with no pt_prev set. On a noisy
network, this can result in a very rapidly incrementing rx_dropped
counter, not only on the inactive slave(s) but also on the master
device, such as the following:

$ cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  p7p1: 14783346  140430    0 140428    0     0          0      2040      680       8    0    0    0     0       0          0
  p7p2: 14805198  140648    0    0    0     0          0      2034        0       0    0    0    0     0       0          0
 bond0: 53365248  532798    0 421160    0     0          0    115151     2040      24    0    0    0     0       0          0
    lo:    5420      54    0    0    0     0          0         0     5420      54    0    0    0     0       0          0
  p5p1: 19292195  196197    0 140368    0     0          0     56564      680       8    0    0    0     0       0          0
  p5p2: 19289707  196171    0 140364    0     0          0     56547      680       8    0    0    0     0       0          0
   em3: 20996626  158214    0    0    0     0          0       383        0       0    0    0    0     0       0          0
   em2: 14065122  138462    0    0    0     0          0       310        0       0    0    0    0     0       0          0
   em1: 14063162  138440    0    0    0     0          0       308        0       0    0    0    0     0       0          0
   em4: 21050830  158729    0    0    0     0          0       385    71662     469    0    0    0     0       0          0
   ib0:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0

In this scenario, p5p1, p5p2 and p7p1 are all inactive slaves in an
active-backup bond0, and you can see that all three have high drop counts,
with the master bond0 showing a tally of all three.

I know that this was previously discussed some here:

    http://www.spinics.net/lists/netdev/msg226341.html

It seems the additional counters never came to fruition, so this is
a first attempt at creating one of them, so that we stop calling
these packets drops; for users monitoring rx_dropped, the current
behavior causes great alarm and renders the counter much less useful.

This adds a sysfs statistics node and makes the counter available via
netlink.
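
A hedged sketch of the counting split in __netif_receive_skb_core()'s
drop path; deliver_exact follows the RX_HANDLER_EXACT description
above, and the surrounding details are assumed:

  if (!deliver_exact)
          atomic_long_inc(&skb->dev->rx_dropped);    /* a genuine drop */
  else
          atomic_long_inc(&skb->dev->rx_nohandler);  /* intentionally ignored */
  kfree_skb(skb);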

Additionally, I'm not certain if this set qualifies for net, or if it
should be put aside and resubmitted for net-next after 4.5 is put to
bed, but I do have users who consider this an important bugfix.

This has been tested quite a bit on x86_64, and now lightly on i686 as
well, to verify functionality of updates to netdev_stats_to_stats64()
on 32-bit arches.
====================

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobond: track sum of rx_nohandler for all slaves
Jarod Wilson [Mon, 1 Feb 2016 23:51:07 +0000 (18:51 -0500)]
bond: track sum of rx_nohandler for all slaves

Sample output with this set applied for an active-backup bond:

$ cat /sys/devices/virtual/net/bond0/lower_p7p1/statistics/rx_nohandler
16568
$ cat /sys/devices/virtual/net/bond0/lower_p5p2/statistics/rx_nohandler
16583
$ cat /sys/devices/virtual/net/bond0/statistics/rx_nohandler
33151
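
A minimal sketch of folding slave counters into the master's stats;
variable names are assumed, and the actual patch accumulates deltas
against previously-seen slave stats rather than raw sums:

  bond_for_each_slave(bond, slave, iter) {
          struct rtnl_link_stats64 temp;
          const struct rtnl_link_stats64 *sstats =
                  dev_get_stats(slave->dev, &temp);

          stats->rx_nohandler += sstats->rx_nohandler;
  }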

CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <gospo@cumulusnetworks.com>
CC: netdev@vger.kernel.org
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>