David S. Miller [Thu, 22 Sep 2016 05:40:07 +0000 (01:40 -0400)]
Merge branch 'cxgb4-tc-offload'
Rahul Lakkireddy says:
====================
cxgb4: add support for offloading TC u32 filters
This series of patches adds support for offloading TC u32 filters onto
Chelsio NICs.
Patch 1 moves the existing common filter code to separate files
in order to provide a common API for performing packet classification
and filtering on Chelsio NICs.
Patch 2 enables filters for normal NIC configuration and implements
a common API for setting and deleting filters.
Patches 3-5 add support for TC u32 offload via ndo_setup_tc.
---
v3:
Based on review and suggestion from David Miller <davem@davemloft.net>
- Fixed all local variable declarations by ordering them longest line
first, shortest line last.
v2:
Based on review and suggestions from Jiri Pirko <jiri@resnulli.us>:
- Replaced macros S and U with appropriate static helper functions.
- Moved completion code for set and delete filters to the respective
functions cxgb4_set_filter() and cxgb4_del_filter(). Renamed the
original functions to __cxgb4_set_filter() and __cxgb4_del_filter()
for cases where synchronization is not required.
- Dropped debugfs patch.
- Merged code for inserting and deleting u32 filters into a single
patch.
- Reworked and fixed bugs with traversing the actions list.
- Removed all unnecessary extra ().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 20 Sep 2016 11:43:10 +0000 (17:13 +0530)]
cxgb4: add support for drop and redirect actions
Add support for dropping matched packets in hardware. Also add support
for re-directing matched packets to a specified port in hardware.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 20 Sep 2016 11:43:09 +0000 (17:13 +0530)]
cxgb4: add support for offloading u32 filters
Add support for offloading u32 filters onto hardware. Links are stored
in a jump table to perform the necessary jumps to match TCP/UDP headers.
When inserting rules in a linked bucket, the TCP/UDP match fields
in the corresponding entry of the jump table are appended to the filter
rule before insertion. If a link is deleted, then all filters
associated with the link are also deleted. Also enable
hardware tc offload as a supported feature.
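For illustration, the per-link bookkeeping could look roughly like this
(a sketch; field names are assumptions, not necessarily the patch's):
struct cxgb4_link {
	const struct cxgb4_match_field *match_field; /* TCP/UDP fields to append */
	u32 link_handle;                             /* handle of the u32 link */
	unsigned long *tid_map;     /* tids of filters inserted via this link */
};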
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 20 Sep 2016 11:43:08 +0000 (17:13 +0530)]
cxgb4: add parser to translate u32 filters to internal spec
Parse information sent by u32 into the internal filter specification.
Add support for parsing several fields in IPv4, IPv6, TCP, and UDP.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 20 Sep 2016 11:43:07 +0000 (17:13 +0530)]
cxgb4: add common api support for configuring filters
Enable filters for non-offload configuration and add common API support
for setting and deleting filters in the LE-TCAM region of the hardware.
IPv4 filters occupy one slot. IPv6 filters occupy 4 slots and must
be on a 4-slot boundary. IPv4 filters cannot occupy a slot belonging
to IPv6, and vice-versa.
Filters are set and deleted asynchronously. Use a completion to wait
for the reply from firmware in order to allow for synchronization if needed.
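A minimal sketch of that completion pattern (illustrative; exact
signatures and fields are assumptions):
struct filter_ctx {
	struct completion completion;	/* completed by the fw reply handler */
	int result;			/* filter status returned by firmware */
};

static int set_filter_sync_sketch(struct net_device *dev, int filter_id,
				  struct ch_filter_specification *fs)
{
	struct filter_ctx ctx;
	int ret;

	init_completion(&ctx.completion);
	ret = __cxgb4_set_filter(dev, filter_id, fs, &ctx);	/* async */
	if (ret)
		return ret;
	/* Wait for the firmware reply before returning to the caller. */
	if (!wait_for_completion_timeout(&ctx.completion, 10 * HZ))
		return -ETIMEDOUT;
	return ctx.result;
}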
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Tue, 20 Sep 2016 11:43:06 +0000 (17:13 +0530)]
cxgb4: move common filter code to separate file
Move common filter code to separate files. Also fix the following
checkpatch checks.
CHECK: Comparison to NULL could be written "!f->l2t"
+ if (f->l2t == NULL) {
CHECK: spaces preferred around that '/' (ctx:VxV)
+ fwr->len16_pkd = htonl(FW_WR_LEN16_V(sizeof(*fwr)/16));
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shmulik Ladkani [Tue, 20 Sep 2016 09:48:37 +0000 (12:48 +0300)]
net: skbuff: Coding: Use eth_type_vlan() instead of open coding it
Fix 'skb_vlan_pop' to use eth_type_vlan instead of directly comparing
skb->protocol to ETH_P_8021Q or ETH_P_8021AD.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Reviewed-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shmulik Ladkani [Tue, 20 Sep 2016 09:48:36 +0000 (12:48 +0300)]
net: skbuff: Remove erroneous length validation in skb_vlan_pop()
In commit 93515d53b1 ("net: move vlan pop/push functions into common
code"), skb_vlan_pop was moved from its private location in openvswitch
to skbuff common code.
In case the skb has a non hw-accel vlan tag, the original 'pop_vlan()'
assured that skb->len is sufficient (if skb->len < VLAN_ETH_HLEN then
pop was considered a no-op).
This validation was moved as is into the new common 'skb_vlan_pop'.
Alas, in its original location (openvswitch), there was a guarantee that
'data' points to the mac_header, therefore the 'skb->len < VLAN_ETH_HLEN'
condition made sense.
However there's no such guarantee in the generic 'skb_vlan_pop'.
For short packets received in the rx path that go through
'skb_vlan_pop', this causes 'skb_vlan_pop' to fail to pop a valid vlan
hdr (in the non hw-accel case) or to fail to move the next tag into the
hw-accel tag.
Remove the 'skb->len < VLAN_ETH_HLEN' condition entirely:
It is superfluous since inner '__skb_vlan_pop' already verifies there
are VLAN_ETH_HLEN writable bytes at the mac_header.
Note this presents a slight change to skb_vlan_pop() users:
In case total length is smaller than VLAN_ETH_HLEN, skb_vlan_pop() now
returns an error, as opposed to previous "no-op" behavior.
Existing callers (e.g. tc act vlan, ovs) usually drop the packet if
'skb_vlan_pop' fails.
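In caller terms, the expected handling is now along these lines
(illustrative):
err = skb_vlan_pop(skb);
if (err)	/* includes total length < VLAN_ETH_HLEN, formerly a no-op */
	goto drop;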
Fixes: 93515d53b1 ("net: move vlan pop/push functions into common code")
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Cc: Pravin Shelar <pshelar@ovn.org>
Reviewed-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 22 Sep 2016 05:34:26 +0000 (01:34 -0400)]
Merge branch 'vlan_act_modify'
Shmulik Ladkani says:
====================
act_vlan: Introduce TCA_VLAN_ACT_MODIFY vlan action
TCA_VLAN_ACT_MODIFY allows one to change an existing tag.
It accepts same attributes as TCA_VLAN_ACT_PUSH (protocol, id,
priority).
If the packet is vlan tagged, then the tag gets overwritten according to
the user-specified attributes.
For example, this allows the user to replace a tag's vid while preserving
its priority bits (as opposed to "action vlan pop pipe action vlan push").
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Shmulik Ladkani [Mon, 19 Sep 2016 16:11:10 +0000 (19:11 +0300)]
net/sched: act_vlan: Introduce TCA_VLAN_ACT_MODIFY vlan action
TCA_VLAN_ACT_MODIFY allows one to change an existing tag.
It accepts same attributes as TCA_VLAN_ACT_PUSH (protocol, id,
priority).
If the packet is vlan tagged, then the tag gets overwritten according to
the user-specified attributes.
For example, this allows the user to replace a tag's vid while preserving
its priority bits (as opposed to "action vlan pop pipe action vlan push").
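A rough sketch of the MODIFY datapath as described (illustrative only;
push_vid/push_prio/push_proto stand in for the action's configured
attributes):
u16 tci;

if (skb_vlan_tag_present(skb)) {
	tci = skb_vlan_tag_get(skb);	/* existing hw-accel tag */
	skb->vlan_tci = 0;
} else {
	if (__skb_vlan_pop(skb, &tci))	/* existing in-payload tag */
		goto drop;
}
tci = (tci & ~VLAN_VID_MASK) | push_vid;	/* replace the vid */
if (push_prio)					/* optionally replace prio */
	tci = (tci & ~VLAN_PRIO_MASK) | (push_prio << VLAN_PRIO_SHIFT);
__vlan_hwaccel_put_tag(skb, push_proto, tci);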
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shmulik Ladkani [Mon, 19 Sep 2016 16:11:09 +0000 (19:11 +0300)]
net: skbuff: Export __skb_vlan_pop
This exports the functionality of extracting the tag from the payload,
without moving the next vlan tag into the hw-accel tag.
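For reference, the newly exported helper's signature (as I read the
patch):
int __skb_vlan_pop(struct sk_buff *skb, u16 *vlan_tci);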
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 22 Sep 2016 01:52:51 +0000 (21:52 -0400)]
Merge branch 'mlx4-next'
Tariq Toukan says:
====================
mlx4 misc cleanups and improvements
This patchset contains some cleanups and improvements from the team
to the mlx4 Eth and core drivers.
Series generated against net-next commit:
5a7a5555a362 'net sched: stylistic cleanups'
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jack Morgenstein [Tue, 20 Sep 2016 11:39:42 +0000 (14:39 +0300)]
net/mlx4_core: Fix deadlock when switching between polling and event fw commands
When switching from polling-based fw commands to event-based fw
commands, there is a race condition which could cause a fw command
in another task to hang: that task will keep waiting for the polling
semaphore, but may never be able to acquire it. This is due to
mlx4_cmd_use_events, which "down"s the semaphore back to 0.
During driver initialization, this is not a problem, since no other
tasks which invoke FW commands are active.
However, there is a problem if the driver switches to polling mode
and then back to event mode during normal operation.
The "test_interrupts" feature does exactly that.
Running "ethtool -t <eth device> offline" causes the PF driver to
temporarily switch to polling mode, and then back to event mode.
(Note that for VF drivers, such switching is not performed).
Fix this by adding a read-write semaphore for protection when
switching between modes.
Fixes: 225c7b1feef1 ("IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Leon Romanovsky [Tue, 20 Sep 2016 11:39:41 +0000 (14:39 +0300)]
net/mlx4_core: Use RCU to perform radix tree lookup for SRQ
Radix tree lookup can be performed without locking.
Fixes: 225c7b1feef1 ("IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Suggested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Kamal Heib [Tue, 20 Sep 2016 11:39:40 +0000 (14:39 +0300)]
net/mlx4_en: Fix wrong indentation
Use tabs instead of spaces before if statement, no functional change.
Fixes: e7c1c2c46201 ("mlx4_en: Added self diagnostics test implementation")
Signed-off-by: Kamal Heib <kamalh@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Tue, 20 Sep 2016 11:39:39 +0000 (14:39 +0300)]
net/mlx4_en: Add branch prediction hints in RX data-path
Add likely/unlikely hints to improve branch predictions
in the RX data-path.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 21 Sep 2016 23:50:15 +0000 (19:50 -0400)]
Merge branch 'bpf-hw-offload'
Jakub Kicinski says:
====================
BPF hardware offload (cls_bpf for now)
Rebased and improved.
v7:
- fix patch 4.
v6 (patch 8 only):
- explicitly check for registers >= MAX_BPF_REG;
- fix leaky error path.
v5:
- fix names of guard defines in bpf_verifier.h.
v4:
- rename parser -> analyzer;
- reorganize the analyzer patches a bit;
- use bitfield.h directly.
--- merge blurb:
In the last year a lot of progress has been made on offloading
simpler TC classifiers. There is also growing interest in using
BPF for generic high-speed packet processing in the kernel.
It seems beneficial to tie those two trends together and think
about hardware offloads of BPF programs. This patch set presents
such offload to Netronome smart NICs. cls_bpf is extended with
hardware offload capabilities and NFP driver gets a JIT translator
which in presence of capable firmware can be used to offload
the BPF program onto the card.
BPF JIT implementation is not 100% complete (e.g. missing instructions)
but it is functional. Encouragingly it should be possible to
offload most (if not all) advanced BPF features onto the NIC -
including packet modification, maps, tunnel encap/decap etc.
Example of basic tests I used:
__section_cls_entry
int cls_entry(struct __sk_buff *skb)
{
if (load_byte(skb, 0) != 0x0)
return 0;
if (load_byte(skb, 4) != 0x1)
return 0;
skb->mark = 0xcafe;
if (load_byte(skb, 50) != 0xff)
return 0;
return ~0U;
}
The above code can be compiled with Clang and loaded like this:
ethtool -K p1p1 hw-tc-offload on
tc qdisc add dev p1p1 ingress
tc filter add dev p1p1 parent ffff: bpf obj prog.o action drop
This set implements the basic transparent offload, the skip_{sw,hw}
flags and reporting statistics for cls_bpf.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:07 +0000 (11:44 +0100)]
nfp: bpf: add offload of TC direct action mode
Add offload of TC in direct action mode. We just need
to provide appropriate checks in the verifier and
a new outro block to translate the exit codes to what
the data path expects.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:06 +0000 (11:44 +0100)]
nfp: bpf: add support for legacy redirect action
The data path has redirect support, so expressing a redirect
to the port the frame came from is a trivial matter of
setting the right result code.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:05 +0000 (11:44 +0100)]
net: act_mirred: allow statistic updates from offloaded actions
Implement .stats_update() callback. The implementation
is generic and can be reused by other simple actions if
needed.
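A sketch of what such a generic callback looks like (illustrative;
exact names per my reading of the patch):
static void tcf_stats_update(struct tc_action *a, u64 bytes, u32 packets,
			     u64 lastuse)
{
	struct tcf_mirred *m = to_mirred(a);

	/* Fold the hardware-reported counters into the action's stats. */
	_bstats_cpu_update(this_cpu_ptr(a->cpu_bstats), bytes, packets);
	m->tcf_tm.lastuse = lastuse;
}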
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:04 +0000 (11:44 +0100)]
nfp: bpf: add packet marking support
Add missing ABI defines and eBPF instructions to allow
the mark to be passed on, and extend prepend parsing on the
RX path to pick it up from packet metadata.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:03 +0000 (11:44 +0100)]
nfp: bpf: allow offloaded filters to update stats
Periodically poll stats and call into offloaded actions
to update them.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:02 +0000 (11:44 +0100)]
net: cls_bpf: allow offloaded filters to update stats
Call into offloaded filters to update stats.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:01 +0000 (11:44 +0100)]
nfp: bpf: add hardware bpf offload
Add hardware bpf offload on our smart NICs. Detect if
capable firmware is loaded and use it to load the code JITed
with the just-added translator onto the programmable engines.
This commit only supports offloading cls_bpf in legacy mode
(non-direct action).
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:44:00 +0000 (11:44 +0100)]
nfp: add BPF to NFP code translator
Add a translator for JITing eBPF to operations which
can be executed on the NFP's programmable engines.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:59 +0000 (11:43 +0100)]
bpf: recognize 64bit immediate loads as consts
When running as a parser, interpret BPF_LD | BPF_IMM | BPF_DW
instructions as loading CONST_IMM with the value stored
in imm. The verifier will continue not to recognize those,
due to concerns about search space/program complexity
increase.
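The recognition is roughly along these lines (a sketch, not
necessarily the patch's exact code):
/* BPF_LD | BPF_IMM | BPF_DW spans two insns; src_reg == 0 means it is
 * a plain 64-bit immediate load, not a map fd pseudo instruction.
 */
if (insn->src_reg == 0) {
	u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;

	regs[insn->dst_reg].type = CONST_IMM;
	regs[insn->dst_reg].imm = imm;
}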
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:58 +0000 (11:43 +0100)]
bpf: enable non-core use of the verifier
Advanced JIT compilers and translators may want to use the
eBPF verifier as a base for parsers or to perform custom
checks and validations.
Add the ability for external users to invoke the verifier
and provide callbacks to be invoked for every instruction
checked. For now, only the most basic callback, for
per-instruction pre-interpretation checks, is added. More
advanced users may also like to have a per-instruction post
callback and a state comparison callback.
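The external entry point and callback structure look roughly like this
(per my reading; treat the details as assumptions):
struct bpf_ext_analyzer_ops {
	/* invoked before each instruction is pre-interpreted */
	int (*insn_hook)(struct bpf_verifier_env *env,
			 int insn_idx, int prev_insn_idx);
};

int bpf_analyzer(struct bpf_prog *prog,
		 const struct bpf_ext_analyzer_ops *ops, void *priv);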
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:57 +0000 (11:43 +0100)]
bpf: expose internal verifier structures
Move verifier's internal structures to a header file and
prefix their names with bpf_ to avoid potential namespace
conflicts. Those structures will soon be used by external
analyzers.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:56 +0000 (11:43 +0100)]
bpf: don't (ab)use instructions to store state
Storing state in reserved fields of instructions makes
it impossible to run the verifier on programs already
marked as read-only. Allocate and use an array of
per-instruction state instead.
While touching the error path rename and move existing
jump target.
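Sketch of the replacement state (field and allocation flavor assumed):
struct bpf_insn_aux_data {
	enum bpf_reg_type ptr_type;	/* pointer type for load/store insns */
};

/* one aux entry per instruction, instead of scribbling on the insns */
env->insn_aux_data = vzalloc(sizeof(struct bpf_insn_aux_data) * prog->len);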
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:55 +0000 (11:43 +0100)]
net: cls_bpf: add support for marking filters as hardware-only
Add cls_bpf support for the TCA_CLS_FLAGS_SKIP_SW flag.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:54 +0000 (11:43 +0100)]
net: cls_bpf: limit hardware offload by software-only flag
Add cls_bpf support for the TCA_CLS_FLAGS_SKIP_HW flag.
Unlike U32 and flower, cls_bpf already has some netlink
flags defined. Create a new attribute to be able to use
the same flag values as the above.
Unlike U32 and flower, reject unknown flags.
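The flag handling then looks roughly like this (attribute and macro
names per my reading of the patch; treat as illustrative):
if (tb[TCA_BPF_FLAGS_GEN]) {
	gen_flags = nla_get_u32(tb[TCA_BPF_FLAGS_GEN]);
	if (gen_flags & ~CLS_BPF_SUPPORTED_GEN_FLAGS ||
	    !tc_flags_valid(gen_flags))
		return -EINVAL;	/* unlike u32/flower: reject unknown flags */
}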
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 21 Sep 2016 10:43:53 +0000 (11:43 +0100)]
net: cls_bpf: add hardware offload
This patch adds hardware offload capability to the cls_bpf classifier,
similar to what has been done with U32 and flower.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 21 Sep 2016 05:01:09 +0000 (01:01 -0400)]
Merge branch 'mlxse-resource-query'
Jiri Pirko says:
====================
mlxsw: Replace HW-related consts with resource query results
Nogah says:
Many of the ASIC's properties can be read from the HW with a resource query.
This patchset adds new resources to the resource query and implements
using them instead of the constants that we currently use.
Those resources are LAG, KVD and router related.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:57 +0000 (11:16 +0200)]
mlxsw: spectrum: Implement max rif resource
Replace the max rif const with the result from the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:56 +0000 (11:16 +0200)]
mlxsw: pci: Add max router interface resource
Add the max number of rif (router interfaces) to the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:55 +0000 (11:16 +0200)]
mlxsw: pci: Add some miscellaneous resources
Add max system ports, max regions and max vlan groups to the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:54 +0000 (11:16 +0200)]
mlxsw: spectrum: Implement max virtual routers resource
Replace max virtual routers const with the result from
the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:53 +0000 (11:16 +0200)]
mlxsw: pci: Add max virtual routers resource
Add the max number of virtual routers to the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:52 +0000 (11:16 +0200)]
mlxsw: profile: Add KVD resources to profile config
Use resources from the resource query to determine values for
the profile configuration.
Add the KVD determined section sizes to the resources struct.
Change the profile struct and values to match these changes.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:51 +0000 (11:16 +0200)]
mlxsw: pci: Add KVD size related resources
Add the KVD size, and the minimum sizes for the single and double
sections, as resources to the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:50 +0000 (11:16 +0200)]
mlxsw: spectrum: lag resources - use resource data instead of consts
Use the max lag and max ports in lag values from the resource query
result instead of storing them as consts.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nogah Frankel [Tue, 20 Sep 2016 09:16:49 +0000 (11:16 +0200)]
mlxsw: pci: Add lag related resources to resources query
Add max lag and max ports in lag resources to the resource query.
Signed-off-by: Nogah Frankel <nogahf@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Tue, 20 Sep 2016 05:14:08 +0000 (08:14 +0300)]
mlxsw: spectrum: Make offloads stats functions static
The offloads stats functions are local to this file, make them static.
Fixes: fc1bbb0f1831 ('mlxsw: spectrum: Implement offload stats ndo [..]')
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 21 Sep 2016 04:23:09 +0000 (00:23 -0400)]
Merge branch 'tcp-bbr'
Neal Cardwell says:
====================
tcp: BBR congestion control algorithm
This patch series implements a new TCP congestion control algorithm:
BBR (Bottleneck Bandwidth and RTT). A paper with a detailed
description of BBR will be published in ACM Queue, September-October
2016, as "BBR: Congestion-Based Congestion Control". BBR is widely
deployed in production at Google.
The patch series starts with a set of supporting infrastructure
changes, including a few that extend the congestion control
framework. The last patch adds BBR as a TCP congestion control
module. Please see individual patches for the details.
- v3 -> v4:
- Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
to use const to qualify all the constant parameters.
Thanks to Stephen Hemminger.
- In "tcp_bbr: add BBR congestion control", remove the bbr_rate_kbps()
function, which had a 64-bit divide that would be problematic on some
architectures, and just use bbr_rate_bytes_per_sec() directly.
Thanks to Kenneth Klette Jonassen for suggesting this.
- In "tcp: switch back to proper tcp_skb_cb size check in tcp_init()",
switched from sizeof(skb->cb) to FIELD_SIZEOF.
Thanks to Lance Richardson for suggesting this.
- Updated "tcp_bbr: add BBR congestion control" commit message with
performance data, more details about deployment at Google, and
another reminder to use fq with BBR.
- Updated tcp_bbr.c in "tcp_bbr: add BBR congestion control"
to use MODULE_LICENSE("Dual BSD/GPL").
- v2 -> v3: fix another issue caught by build bots:
- adjust rate_sample struct initialization syntax to allow gcc-4.4 to compile
the "tcp: track data delivery rate for a TCP connection" patch; also
adjusted some similar syntax in "tcp_bbr: add BBR congestion control"
- v1 -> v2: fix issues caught by build bots:
- fix "tcp: export data delivery rate" to use rate64 instead of rate,
so there is a 64-bit numerator for the do_div call
- fix conflicting definitions for minmax caused by
"tcp: use windowed min filter library for TCP min_rtt estimation"
with a new commit:
tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict
- fix warning about the use of __packed in
"tcp: track data delivery rate for a TCP connection",
which involves the addition of a new commit:
tcp: switch back to proper tcp_skb_cb size check in tcp_init()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:23 +0000 (23:39 -0400)]
tcp_bbr: add BBR congestion control
This commit implements a new TCP congestion control algorithm: BBR
(Bottleneck Bandwidth and RTT). A detailed description of BBR will be
published in ACM Queue, Vol. 14 No. 5, September-October 2016, as
"BBR: Congestion-Based Congestion Control".
BBR has significantly increased throughput and reduced latency for
connections on Google's internal backbone networks and google.com and
YouTube Web servers.
BBR requires only changes on the sender side, not in the network or
the receiver side. Thus it can be incrementally deployed on today's
Internet, or in datacenters.
The Internet has predominantly used loss-based congestion control
(largely Reno or CUBIC) since the 1980s, relying on packet loss as the
signal to slow down. While this worked well for many years, loss-based
congestion control is unfortunately out-dated in today's networks. On
today's Internet, loss-based congestion control causes the infamous
bufferbloat problem, often causing seconds of needless queuing delay,
since it fills the bloated buffers in many last-mile links. On today's
high-speed long-haul links using commodity switches with shallow
buffers, loss-based congestion control has abysmal throughput because
it over-reacts to losses caused by transient traffic bursts.
In 1981 Kleinrock and Gale showed that the optimal operating point for
a network maximizes delivered bandwidth while minimizing delay and
loss, not only for single connections but for the network as a
whole. Finding that optimal operating point has been elusive, since
any single network measurement is ambiguous: network measurements are
the result of both bandwidth and propagation delay, and those two
cannot be measured simultaneously.
While it is impossible to disambiguate any single bandwidth or RTT
measurement, a connection's behavior over time tells a clearer
story. BBR uses a measurement strategy designed to resolve this
ambiguity. It combines these measurements with a robust servo loop
using recent control systems advances to implement a distributed
congestion control algorithm that reacts to actual congestion, not
packet loss or transient queue delay, and is designed to converge with
high probability to a point near the optimal operating point.
In a nutshell, BBR creates an explicit model of the network pipe by
sequentially probing the bottleneck bandwidth and RTT. On the arrival
of each ACK, BBR derives the current delivery rate of the last round
trip, and feeds it through a windowed max-filter to estimate the
bottleneck bandwidth. Conversely it uses a windowed min-filter to
estimate the round trip propagation delay. The max-filtered bandwidth
and min-filtered RTT estimates form BBR's model of the network pipe.
Using its model, BBR sets control parameters to govern sending
behavior. The primary control is the pacing rate: BBR applies a gain
multiplier to transmit faster or slower than the observed bottleneck
bandwidth. The conventional congestion window (cwnd) is now the
secondary control; the cwnd is set to a small multiple of the
estimated BDP (bandwidth-delay product) in order to allow full
utilization and bandwidth probing while bounding the potential amount
of queue at the bottleneck.
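As a hedged illustration of the primary control (not the module's
exact code):
/* pacing rate = gain * max-filtered bw estimate, capped by the socket */
u64 rate = bbr_bw(sk);			/* windowed max bandwidth */
rate = rate * gain >> BBR_SCALE;	/* mode-dependent gain multiplier */
sk->sk_pacing_rate = min_t(u64, rate, sk->sk_max_pacing_rate);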
When a BBR connection starts, it enters STARTUP mode and applies a
high gain to perform an exponential search to quickly probe the
bottleneck bandwidth (doubling its sending rate each round trip, like
slow start). However, instead of continuing until it fills up the
buffer (i.e. a loss), or until delay or ACK spacing reaches some
threshold (like Hystart), it uses its model of the pipe to estimate
when that pipe is full: it estimates the pipe is full when it notices
the estimated bandwidth has stopped growing. At that point it exits
STARTUP and enters DRAIN mode, where it reduces its pacing rate to
drain the queue it estimates it has created.
Then BBR enters steady state. In steady state, PROBE_BW mode cycles
between first pacing faster to probe for more bandwidth, then pacing
slower to drain any queue it created if no more bandwidth was
available, and then cruising at the estimated bandwidth to utilize the
pipe without creating excess queue. Occasionally, on an as-needed
basis, it sends significantly slower to probe for RTT (PROBE_RTT
mode).
BBR has been fully deployed on Google's wide-area backbone networks
and we're experimenting with BBR on Google.com and YouTube on a global
scale. Replacing CUBIC with BBR has resulted in significant
improvements in network latency and application (RPC, browser, and
video) metrics. For more details please refer to our upcoming ACM
Queue publication.
Example performance results, to illustrate the difference between BBR
and CUBIC:
Resilience to random loss (e.g. from shallow buffers):
Consider a netperf TCP_STREAM test lasting 30 secs on an emulated
path with a 10Gbps bottleneck, 100ms RTT, and 1% packet loss
rate. CUBIC gets 3.27 Mbps, and BBR gets 9150 Mbps (2798x higher).
Low latency with the bloated buffers common in today's last-mile links:
Consider a netperf TCP_STREAM test lasting 120 secs on an emulated
path with a 10Mbps bottleneck, 40ms RTT, and 1000-packet bottleneck
buffer. Both fully utilize the bottleneck bandwidth, but BBR
achieves this with a median RTT 25x lower (43 ms instead of 1.09
secs).
Our long-term goal is to improve the congestion control algorithms
used on the Internet. We are hopeful that BBR can help advance the
efforts toward this goal, and motivate the community to do further
research.
Test results, performance evaluations, feedback, and BBR-related
discussions are very welcome in the public e-mail list for BBR:
https://groups.google.com/forum/#!forum/bbr-dev
NOTE: BBR *must* be used with the fq qdisc ("man tc-fq") with pacing
enabled, since pacing is integral to the BBR design and
implementation. BBR without pacing would not function properly, and
may incur unnecessarily high packet loss rates.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:22 +0000 (23:39 -0400)]
tcp: increase ICSK_CA_PRIV_SIZE from 64 bytes to 88
The TCP CUBIC module already uses 64 bytes.
The upcoming TCP BBR module uses 88 bytes.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 20 Sep 2016 03:39:21 +0000 (23:39 -0400)]
tcp: new CC hook to set sending rate with rate_sample in any CA state
This commit introduces an optional new "omnipotent" hook,
cong_control(), for congestion control modules. The cong_control()
function is called at the end of processing an ACK (i.e., after
updating sequence numbers, the SACK scoreboard, and loss
detection). At that moment we have precise delivery rate information
the congestion control module can use to control the sending behavior
(using cwnd, TSO skb size, and pacing rate) in any CA state.
This function can also be used by a congestion control that prefers
not to use the default cwnd reduction approach (i.e., the PRR
algorithm) during CA_Recovery to control the cwnd and sending rate
during loss recovery.
We take advantage of the fact that recent changes defer the
retransmission or transmission of new data (e.g. by F-RTO) in recovery
until the new tcp_cong_control() function is run.
With this commit, we only run tcp_update_pacing_rate() if the
congestion control is not using this new API. New congestion controls
which use the new API do not want the TCP stack to run the default
pacing rate calculation and overwrite whatever pacing rate they have
chosen at initialization time.
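The hook itself, as described above:
/* in struct tcp_congestion_ops: call when packets are delivered to
 * update cwnd and pacing rate, in place of the default cong_avoid/PRR
 * machinery.
 */
void (*cong_control)(struct sock *sk, const struct rate_sample *rs);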
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 20 Sep 2016 03:39:20 +0000 (23:39 -0400)]
tcp: allow congestion control to expand send buffer differently
Currently the TCP send buffer expands to twice the cwnd, in order to
allow limited transmits in the CA_Recovery state. This assumes that
cwnd does not increase in CA_Recovery.
For some congestion control algorithms, like the upcoming BBR module,
if the losses in recovery do not indicate congestion then we may
continue to raise cwnd multiplicatively in recovery. In such cases the
current multiplier will falsely limit the sending rate, much as if it
were limited by the application.
This commit adds an optional congestion control callback to use a
different multiplier to expand the TCP send buffer. For congestion
control modules that do not specify this callback, TCP continues to
use the previous default of 2.
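A sketch of how the callback slots in (illustrative):
/* in struct tcp_congestion_ops: returns the multiplier used in
 * tcp_sndbuf_expand(); TCP falls back to 2 when the hook is unset.
 */
u32 (*sndbuf_expand)(struct sock *sk);

/* in tcp_sndbuf_expand(): */
sndmem = ca_ops->sndbuf_expand ? ca_ops->sndbuf_expand(sk) : 2;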
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:19 +0000 (23:39 -0400)]
tcp: export tcp_mss_to_mtu() for congestion control modules
Export tcp_mss_to_mtu(), so that congestion control modules can use
this to help calculate a pacing rate.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:18 +0000 (23:39 -0400)]
tcp: export tcp_tso_autosize() and parameterize minimum number of TSO segments
To allow congestion control modules to use the default TSO auto-sizing
algorithm as one of the ingredients in their own decision about TSO sizing:
1) Export tcp_tso_autosize() so that CC modules can use it.
2) Change tcp_tso_autosize() to allow callers to specify a minimum
number of segments per TSO skb, in case the congestion control
module has a different notion of the best floor for TSO skbs for
the connection right now. For very low-rate paths or policed
connections it can be appropriate to use smaller TSO skbs.
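The resulting signature (per my reading of the patch):
u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
		     int min_tso_segs);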
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:17 +0000 (23:39 -0400)]
tcp: allow congestion control module to request TSO skb segment count
Add the tso_segs_goal() function in tcp_congestion_ops to allow the
congestion control module to specify the number of segments that
should be in a TSO skb sent by tcp_write_xmit() and
tcp_xmit_retransmit_queue(). The congestion control module can either
request a particular number of segments in the TSO skbs that we transmit,
or return 0 if it doesn't care.
This allows the upcoming BBR congestion control module to select small
TSO skb sizes if the module detects that the bottleneck bandwidth is
very low, or that the connection is policed to a low rate.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 20 Sep 2016 03:39:16 +0000 (23:39 -0400)]
tcp: export data delivery rate
This commit exports two new fields in struct tcp_info:
tcpi_delivery_rate: The most recent goodput, as measured by
tcp_rate_gen(). If the socket is limited by the sending
application (e.g., no data to send), it reports the highest
measurement instead of the most recent. The unit is bytes per
second (like other rate fields in tcp_info).
tcpi_delivery_rate_app_limited: A boolean indicating if the goodput
was measured when the socket's throughput was limited by the
sending application.
This delivery rate information can be useful for applications that
want to know the current throughput the TCP connection is seeing,
e.g. adaptive bitrate video streaming. It can also be very useful for
debugging or troubleshooting.
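The new fields, as described (exact types and placement are my
reading):
struct tcp_info {
	...
	__u8  tcpi_delivery_rate_app_limited:1;	/* last rate app-limited? */
	...
	__u64 tcpi_delivery_rate;	/* recent goodput, bytes per second */
};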
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Soheil Hassas Yeganeh [Tue, 20 Sep 2016 03:39:15 +0000 (23:39 -0400)]
tcp: track application-limited rate samples
This commit adds code to track whether the delivery rate represented
by each rate_sample was limited by the application.
Upon each transmit, we store in the is_app_limited field in the skb a
boolean bit indicating whether there is a known "bubble in the pipe":
a point in the rate sample interval where the sender was
application-limited, and did not transmit even though the cwnd and
pacing rate allowed it.
This logic marks the flow app-limited on a write if *all* of the
following are true:
1) There is less than 1 MSS of unsent data in the write queue
available to transmit.
2) There is no packet in the sender's queues (e.g. in fq or the NIC
tx queue).
3) The connection is not limited by cwnd.
4) There are no lost packets to retransmit.
The tcp_rate_check_app_limited() code in tcp_rate.c determines whether
the connection is application-limited at the moment. If the flow is
application-limited, it sets the tp->app_limited field. If the flow is
application-limited then that means there is effectively a "bubble" of
silence in the pipe now, and this silence will be reflected in a lower
bandwidth sample for any rate samples from now until we get an ACK
indicating this bubble has exited the pipe: specifically, until we get
an ACK for the next packet we transmit.
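A sketch of that check, mapping directly to the four conditions above
(illustrative):
void tcp_rate_check_app_limited(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (/* (1) less than 1 MSS of unsent data in the write queue */
	    tp->write_seq - tp->snd_nxt < tp->mss_cache &&
	    /* (2) nothing in the sender's qdisc/NIC tx queues */
	    sk_wmem_alloc_get(sk) < SKB_TRUESIZE(1) &&
	    /* (3) not cwnd-limited */
	    tcp_packets_in_flight(tp) < tp->snd_cwnd &&
	    /* (4) no lost packets left to retransmit */
	    tp->lost_out <= tp->retrans_out)
		tp->app_limited =
			(tp->delivered + tcp_packets_in_flight(tp)) ? : 1;
}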
When we send every skb we record in scb->tx.is_app_limited whether the
resulting rate sample will be application-limited.
The code in tcp_rate_gen() checks to see when it is safe to mark all
known application-limited bubbles of silence as having exited the
pipe. It does this by checking to see when the delivered count moves
past the tp->app_limited marker. At this point it zeroes the
tp->app_limited marker, as all known bubbles are out of the pipe.
We make room for the tx.is_app_limited bit in the skb by borrowing a
bit from the in_flight field used by NV to record the number of bytes
in flight. The receive window in the TCP header is 16 bits, and the
max receive window scaling shift factor is 14 (RFC 1323). So the max
receive window offered by the TCP protocol is 2^(16+14) = 2^30. So we
only need 30 bits for the tx.in_flight used by NV.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 20 Sep 2016 03:39:14 +0000 (23:39 -0400)]
tcp: track data delivery rate for a TCP connection
This patch generates data delivery rate (throughput) samples on a
per-ACK basis. These rate samples can be used by congestion control
modules, and specifically will be used by TCP BBR in later patches in
this series.
Key state:
tp->delivered: Tracks the total number of data packets (original or not)
delivered so far. This is an already-existing field.
tp->delivered_mstamp: the last time tp->delivered was updated.
Algorithm:
A rate sample is calculated as (d1 - d0)/(t1 - t0) on a per-ACK basis:
d1: the current tp->delivered after processing the ACK
t1: the current time after processing the ACK
d0: the prior tp->delivered when the acked skb was transmitted
t0: the prior tp->delivered_mstamp when the acked skb was transmitted
When an skb is transmitted, we snapshot d0 and t0 in its control
block in tcp_rate_skb_sent().
When an ACK arrives, it may SACK and ACK some skbs. For each SACKed
or ACKed skb, tcp_rate_skb_delivered() updates the rate_sample struct
to reflect the latest (d0, t0).
Finally, tcp_rate_gen() generates a rate sample by storing
(d1 - d0) in rs->delivered and (t1 - t0) in rs->interval_us.
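In code form, the sample boils down to (illustrative; helper names
assumed):
/* in tcp_rate_gen(): */
rs->delivered   = tp->delivered - rs->prior_delivered;		/* d1 - d0 */
rs->interval_us = skb_mstamp_us_delta(&tp->delivered_mstamp,	/* t1 - t0 */
				      &rs->prior_mstamp);
/* a CC module then computes: bw = rs->delivered * mss / rs->interval_us */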
One caveat: if an skb was sent with no packets in flight, then
tp->delivered_mstamp may be either invalid (if the connection is
starting) or outdated (if the connection was idle). In that case,
we'll re-stamp tp->delivered_mstamp.
At first glance it seems t0 should always be the time when an skb was
transmitted, but actually this could over-estimate the rate due to
phase mismatch between transmit and ACK events. To track the delivery
rate, we ensure that if packets are in flight then t0 and t1 are
times at which packets were marked delivered.
If the initial and final RTTs are different then one may be corrupted
by some sort of noise. The noise we see most often is sending gaps
caused by delayed, compressed, or stretched acks. This either affects
both RTTs equally or artificially reduces the final RTT. We approach
this by recording the info we need to compute the initial RTT
(duration of the "send phase" of the window) when we recorded the
associated inflight. Then, for a filter to avoid bandwidth
overestimates, we generalize the per-sample bandwidth computation
from:
bw = delivered / ack_phase_rtt
to the following:
bw = delivered / max(send_phase_rtt, ack_phase_rtt)
In large-scale experiments, this filtering approach incorporating
send_phase_rtt is effective at avoiding bandwidth overestimates due to
ACK compression or stretched ACKs.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:13 +0000 (23:39 -0400)]
tcp: count packets marked lost for a TCP connection
Count the number of packets that a TCP connection marks lost.
Congestion control modules can use this loss rate information for more
intelligent decisions about how fast to send.
Specifically, this is used in TCP BBR policer detection. BBR uses a
high packet loss rate as one signal in its policer detection and
policer bandwidth estimation algorithm.
The BBR policer detection algorithm cannot simply track retransmits,
because a retransmit can be (and often is) an indicator of packets
lost long, long ago. This is particularly true in a long CA_Loss
period that repairs the initial massive losses when a policer kicks
in.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 20 Sep 2016 03:39:12 +0000 (23:39 -0400)]
tcp: switch back to proper tcp_skb_cb size check in tcp_init()
Revert to the tcp_skb_cb size check that tcp_init() had before commit
b4772ef879a8 ("net: use common macro for assering skb->cb[] available
size in protocol families"). As related commit
744d5a3e9fe2 ("net:
move skb->dropcount to skb->cb[]") explains, the
sock_skb_cb_check_size() mechanism was added to ensure that there is
space for dropcount, "for protocol families using it". But TCP is not
a protocol using dropcount, so tcp_init() doesn't need to provision
space for dropcount in the skb->cb[], and thus we can revert to the
older form of the tcp_skb_cb size check. Doing so allows TCP to use 4
more bytes of the skb->cb[] space.
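I.e., the check reverts to the older form (matching the FIELD_SIZEOF
note in the cover letter):
BUILD_BUG_ON(sizeof(struct tcp_skb_cb) >
	     FIELD_SIZEOF(struct sk_buff, cb));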
Fixes: b4772ef879a8 ("net: use common macro for assering skb->cb[] available size in protocol families")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 20 Sep 2016 03:39:11 +0000 (23:39 -0400)]
net_sched: sch_fq: add low_rate_threshold parameter
This commit adds to the fq module a low_rate_threshold parameter to
insert a delay after all packets if the socket requests a pacing rate
below the threshold.
This helps achieve more precise control of the sending rate with
low-rate paths, especially policers. The basic issue is that if a
congestion control module detects a policer at a certain rate, it may
want fq to be able to shape to that policed rate. That way the sender
can avoid policer drops by having the packets arrive at the policer at
or just under the policed rate.
The default threshold of 550Kbps was chosen analytically so that for
policers or links at 500Kbps or 512Kbps fq would very likely invoke
this mechanism, even if the pacing rate was briefly slightly above the
available bandwidth. This value was then empirically validated with
two years of production testing on YouTube video servers.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:10 +0000 (23:39 -0400)]
tcp: use windowed min filter library for TCP min_rtt estimation
Refactor the TCP min_rtt code to reuse the new win_minmax library in
lib/win_minmax.c to simplify the TCP code.
This is a pure refactor: the functionality is exactly the same. We
just moved the windowed min code to make TCP easier to read and
maintain, and to allow other parts of the kernel to use the windowed
min/max filter code.
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Neal Cardwell [Tue, 20 Sep 2016 03:39:09 +0000 (23:39 -0400)]
lib/win_minmax: windowed min or max estimator
This commit introduces a generic library to estimate either the min or
max value of a time-varying variable over a recent time window. This
is code originally from Kathleen Nichols. The current form of the code
is from Van Jacobson.
A single struct minmax_sample will track the estimated windowed-max
value of the series if you call minmax_running_max() or the estimated
windowed-min value of the series if you call minmax_running_min().
Nearly equivalent code is already in place for minimum RTT estimation
in the TCP stack. This commit extracts that code and generalizes it to
handle both min and max. Moving the code here reduces the footprint
and complexity of the TCP code base and makes the filter generally
available for other parts of the codebase, including an upcoming TCP
congestion control module.
This library works well for time series where the measurements are
smoothly increasing or decreasing.
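Typical usage of the library (a fragment, per my reading of the API):
struct minmax bw;	/* e.g. windowed max of bandwidth samples */

minmax_reset(&bw, now, sample);			/* start a fresh window */
minmax_running_max(&bw, window, now, sample);	/* feed each new sample */
estimate = minmax_get(&bw);			/* current windowed max */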
Signed-off-by: Van Jacobson <vanj@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Soheil Hassas Yeganeh [Tue, 20 Sep 2016 03:39:08 +0000 (23:39 -0400)]
tcp: cdg: rename struct minmax in tcp_cdg.c to avoid a naming conflict
The upcoming change "lib/win_minmax: windowed min or max estimator"
introduces a struct called minmax, which is then included in
include/linux/tcp.h in the upcoming change "tcp: use windowed min
filter library for TCP min_rtt estimation". This would create a
compilation error for tcp_cdg.c, which defines its own minmax
struct. To avoid this naming conflict (and potentially others in the
future), this commit renames the version used in tcp_cdg.c to
cdg_minmax.
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Kenneth Klette Jonassen <kennetkl@ifi.uio.no>
Acked-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Wang [Tue, 20 Sep 2016 03:26:48 +0000 (11:26 +0800)]
net: ethernet: mediatek: enhance with avoiding superfluous assignment inside mtk_get_ethtool_stats
data_src is unchanged inside the loop, so this patch moves
the assignment to outside the loop to avoid an unnecessary
assignment.
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Mon, 19 Sep 2016 23:56:11 +0000 (19:56 -0400)]
net: dsa: mv88e6xxx: handle multiple ports in ATU
An address can be loaded in the ATU with multiple ports, for instance
when adding multiple ports to a Multicast group with "bridge mdb".
The current code doesn't allow that. Add a helper to get a single entry
from the ATU, then set or clear the requested port, before loading the
entry back in the ATU.
Note that the required _mv88e6xxx_atu_getnext function is defined below
mv88e6xxx_port_db_load_purge, so forward-declare it for the moment. The
ATU code will be isolated in future patches.
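The read-modify-write flow is roughly (illustrative; entry field names
assumed):
err = _mv88e6xxx_atu_getnext(chip, fid, &entry);  /* fetch current entry */
if (err)
	return err;
if (add)
	entry.portv_trunkid |= BIT(port);	/* add the requested port */
else
	entry.portv_trunkid &= ~BIT(port);	/* remove the requested port */
return _mv88e6xxx_atu_load(chip, &entry);	/* write it back */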
Fixes: 83dabd1fa84c ("net: dsa: mv88e6xxx: make switchdev DB ops generic")
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jamal Hadi Salim [Mon, 19 Sep 2016 23:02:51 +0000 (19:02 -0400)]
net sched actions: fix GETing actions
With the batch changes that translated transient actions into
a temporary list, what was lost in the translation was the fact that
tcf_action_destroy() will eventually delete the action from
its permanent location if the refcount is zero.
Example of what broke:
...add a gact action to drop
sudo $TC actions add action drop index 10
...now retrieve it, looks good
sudo $TC actions get action gact index 10
...retrieve it again and find it is gone!
sudo $TC actions get action gact index 10
Fixes: 22dc13c837c3 ("net_sched: convert tcf_exts from list to pointer array")
Fixes: 824a7e8863b3 ("net_sched: remove an unnecessary list_del()")
Fixes: f07fed82ad79 ("net_sched: remove the leftover cleanup_a()")
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 21 Sep 2016 03:32:19 +0000 (23:32 -0400)]
Merge branch 'bpf-direct-packet-access-improvements'
Daniel Borkmann says:
====================
BPF direct packet access improvements
This set adds write support to the currently available read support
for {cls,act}_bpf programs. First one is a fix for affected commit
sitting in net-next and prerequisite for the second one, last patch
adds a number of test cases against the verifier. For details, please
see individual patches.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 19 Sep 2016 22:26:14 +0000 (00:26 +0200)]
bpf: add test cases for direct packet access
Add a couple of test cases for direct write and the negative size issue,
and also adjust the direct packet access test4: it asserts that writes
are not possible, but since we've just added support for writes, we need
to invert the verdict to ACCEPT, of course. Summary: 133 PASSED, 0 FAILED.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 19 Sep 2016 22:26:13 +0000 (00:26 +0200)]
bpf: direct packet write and access for helpers for clsact progs
This work implements direct packet access for helpers and direct packet
write in a similar fashion as already available for XDP types via commits
4acf6c0b84c9 ("bpf: enable direct packet data write for xdp progs") and
6841de8b0d03 ("bpf: allow helpers access the packet directly"), and as a
complementary feature to the already available direct packet read for tc
(cls/act) programs.
For enabling this, we need to introduce two helpers, bpf_skb_pull_data()
and bpf_csum_update(). The first is generally needed for both read and
write, because programs would otherwise be limited to the current linear
skb head. Usually, when the data_end test fails, programs just bail out,
or, in the direct read case, use bpf_skb_load_bytes() as an alternative
to overcome this limitation. If such data sits in non-linear parts, we
can just pull them in once with the new helper, retest and eventually
access them.
At the same time, this also makes sure the skb is uncloned, which is, of
course, a necessary condition for direct write. As this needs to be an
invariant for the write part only, the verifier detects writes and adds
a prologue that is calling bpf_skb_pull_data() to effectively unclone the
skb from the very beginning in case it is indeed cloned. The heuristic
makes use of a similar trick that was done in 233577a22089 ("net: filter:
constify detection of pkt_type_offset"). This comes at zero cost for other
programs that do not use the direct write feature. Should a program use
this feature only sparsely and have read access for the most part with,
for example, drop return codes, then such write actions can be delegated
to a tail-called program, deferring the cost of potential uncloning to a
late point in time where it would have been paid similarly with
bpf_skb_store_bytes() anyway. The advantage of direct write is that the
writes are inlined, whereas the helper cannot make any length assumptions
and thus needs to generate a call to memcpy() even for small sizes; on top
of that, the cost of the helper call itself, with its sanity checks, is
avoided. Plus, when direct read is already used, we don't need to cache or
perform rechecks on the data boundaries (due to the verifier invalidating
previous checks for helpers that change skb->data), so more complex
programs using rewrites can benefit from switching to direct read plus
write.
For direct packet access to helpers, we save the otherwise needed copy into
a temp struct sitting on stack memory when use-case allows. Both facilities
are enabled via may_access_direct_pkt_data() in verifier. For now, we limit
this to map helpers and csum_diff, and can successively enable other helpers
where we find it makes sense. Helpers that definitely cannot be allowed for
this are those that are part of bpf_helper_changes_skb_data(), since they
can change the underlying data, and those that write into memory, as this
could happen for packet-typed args when still cloned. The bpf_csum_update()
helper accounts for the fact that we need to fix up checksum_complete when
using direct write instead of bpf_skb_store_bytes(), meaning programs can
use available helpers like bpf_csum_diff() and implement csum_add(),
csum_sub(), csum_block_add() and csum_block_sub() equivalents in eBPF
together with the new helper. A usage example will be provided for
iproute2's examples/bpf/ directory.
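For illustration, a minimal, hedged sketch of what a cls_bpf program using
the new helper for a direct write could look like (helper declarations are
assumed to come from iproute2-style BPF headers, the packet is assumed to
be IPv4, and a real program would also fix up checksums, e.g. via
bpf_csum_update()):
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
static int set_tos(struct __sk_buff *skb, __u8 tos)
{
	void *data, *data_end;
	struct iphdr *iph;
	/* pull the headers into the linear area; this also unclones
	 * the skb, a precondition for writing into packet data */
	if (bpf_skb_pull_data(skb, ETH_HLEN + sizeof(*iph)))
		return TC_ACT_OK;
	/* reload pointers: the helper may have changed skb->data */
	data = (void *)(long)skb->data;
	data_end = (void *)(long)skb->data_end;
	iph = data + ETH_HLEN;
	if ((void *)(iph + 1) > data_end)
		return TC_ACT_OK;
	iph->tos = tos;	/* direct write, inlined by the verifier */
	return TC_ACT_OK;
}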
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Mon, 19 Sep 2016 22:26:12 +0000 (00:26 +0200)]
bpf, verifier: enforce larger zero range for pkt on overloading stack buffs
Current contract for the following two helper argument types is:
* ARG_CONST_STACK_SIZE: passed argument pair must be (ptr, >0).
* ARG_CONST_STACK_SIZE_OR_ZERO: passed argument pair can be either
(NULL, 0) or (ptr, >0).
With 6841de8b0d03 ("bpf: allow helpers access the packet directly"), we can
also pass raw packet data to helpers, so depending on the argument type
being PTR_TO_PACKET, we now either assert memory via check_packet_access()
or check_stack_boundary(). As a result, the tests in check_packet_access()
currently allow more than intended with regards to reg->imm.
Back in 969bf05eb3ce ("bpf: direct packet access"), check_packet_access()
was fine to ignore the size argument, since in check_mem_access() the size
was derived via bpf_size_to_bytes() and guaranteed to be larger than zero
prior to the call to check_packet_access().
However, for the above two argument types, it currently means we can have
a size <= 0, thus breaking current guarantees for helpers. Enforce a
check for size <= 0 and bail out if so.
check_stack_boundary() doesn't have such an issue, since it already tests
for access_size <= 0 and bails out, and it permits access_size == 0 in
case a NULL pointer is passed where that is allowed.
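In code terms, the enforced rule boils down to something like the
following fragment inside the packet-access check (a hedged sketch; the
exact context in the verifier may differ):
/* (ptr, <= 0) must be rejected for the two argument types above;
 * only check_stack_boundary() may see the (NULL, 0) combination */
if (size <= 0)
	return -EACCES;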
Fixes: 6841de8b0d03 ("bpf: allow helpers access the packet directly")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mahesh Bandewar [Mon, 19 Sep 2016 20:56:29 +0000 (13:56 -0700)]
ipvlan: Fix dependency issue
kbuild-build-bot reported that if NETFILTER is not selected, the
build fails pointing to netfilter symbols.
Fixes: 4fbae7d83c98 ("ipvlan: Introduce l3s mode")
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
pravin shelar [Mon, 19 Sep 2016 20:51:00 +0000 (13:51 -0700)]
openvswitch: avoid resetting flow key while installing new flow.
Since commit db74a3335e0f6 ("openvswitch: use percpu flow stats"),
flow alloc resets the flow key. So there is no need to reset the
flow key again if OVS is using a newly allocated flow key.
Signed-off-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
pravin shelar [Mon, 19 Sep 2016 20:50:59 +0000 (13:50 -0700)]
openvswitch: Fix Frame-size larger than 1024 bytes warning.
There is no need to declare a separate key on the stack;
we can just use sw_flow->key to store the key directly.
This commit fixes the following warning:
net/openvswitch/datapath.c: In function ‘ovs_flow_cmd_new’:
net/openvswitch/datapath.c:1080:1: warning: the frame size of 1040 bytes
is larger than 1024 bytes [-Wframe-larger-than=]
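A hedged sketch of the resulting shape of ovs_flow_cmd_new() (call names
from the OVS datapath of that era; signatures and error handling are
simplified and may differ from the actual patch):
struct sw_flow *new_flow;
/* the percpu-stats allocation already leaves the embedded key zeroed */
new_flow = ovs_flow_alloc();
if (IS_ERR(new_flow))
	return PTR_ERR(new_flow);
/* parse the netlink match straight into new_flow->key instead of a
 * ~1KB struct sw_flow_key local on the stack */
ovs_match_init(&match, &new_flow->key, &mask);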
Signed-off-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 21 Sep 2016 02:52:50 +0000 (22:52 -0400)]
Merge branch 'for-upstream' of git://git./linux/kernel/git/bluetooth/bluetooth-next
Johan Hedberg says:
====================
pull request: bluetooth-next 2016-09-19
Here's the main bluetooth-next pull request for the 4.9 kernel.
- Added new messages for monitor sockets for better mgmt tracing
- Added local name and appearance support in scan response
- Added new Qualcomm WCNSS SMD based HCI driver
- Minor fixes & cleanup to 802.15.4 code
- New USB ID to btusb driver
- Added Marvell support to HCI UART driver
- Add combined LED trigger for controller power
- Other minor fixes here and there
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alan Cox [Mon, 19 Sep 2016 19:15:24 +0000 (20:15 +0100)]
6pack: fix buffer length mishandling
Dmitry Vyukov wrote:
> different runs). Looking at code, the following looks suspicious -- we
> limit copy by 512 bytes, but use the original count which can be
> larger than 512:
>
> static void sixpack_receive_buf(struct tty_struct *tty,
> const unsigned char *cp, char *fp, int count)
> {
> unsigned char buf[512];
> ....
> memcpy(buf, cp, count < sizeof(buf) ? count : sizeof(buf));
> ....
> sixpack_decode(sp, buf, count1);
With the sane tty locking we now have, I believe the following is safe,
as we consume the bytes and move them into the decoded buffer before
returning.
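A hedged sketch of the clamping this implies (mirroring the fragment
quoted above; the actual patch may differ in detail):
static void sixpack_receive_buf(struct tty_struct *tty,
				const unsigned char *cp, char *fp, int count)
{
	unsigned char buf[512];
	....
	/* clamp once, then use the clamped length everywhere, so the
	 * decode never reads past what was actually copied */
	count = count < sizeof(buf) ? count : sizeof(buf);
	memcpy(buf, cp, count);
	....
	sixpack_decode(sp, buf, count);
}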
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer [Mon, 19 Sep 2016 15:46:38 +0000 (17:46 +0200)]
mlx4: add missed recycle opportunity for XDP_TX on TX failure
Correct drop handling for XDP_TX on TX failure was recently added in
commit 95357907ae73 ("mlx4: fix XDP_TX is acting like XDP_PASS on TX
ring full").
The change missed an opportunity to recycle the RX page, instead of
going through the page allocator, like the regular XDP_DROP action does.
This patch seizes that opportunity by falling through to the XDP_DROP case.
Fixes: 95357907ae73 ("mlx4: fix XDP_TX is acting like XDP_PASS on TX ring full")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Sep 2016 08:47:50 +0000 (04:47 -0400)]
Merge branch 'dsa-set_addr-optional'
John Crispin says:
====================
net-next: dsa: set_addr should be optional
The Marvell driver is the only one that actually sets the switch's HW
address. All other drivers have an empty stub. Fix this by making the
callback optional.
====================
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Mon, 19 Sep 2016 13:28:03 +0000 (15:28 +0200)]
net-next: dsa: qca8k: remove empty set_addr() stub
The set_addr() callback is now optional. Remove the empty stub that qca8k
has.
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Mon, 19 Sep 2016 13:28:02 +0000 (15:28 +0200)]
net-next: dsa: b53: remove empty set_addr() stub
The set_addr() callback is now optional. Remove the empty stub that b53
has.
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Mon, 19 Sep 2016 13:28:01 +0000 (15:28 +0200)]
net-next: dsa: make the set_addr() operation optional
Only 1 of the 3 drivers currently has a set_addr() operation. Make the
set_addr() callback optional to reduce the number of empty stubs inside
the drivers.
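A minimal sketch of the optional-callback pattern this describes (field
names assumed from the DSA code of that era):
/* in the switch setup path: only call set_addr() when the driver
 * provides it, instead of requiring an empty stub */
if (ds->ops->set_addr) {
	err = ds->ops->set_addr(ds, dst->master_netdev->dev_addr);
	if (err < 0)
		return err;
}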
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Mon, 19 Sep 2016 13:28:00 +0000 (15:28 +0200)]
net-next: dsa: fix duplicate invocation of set_addr()
Commit 83c0afaec7b730b ("net: dsa: Add new binding implementation")
has a duplicate invocation of the set_addr() operation callback. Remove one
of them.
Signed-off-by: John Crispin <john@phrozen.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Sep 2016 08:43:43 +0000 (04:43 -0400)]
Merge branch 'rhashtable-dups'
Herbert Xu says:
====================
rhashtable: rhashtable with duplicate objects
v3 fixes a bug in the remove path that causes the element count
to decrease when it shouldn't, leading to a gigantic hash table
when it underflows.
v2 contains a reworked insertion slowpath to ensure that the
spinlock for the table we're inserting into is taken.
This series contains two patches. The first adds the rhlist
interface and the second converts mac80211 to use it. If this
works out I'll then proceed to convert the other insecure_elasticity
users over to this.
I've tested the rhlist code with test_rhashtable but I haven't
tested the mac80211 conversion. So please give it a go and see
if it still works.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Mon, 19 Sep 2016 11:00:10 +0000 (19:00 +0800)]
mac80211: Use rhltable instead of rhashtable
mac80211 currently uses rhashtable with insecure_elasticity set
to true. The latter is because of duplicate objects. What's
more, mac80211 walks the rhashtable chains by hand which is broken
as rhashtable may contain multiple tables due to resizing or
rehashing.
This patch fixes it by converting it to the newly added rhltable
interface which is designed for use with duplicate objects.
With rhltable a lookup returns a list of objects instead of a
single one. This is then fed into the existing for_each_sta_info
macro.
This patch also deletes the sta_addr_hash function since rhashtable
defaults to jhash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Mon, 19 Sep 2016 11:00:09 +0000 (19:00 +0800)]
rhashtable: Add rhlist interface
The insecure_elasticity setting is an ugly wart brought out by
users who need to insert duplicate objects (that is, distinct
objects with identical keys) into the same table.
In fact, those users have a much bigger problem. Once those
duplicate objects are inserted, they don't have an interface to
find them (unless you count the walker interface which walks
over the entire table).
Some users have resorted to doing a manual walk over the hash
table which is of course broken because they don't handle the
potential existence of multiple hash tables. The result is that
they will break sporadically when they encounter a hash table
resize/rehash.
This patch provides a way out for those users, at the expense
of an extra pointer per object. Essentially each object is now
a list of objects carrying the same key. The hash table will
only see the lists so nothing changes as far as rhashtable is
concerned.
To use this new interface, you need to insert a struct rhlist_head
into your objects instead of struct rhash_head. While the hash
table is unchanged, for type-safety you'll need to use struct
rhltable instead of struct rhashtable. All the existing interfaces
have been duplicated for rhlist, including the hash table walker.
One missing feature is nulls marking because AFAIK the only potential
user of it does not need duplicate objects. Should anyone need
this it shouldn't be too hard to add.
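A hedged usage sketch of the interface as described (names from this log;
the parameter plumbing is assumed):
struct mystruct {
	u32 key;			/* duplicates allowed          */
	struct rhlist_head list;	/* replaces struct rhash_head  */
};
/* struct rhltable replaces struct rhashtable for type-safety */
static struct rhltable ht;
static void visit_all(u32 key, const struct rhashtable_params params)
{
	struct rhlist_head *h, *pos;
	struct mystruct *obj;
	rcu_read_lock();
	/* a lookup now returns the list of all objects carrying the key */
	h = rhltable_lookup(&ht, &key, params);
	rhl_for_each_entry_rcu(obj, pos, h, list) {
		/* each duplicate is visited exactly once */
	}
	rcu_read_unlock();
}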
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vitaly Kuznetsov [Mon, 19 Sep 2016 10:53:40 +0000 (12:53 +0200)]
xen-netfront: avoid packet loss when ethernet header crosses page boundary
Small packet loss is reported on complex multi host network configurations
including tunnels, NAT, ... My investigation led me to the following check
in netback which drops packets:
if (unlikely(txreq.size < ETH_HLEN)) {
netdev_err(queue->vif->dev,
"Bad packet size: %d\n", txreq.size);
xenvif_tx_err(queue, &txreq, extra_count, idx);
break;
}
But this check itself is legitimate. SKBs consist of a linear part (which
has to have the ethernet header) and (optionally) a number of frags.
Netfront transmits the head of the linear part up to the page boundary
as the first request, and all the rest becomes frags, so when we're
reconstructing the SKB in netback we can't distinguish between original
frags and the 'tail' of the linear part. The first request needs to be at
least ETH_HLEN in size. So in case we have an SKB with its linear part
starting too close to the page boundary, the packet is lost.
I see two ways to fix the issue:
- Change the 'wire' protocol between netfront and netback to start keeping
the original SKB structure. We'll have to add a flag indicating the fact
that the particular request is a part of the original linear part and not
a frag. We'll need to know the length of the linear part to pre-allocate
memory.
- Avoid transmitting SKBs with linear parts starting too close to the page
boundary. That seems preferable short-term and shouldn't bring
significant performance degradation as such packets are rare. That's what
this patch is trying to achieve with skb_copy().
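A hedged sketch of that workaround in the netfront transmit path (close
to, but not necessarily identical with, the actual patch):
/* linearize into a fresh skb when the linear part would leave
 * less than ETH_HLEN before the page boundary */
if (unlikely(PAGE_SIZE - offset_in_page(skb->data) < ETH_HLEN)) {
	struct sk_buff *nskb = skb_copy(skb, GFP_ATOMIC);
	if (!nskb)
		goto drop;
	dev_kfree_skb_any(skb);
	skb = nskb;
}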
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Raju Lakkaraju [Mon, 19 Sep 2016 10:03:54 +0000 (15:33 +0530)]
net: phy: Add MAC-IF driver for Microsemi PHYs.
All review comments have been addressed; resending for review.
This is the MAC interface feature: the Microsemi PHY can support an
RGMII, RMII or GMII/MII interface between the MAC and the PHY. The
MAC-IF function programs the right value based on the Device Tree
configuration.
Tested on Beaglebone Black with the VSC 8531 PHY.
Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microsemi.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Mon, 19 Sep 2016 06:29:26 +0000 (08:29 +0200)]
mlxsw: spectrum: Fix sparse warnings
drivers/net/ethernet/mellanox/mlxsw//spectrum.c:251:28: warning: symbol
'mlxsw_sp_span_entry_find' was not declared. Should it be static?
drivers/net/ethernet/mellanox/mlxsw//spectrum.c:265:28: warning: symbol
'mlxsw_sp_span_entry_get' was not declared. Should it be static?
drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56: warning: mixing
different enum types
drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56: int enum
mlxsw_sp_span_type versus
drivers/net/ethernet/mellanox/mlxsw//spectrum.c:367:56: int enum
mlxsw_reg_mpar_i_e
...
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32: warning:
mixing different enum types
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32: int
enum mlxsw_reg_sbxx_dir versus
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:598:32: int
enum devlink_sb_pool_type
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39: warning:
mixing different enum types
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39: int
enum mlxsw_reg_sbpr_mode versus
drivers/net/ethernet/mellanox/mlxsw//spectrum_buffers.c:600:39: int
enum devlink_sb_threshold_type
...
drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54: warning:
mixing different enum types
drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54: int
enum mlxsw_sp_l3proto versus
drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:255:54: int
enum mlxsw_reg_ralxx_protocol
...
drivers/net/ethernet/mellanox/mlxsw//spectrum_router.c:1749:6: warning:
symbol 'mlxsw_sp_fib_entry_put' was not declared. Should it be static?
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Elad Raz [Mon, 19 Sep 2016 06:28:24 +0000 (08:28 +0200)]
mlxsw: Change the RX LAG hash function from XOR to CRC
Change the RX hash function from XOR to CRC in order to have better
distribution of the traffic.
Signed-off-by: Elad Raz <eladr@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rob Swindell [Tue, 20 Sep 2016 07:36:33 +0000 (03:36 -0400)]
bnxt_en: Fix build error for kernels without RTC_LIB
bnxt_hwrm_fw_set_time() now returns -EOPNOTSUPP when built for kernel
without RTC_LIB. Setting the firmware time is not critical to the
successful completion of the firmware update process.
Signed-off-by: Rob Swindell <Rob.Swindell@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jamal Hadi Salim [Sun, 18 Sep 2016 12:45:33 +0000 (08:45 -0400)]
net sched: stylistic cleanups
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Roman Mashak [Sun, 18 Sep 2016 11:53:08 +0000 (07:53 -0400)]
net sched actions police: peg drop stats for conforming traffic
Setting the conforming action to drop is a valid policy.
When it is set, we need to at least see the stats indicating it
for debugging.
Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jamal Hadi Salim [Sun, 18 Sep 2016 11:31:43 +0000 (07:31 -0400)]
net sched ife action: Introduce skb tcindex metadata encap decap
Sample use case of how this is encoded:
user space via tuntap (or a connected VM/Machine/container)
encodes the tcindex TLV.
Sample use case of decoding:
IFE action decodes it and the skb->tc_index is then used to classify.
So something like this for encoded ICMP packets:
.. first decode, then reclassify... skb->tc_index will be set
sudo $TC filter add dev $ETH parent ffff: prio 2 protocol 0xbeef \
u32 match u32 0 0 flowid 1:1 \
action ife decode reclassify
...next match the decode icmp packet...
sudo $TC filter add dev $ETH parent ffff: prio 4 protocol ip \
u32 match ip protocol 1 0xff flowid 1:1 \
action continue
... last classify it using the tcindex classifier and do someaction..
sudo $TC filter add dev $ETH parent ffff: prio 5 protocol ip \
handle 0x11 tcindex classid 1:1 \
action blah..
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jamal Hadi Salim [Sun, 18 Sep 2016 11:31:42 +0000 (07:31 -0400)]
net sched ife action: add 16 bit helpers
Add an encoder and a checker for 16-bit metadata.
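A hedged sketch of what such helpers look like (modeled on the existing
IFE u32 helpers; the exact names and the TLV encode call are assumptions):
/* encode a 16-bit metadatum as a TLV, preferring a value configured
 * on the action over the one sampled from the skb */
int ife_encode_meta_u16(u16 metaval, void *skbdata, struct tcf_meta_info *mi)
{
	u16 edata = 0;
	if (mi->metaval)
		edata = *(u16 *)mi->metaval;
	else if (metaval)
		edata = metaval;
	if (!edata)
		return 0;	/* nothing to encode */
	edata = htons(edata);
	return ife_tlv_meta_encode(skbdata, mi->metaid, 2, &edata);
}
/* space check: T(2) + L(2) + V(2 padded to 4) */
int ife_check_meta_u16(u16 metaval, struct tcf_meta_info *mi)
{
	if (metaval || mi->metaval)
		return 8;
	return 0;
}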
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Baoyou Xie [Sun, 18 Sep 2016 08:44:22 +0000 (16:44 +0800)]
net/mlx5: clean function declarations in eswitch.c up
We get 2 warnings when building kernel with W=1:
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:463:5: warning: no previous prototype for 'esw_offloads_init' [-Wmissing-prototypes]
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:521:6: warning: no previous prototype for 'esw_offloads_cleanup' [-Wmissing-prototypes]
In fact, both functions are declared in
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c, but should be
declared in a header file so they can be recognized in other files.
So this patch moves the declarations into
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h.
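A hedged sketch of the resulting prototypes in eswitch.h (parameter lists
assumed from the mlx5 code of that era):
/* shared between eswitch.c and eswitch_offloads.c */
int esw_offloads_init(struct mlx5_eswitch *esw, int nvports);
void esw_offloads_cleanup(struct mlx5_eswitch *esw, int nvports);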
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Baoyou Xie [Sun, 18 Sep 2016 08:35:29 +0000 (16:35 +0800)]
be2net: mark symbols static where possible
We get 4 warnings when building kernel with W=1:
drivers/net/ethernet/emulex/benet/be_main.c:4368:6: warning: no previous prototype for 'be_calculate_pf_pool_rss_tables' [-Wmissing-prototypes]
drivers/net/ethernet/emulex/benet/be_cmds.c:4385:5: warning: no previous prototype for 'be_get_nic_pf_num_list' [-Wmissing-prototypes]
drivers/net/ethernet/emulex/benet/be_cmds.c:4537:6: warning: no previous prototype for 'be_reset_nic_desc' [-Wmissing-prototypes]
drivers/net/ethernet/emulex/benet/be_cmds.c:4910:5: warning: no previous prototype for '__be_cmd_set_logical_link_config' [-Wmissing-prototypes]
In fact, these functions are only used in the file in which they are
declared and don't need a declaration, but can be made static.
So this patch marks these functions with 'static'.
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Baoyou Xie [Sun, 18 Sep 2016 08:26:34 +0000 (16:26 +0800)]
phy: mark lan88xx_suspend() static
We get 1 warning when building kernel with W=1:
drivers/net/phy/microchip.c:58:5: warning: no previous prototype for 'lan88xx_suspend' [-Wmissing-prototypes]
In fact, this function is only used in the file in which it is
declared and doesn't need a declaration, but can be made static.
So this patch marks this function with 'static'.
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sun, 18 Sep 2016 15:16:45 +0000 (17:16 +0200)]
net: ethernet: broadcom: bcmgenet: use new api ethtool_{get|set}_link_ksettings
The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.
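A hedged sketch of the usual conversion pattern for phylib-driven drivers
(the delegation to the generic phylib helpers is an assumption about this
particular driver):
static int bcmgenet_get_link_ksettings(struct net_device *dev,
				       struct ethtool_link_ksettings *cmd)
{
	if (!dev->phydev)
		return -ENODEV;
	return phy_ethtool_get_link_ksettings(dev, cmd);
}
static int bcmgenet_set_link_ksettings(struct net_device *dev,
				       const struct ethtool_link_ksettings *cmd)
{
	if (!dev->phydev)
		return -ENODEV;
	return phy_ethtool_set_link_ksettings(dev, cmd);
}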
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sun, 18 Sep 2016 14:59:07 +0000 (16:59 +0200)]
net: ethernet: broadcom: bcm63xx: use new api ethtool_{get|set}_link_ksettings
The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sun, 18 Sep 2016 14:59:06 +0000 (16:59 +0200)]
net: ethernet: broadcom: bcm63xx: use phydev from struct net_device
The private structure contains a pointer to phydev, but the structure
net_device already contains such a pointer. So we can remove the phydev
pointer from the private structure, and update the driver to use the
one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sat, 17 Sep 2016 22:11:35 +0000 (00:11 +0200)]
net: ethernet: broadcom: b44: use new api ethtool_{get|set}_link_ksettings
The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sat, 17 Sep 2016 22:11:34 +0000 (00:11 +0200)]
net: ethernet: broadcom: b44: use phydev from struct net_device
The private structure contains a pointer to phydev, but the structure
net_device already contains such a pointer. So we can remove the phydev
pointer from the private structure, and update the driver to use the
one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 20 Sep 2016 01:32:31 +0000 (21:32 -0400)]
Merge branch 'bnxt_en-next'
Michael Chan says:
====================
bnxt: update for net-next.
Misc. changes and minor bug fixes for net-next. Please review.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eddie Wai [Mon, 19 Sep 2016 07:58:09 +0000 (03:58 -0400)]
bnxt_en: Fixed the VF link status after a link state change
The VF link state can be changed via the 'ip link set' cmd.
Currently, the new link state does not take effect immediately.
The fix is for the PF to send a link change async event to the
designated VF after a VF link state change. This async event will
trigger the VF to update the link status.
Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>