GitHub/moto-9609/android_kernel_motorola_exynos9610.git
11 years agoixgbe: Reduce memory consumption with larger page sizes
Anton Blanchard [Tue, 22 Oct 2013 18:34:01 +0000 (18:34 +0000)]
ixgbe: Reduce memory consumption with larger page sizes

The ixgbe driver allocates pages for its receive rings. It currently
uses 512 pages, regardless of page size. During receive handling it
adds the unused part of the page back into the rx ring, avoiding the
need for a new allocation.

On a ppc64 box with 64 threads and 64kB pages, we end up with
512 entries * 64 rx queues * 64kB = 2GB memory used. Even more of a
concern is that we use up 2GB of IOMMU space in order to map all this
memory.

The driver makes a number of decisions based on whether PAGE_SIZE is less
than 8kB, so use this as the breakpoint and allocate only 128 entries on
8kB or larger page sizes.
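
A minimal sketch of the idea (macro names follow ixgbe conventions but are
not the exact patch hunk): each RX ring entry maps a full page, so key the
default entry count off PAGE_SIZE.

    #if (PAGE_SIZE < 8192)
    #define IXGBE_DEFAULT_RXD   512    /* small pages: keep the old default */
    #else
    #define IXGBE_DEFAULT_RXD   128    /* 8kB+ pages: a quarter of the entries */
    #endif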

Signed-off-by: Anton Blanchard <anton@samba.org>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoigb: Don't let ethtool try to write to iNVM in i210/i211
Fujinaka, Todd [Wed, 23 Oct 2013 05:52:11 +0000 (05:52 +0000)]
igb: Don't let ethtool try to write to iNVM in i210/i211

Don't let ethtool try to write to iNVM in i210/i211.

This fixes an issue seen by Marek Vasut.

Reported-by: Marek Vasut <marex@denx.de>
Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: remove redundant workaround
Emil Tantilov [Tue, 29 Oct 2013 08:31:49 +0000 (08:31 +0000)]
ixgbevf: remove redundant workaround

This patch removes a workaround related to header split, which is redundant
because the driver does not support splitting packet headers on Rx.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoe1000: fix wrong queue idx calculation
Hong Zhiguo [Tue, 22 Oct 2013 18:32:56 +0000 (18:32 +0000)]
e1000: fix wrong queue idx calculation

tx_ring and adapter->tx_ring are already of type "struct
e1000_tx_ring *"

Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoipv6: remove the unnecessary statement in find_match()
Duan Jiong [Wed, 30 Oct 2013 07:39:26 +0000 (15:39 +0800)]
ipv6: remove the unnecessary statement in find_match()

After reading rt6_check_neigh(), we can see that RT6_NUD_FAIL_SOFT can be
returned only when IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) is false, so in
find_match() there is no need to evaluate the additional
!IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) condition.
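
A simplified sketch of the redundancy (not the exact kernel hunk): since
RT6_NUD_FAIL_SOFT already implies the Kconfig option is disabled, the extra
test can be dropped.

    /* before */
    if (m == RT6_NUD_FAIL_SOFT && !IS_ENABLED(CONFIG_IPV6_ROUTER_PREF))
            goto out;

    /* after */
    if (m == RT6_NUD_FAIL_SOFT)
            goto out;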

Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agomac802154: Use pr_err(...) rather than printk(KERN_ERR ...)
Chen Weilong [Wed, 30 Oct 2013 07:28:07 +0000 (15:28 +0800)]
mac802154: Use pr_err(...) rather than printk(KERN_ERR ...)

This change is inspired by checkpatch.
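
A generic example of the conversion (the message text is made up, not an
actual line from the driver):

    /* before */
    printk(KERN_ERR "mac802154: transmission failed\n");

    /* after, with a pr_fmt() definition supplying the "mac802154: " prefix */
    pr_err("transmission failed\n");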

Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobgmac: pass received packet to the netif instead of copying it
Rafał Miłecki [Wed, 30 Oct 2013 07:00:00 +0000 (08:00 +0100)]
bgmac: pass received packet to the netif instead of copying it

Copying whole packets with skb_copy_from_linear_data_offset is a pretty
bad idea. CPU was spending time in __copy_user_common and network
performance was lower. With the new solution iperf-measured speed
increased from 116Mb/s to 134Mb/s.

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agotipc: remove two indentation levels in tipc_recv_msg routine
Ying Xue [Wed, 30 Oct 2013 03:26:57 +0000 (11:26 +0800)]
tipc: remove two indentation levels in tipc_recv_msg routine

The message dispatching part of tipc_recv_msg() is wrapped in layers of
while/if/if/switch, causing out-of-control indentation that does not
look very good. We reduce two indentation levels by separating the
message dispatching from the blocks that checks link state and
sequence numbers, allowing longer function and arg names to be
consistently indented without wrapping. Additionally we also rename
"cont" label to "discard" and add one new label called "unlock_discard"
to make code clearer. In all, these are cosmetic changes that do not
alter the operation of TIPC in any way.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Cc: David Laight <david.laight@aculab.com>
Cc: Andreas Bofjäll <andreas.bofjall@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoarc_emac: drop redundant mac address check
Luka Perkov [Tue, 29 Oct 2013 23:11:00 +0000 (00:11 +0100)]
arc_emac: drop redundant mac address check

Checking if MAC address is valid using is_valid_ether_addr() is already done in
of_get_mac_address(). While at it, reorganize checking so it matches checks in
other drivers.

Signed-off-by: Luka Perkov <luka@openwrt.org>
CC: Alexey Brodkin <Alexey.Brodkin@synopsys.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agomvneta: drop redundant mac address check
Luka Perkov [Tue, 29 Oct 2013 23:10:01 +0000 (00:10 +0100)]
mvneta: drop redundant mac address check

Checking if MAC address is valid using is_valid_ether_addr() is already done in
of_get_mac_address().

Signed-off-by: Luka Perkov <luka@openwrt.org>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoocteon_mgmt: drop redundant mac address check
Luka Perkov [Tue, 29 Oct 2013 23:09:12 +0000 (00:09 +0100)]
octeon_mgmt: drop redundant mac address check

Checking if MAC address is valid using is_valid_ether_addr() is already done in
of_get_mac_address().

Signed-off-by: Luka Perkov <luka@openwrt.org>
Acked-by: David Daney <david.daney@cavium.com>
CC: David Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agotcp: temporarily disable Fast Open on SYN timeout
Yuchung Cheng [Tue, 29 Oct 2013 17:09:05 +0000 (10:09 -0700)]
tcp: temporarily disable Fast Open on SYN timeout

Fast Open currently has a fallback feature to address SYN-data being
dropped, but it requires the middle-box to pass on the regular SYN retry
after SYN-data. This is implemented in commit aab487435 ("net-tcp:
Fast Open client - detecting SYN-data drops").

However, some NAT boxes will drop all subsequent packets after the first
SYN-data, blackholing the entire connection.  An example is in
commit 356d7d8 ("netfilter: nf_conntrack: fix tcp_in_window for Fast
Open").

The sender should note such incidents and temporarily fall back to the
regular TCP handshake on subsequent attempts as well: after the second
SYN timeout the original Fast Open SYN is most likely lost. When such an
event recurs, Fast Open is disabled for a period that grows exponentially
with the number of recurrences.
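
A hedged sketch of the backoff idea (the identifiers below are made up for
illustration; they are not the names used by the actual patch):

    /* On each suspected blackhole event, disable Fast Open for an
     * exponentially growing period before trying SYN-data again.
     */
    if (ctx->blackhole_events < 7)
            ctx->blackhole_events++;
    ctx->fastopen_disabled_until = jiffies +
            (BASE_DISABLE_PERIOD << (ctx->blackhole_events - 1));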

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net...
David S. Miller [Tue, 29 Oct 2013 22:57:49 +0000 (18:57 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/jkirsher/net-next

Jeff Kirsher says:

====================
This series contains updates to vxlan, net, ixgbe, ixgbevf, and i40e.

Joseph provides a single patch against vxlan which removes the burden
from the NIC drivers to check if the vxlan driver is enabled in the
kernel and also makes available the vxlan headrooms to the drivers.

Jacob provides the majority of the patches, with patches against net, ixgbe
and ixgbevf.  His net patch adds a might_sleep() call to napi_disable so
that every use of napi_disable in atomic context will be visible.
Then Jacob provides a patch to fix the qv_lock_napi call in
ixgbe_napi_disable_all.  The other ixgbe patches clean up the
ixgbe_check_minimum_link function to correctly show that there is some
minor loss of encoding, even though we don't calculate it, and remove an
unnecessary duplication of the PCIe bandwidth display.  Lastly, Jacob
provides 4 patches against ixgbevf which add ixgbevf_rx_skb in line with
how ixgbe handles the variations on how packets can be received, and add
support for tracking how many packets were cleaned during busy poll
as part of the extended statistics.

Wei Yongjun provides a fix for i40e to return -ENOMEM in the memory
allocation error handling case instead of returning 0, as done
elsewhere in this function.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: mvmdio: doc: mvmdio now used by mv643xx_eth
Leigh Brown [Tue, 29 Oct 2013 09:33:34 +0000 (09:33 +0000)]
net: mvmdio: doc: mvmdio now used by mv643xx_eth

Amend the documentation in the mvmdio driver to note the fact
that it is now used by both the mvneta and mv643xx_eth drivers.

Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: mvmdio: slight optimisation of orion_mdio_write
Leigh Brown [Tue, 29 Oct 2013 09:33:33 +0000 (09:33 +0000)]
net: mvmdio: slight optimisation of orion_mdio_write

Make only a single call to mutex_unlock in orion_mdio_write.

Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: mvmdio: orion_mdio_ready: remove manual poll
Leigh Brown [Tue, 29 Oct 2013 09:33:32 +0000 (09:33 +0000)]
net: mvmdio: orion_mdio_ready: remove manual poll

Replace manual poll of MVMDIO_SMI_READ_VALID with a call to
orion_mdio_wait_ready.  This ensures a consistent timeout,
eliminates a busy loop, and allows for use of interrupts on
systems that support them.

Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: mvmdio: make orion_mdio_wait_ready consistent
Leigh Brown [Tue, 29 Oct 2013 09:33:31 +0000 (09:33 +0000)]
net: mvmdio: make orion_mdio_wait_ready consistent

Amend orion_mdio_wait_ready so that the same timeout is used when
polling or using wait_event_timeout.  Set the timeout to 1ms.

Replace udelay with usleep_range to avoid a busy loop, and set the
polling interval range as 45us to 55us, so that the first sleep
will be enough in almost all cases.

Generate the same log message at timeout when polling or using
wait_event_timeout.

Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet/benet: Make lancer_wait_ready() static
Gavin Shan [Tue, 29 Oct 2013 09:30:57 +0000 (17:30 +0800)]
net/benet: Make lancer_wait_ready() static

The function doesn't need to be public, so make it static.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet/benet: Remove interface type
Gavin Shan [Tue, 29 Oct 2013 09:30:56 +0000 (17:30 +0800)]
net/benet: Remove interface type

The interface type, which is tracked by "struct be_adapter::
if_type", isn't currently used, so we can remove it safely
according to Sathya's comments.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonetconsole: Convert to pr_<level>
Joe Perches [Mon, 28 Oct 2013 19:53:21 +0000 (12:53 -0700)]
netconsole: Convert to pr_<level>

Use a more current logging style.

Convert printks to pr_<level>.

Consolidate multiple printks into a single printk to avoid
any possible dmesg interleaving.  Add a default "event" msg
in case the listed types are ever expanded.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: sched: cls_bpf: add BPF-based classifier
Daniel Borkmann [Mon, 28 Oct 2013 15:43:02 +0000 (16:43 +0100)]
net: sched: cls_bpf: add BPF-based classifier

This work contains a lightweight BPF-based traffic classifier that can
serve as a flexible alternative to ematch-based tree classification, i.e.
now that BPF filter engine can also be JITed in the kernel. Naturally, tc
actions and policies are supported as well with cls_bpf. Multiple BPF
programs/filters can be attached to a class, or they can just as well be
written within a single BPF program; that's really up to the user how he
wishes to run/optimize the code, e.g. also for inversion of verdicts etc.
The notion of a BPF program's return/exit codes is being kept as follows:

     0: No match
    -1: Select classid given in "tc filter ..." command
  else: flowid, overwrite the default one

As a minimal usage example with iproute2, we use a 3 band prio root qdisc
on a router with an sfq leaf on each band, and assign ssh and icmp bpf-based
filters to band 1, http traffic to band 2 and the rest to band 3. For the
first two bands we load the bytecode from files; for the last one we load it
inline as an example:

echo 1 > /proc/sys/net/core/bpf_jit_enable

tc qdisc del dev em1 root
tc qdisc add dev em1 root handle 1: prio bands 3 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

tc qdisc add dev em1 parent 1:1 sfq perturb 16
tc qdisc add dev em1 parent 1:2 sfq perturb 16
tc qdisc add dev em1 parent 1:3 sfq perturb 16

tc filter add dev em1 parent 1: bpf run bytecode-file /etc/tc/ssh.bpf flowid 1:1
tc filter add dev em1 parent 1: bpf run bytecode-file /etc/tc/icmp.bpf flowid 1:1
tc filter add dev em1 parent 1: bpf run bytecode-file /etc/tc/http.bpf flowid 1:2
tc filter add dev em1 parent 1: bpf run bytecode "`bpfc -f tc -i misc.ops`" flowid 1:3

BPF programs can be easily created and passed to tc, either as inline
'bytecode' or 'bytecode-file'. There are a couple of front-ends that can
compile opcodes, for example:

1) People familiar with tcpdump-like filters:

   tcpdump -iem1 -ddd port 22 | tr '\n' ',' > /etc/tc/ssh.bpf

2) People that want to low-level program their filters or use BPF
   extensions that lack support by libpcap's compiler:

   bpfc -f tc -i ssh.ops > /etc/tc/ssh.bpf

   ssh.ops example code:
   ldh [12]
   jne #0x800, drop
   ldb [23]
   jneq #6, drop
   ldh [20]
   jset #0x1fff, drop
   ldxb 4 * ([14] & 0xf)
   ldh [%x + 14]
   jeq #0x16, pass
   ldh [%x + 16]
   jne #0x16, drop
   pass: ret #-1
   drop: ret #0

It was chosen to load bytecode into tc, since the reverse operation,
tc filter list dev em1, is then able to show the exact commands again.
Possible follow-up work could also include a small expression compiler
for iproute2. Tested with the help of bmon. This idea came up during
the Netfilter Workshop 2013 in Copenhagen. Also thanks to feedback from
Eric Dumazet!

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobgmac: separate RX descriptor setup code into a new function
Rafał Miłecki [Mon, 28 Oct 2013 13:40:29 +0000 (14:40 +0100)]
bgmac: separate RX descriptor setup code into a new function

This cleans code a bit and will be useful when allocating buffers in
other places (like RX path, to avoid skb_copy_from_linear_data_offset).

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoi40e: fix error return code in i40e_probe()
Wei Yongjun [Tue, 24 Sep 2013 05:17:25 +0000 (05:17 +0000)]
i40e: fix error return code in i40e_probe()

Fix to return -ENOMEM in the memory alloc error handling
case instead of 0, as done elsewhere in this function.
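
The typical shape of this class of fix (illustrative; the variable and label
names are not the exact ones from i40e_probe()):

    vsi = kzalloc(size, GFP_KERNEL);
    if (!vsi) {
            err = -ENOMEM;          /* previously fell through with err == 0 */
            goto err_out;
    }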

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: Add zero_base handler to network statistics
Don Skidmore [Wed, 25 Sep 2013 08:03:09 +0000 (08:03 +0000)]
ixgbevf: Add zero_base handler to network statistics

This patch removes the need to keep a zero_base variable in the adapter
structure. Now we just use two different macros to set the non-zero and
zero base. This adds to readability and shortens some of the structure
initialization to fit under 80 columns. The gathering of stats for ethtool was
slightly modified, again to better fit into 80 columns and become a bit
more readable.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: add BP_EXTENDED_STATS for CONFIG_NET_RX_BUSY_POLL
Jacob Keller [Sat, 21 Sep 2013 06:24:25 +0000 (06:24 +0000)]
ixgbevf: add BP_EXTENDED_STATS for CONFIG_NET_RX_BUSY_POLL

This patch adds the extended statistics similar to the ixgbe driver. These
statistics keep track of how often the busy polling yields, as well as how many
packets are cleaned or missed by the polling routine.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: implement CONFIG_NET_RX_BUSY_POLL
Jacob Keller [Sat, 21 Sep 2013 06:24:20 +0000 (06:24 +0000)]
ixgbevf: implement CONFIG_NET_RX_BUSY_POLL

This patch enables CONFIG_NET_RX_BUSY_POLL support in the VF code. This enables
sockets which have enabled the SO_BUSY_POLL socket option to use the
ndo_busy_poll_recv operation which could result in lower latency, at the cost
of higher CPU utilization, and increased power usage. This support is similar
to how the ixgbe driver works.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: have clean_rx_irq return total_rx_packets cleaned
Jacob Keller [Sat, 21 Sep 2013 06:24:14 +0000 (06:24 +0000)]
ixgbevf: have clean_rx_irq return total_rx_packets cleaned

Rather than return true/false indicating whether there was budget left, return
the total packets cleaned. This currently has no use, but will be used in a
following patch which enables CONFIG_NET_RX_BUSY_POLL support in order to track
how many packets were cleaned during the busy poll as part of the extended
statistics.
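
A simplified view of the interface change (signatures approximate the driver
style, not the exact diff):

    /* before: reported whether the budget was exhausted */
    static bool ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
                                     struct ixgbevf_ring *rx_ring, int budget);

    /* after: returns how many packets were cleaned; callers compare the
     * result against budget themselves, and busy poll can account it. */
    static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
                                    struct ixgbevf_ring *rx_ring, int budget);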

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: add ixgbevf_rx_skb
Jacob Keller [Sat, 21 Sep 2013 06:24:09 +0000 (06:24 +0000)]
ixgbevf: add ixgbevf_rx_skb

This patch adds ixgbevf_rx_skb in line with how ixgbe handles the variations on
how packets can be received. It will be extended in a following patch for
CONFIG_NET_RX_BUSY_POLL support.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: remove unnecessary duplication of PCIe bandwidth display
Jacob Keller [Fri, 18 Oct 2013 05:09:24 +0000 (05:09 +0000)]
ixgbe: remove unnecessary duplication of PCIe bandwidth display

This patch removes the unnecessary display of PCIe bandwidth twice. Since
ixgbe_check_minimum_link does a better job and ensures accurate detection
even on complex chains, this older check is no longer necessary.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: show <2% for encoding loss on PCIe Gen3
Jacob Keller [Fri, 18 Oct 2013 05:09:19 +0000 (05:09 +0000)]
ixgbe: show <2% for encoding loss on PCIe Gen3

This patch updates the ixgbe_check_minimum_link function to correctly show that
there is some minor loss of encoding, even though we don't calculate it in the
max GT/s equation. It is small enough not to bother with, but it is better to
report it than not.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: fix qv_lock_napi call in ixgbe_napi_disable_all
Jacob Keller [Sat, 21 Sep 2013 05:05:44 +0000 (05:05 +0000)]
ixgbe: fix qv_lock_napi call in ixgbe_napi_disable_all

ixgbe_napi_disable_all calls napi_disable on each queue, however the busy
polling code introduced a local_bh_disable()d context around the napi_disable.
The original author did not realize that napi_disable might sleep, which would
cause a sleep while atomic BUG. In addition, on a single processor system, the
ixgbe_qv_lock_napi loop shouldn't have to mdelay. This patch adds an
ixgbe_qv_disable along with a new IXGBE_QV_STATE_DISABLED bit, which it uses to
indicate to the poll and napi routines that the q_vector has been disabled. Now
the ixgbe_napi_disable_all function will wait until all pending work has been
finished and prevent any future work from being started.
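
A hedged sketch of what such a disable helper looks like (simplified; based
on the description above rather than the exact hunk):

    static inline bool ixgbe_qv_disable(struct ixgbe_q_vector *q_vector)
    {
            bool rc = true;

            spin_lock_bh(&q_vector->lock);
            if (q_vector->state & IXGBE_QV_OWNED)
                    rc = false;     /* napi or busy poll still owns the vector */
            q_vector->state |= IXGBE_QV_STATE_DISABLED;
            spin_unlock_bh(&q_vector->lock);

            return rc;
    }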

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Alexander Duyck <alexander.duyck@intel.com>
Cc: Hyong-Youb Kim <hykim@myri.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Dmitry Kravkov <dmitry@broadcom.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agonet: add might_sleep() call to napi_disable
Jacob Keller [Sat, 21 Sep 2013 05:05:39 +0000 (05:05 +0000)]
net: add might_sleep() call to napi_disable

napi_disable uses an msleep() call to wait for outstanding napi work to be
finished after setting the disable bit. It does not always sleep in case there
was no outstanding work. This resulted in a rare bug in the ixgbe_down operation
where a napi_disable call took place inside of a local_bh_disable()d context.
In order to enable easier detection of future sleep while atomic BUGs, this
patch adds a might_sleep() call, so that every use of napi_disable during
atomic context will be visible.
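
A simplified sketch of the change (close to, but not necessarily identical to,
the upstream hunk):

    void napi_disable(struct napi_struct *n)
    {
            might_sleep();          /* new: flags atomic-context callers even
                                     * when the msleep() below is never reached */
            set_bit(NAPI_STATE_DISABLE, &n->state);
            while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
                    msleep(1);
            clear_bit(NAPI_STATE_DISABLE, &n->state);
    }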

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Alexander Duyck <alexander.duyck@intel.com>
Cc: Hyong-Youb Kim <hykim@myri.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Dmitry Kravkov <dmitry@broadcom.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agovxlan: Have the NIC drivers do less work for offloads
Joseph Gasparakis [Thu, 24 Oct 2013 06:27:10 +0000 (06:27 +0000)]
vxlan: Have the NIC drivers do less work for offloads

This patch removes the burden from the NIC drivers to check if the
vxlan driver is enabled in the kernel and also makes available
the vxlan headrooms to them.

Signed-off-by: Joseph Gasparakis <joseph.gasparakis@intel.com>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agonet, mc: fix the incorrect comments in two mc-related functions
Zhi Yong Wu [Mon, 28 Oct 2013 08:15:50 +0000 (16:15 +0800)]
net, mc: fix the incorrect comments in two mc-related functions

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet, iovec: fix the incorrect comment in memcpy_fromiovecend()
Zhi Yong Wu [Mon, 28 Oct 2013 06:01:50 +0000 (14:01 +0800)]
net, iovec: fix the incorrect comment in memcpy_fromiovecend()

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet, datagram: fix the incorrect comment in zerocopy_sg_from_iovec()
Zhi Yong Wu [Mon, 28 Oct 2013 06:01:49 +0000 (14:01 +0800)]
net, datagram: fix the incorrect comment in zerocopy_sg_from_iovec()

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agovxlan: silence one build warning
Zhi Yong Wu [Mon, 28 Oct 2013 06:01:48 +0000 (14:01 +0800)]
vxlan: silence one build warning

drivers/net/vxlan.c: In function ‘vxlan_sock_add’:
drivers/net/vxlan.c:2298:11: warning: ‘sock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
drivers/net/vxlan.c:2275:17: note: ‘sock’ was declared here
  LD      drivers/net/built-in.o

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoipv4: fix DO and PROBE pmtu mode regarding local fragmentation with UFO/CORK
Hannes Frederic Sowa [Sun, 27 Oct 2013 16:29:11 +0000 (17:29 +0100)]
ipv4: fix DO and PROBE pmtu mode regarding local fragmentation with UFO/CORK

UFO as well as UDP_CORK do not respect IP_PMTUDISC_DO and
IP_PMTUDISC_PROBE well enough.

UFO enabled packet delivery just appends all frags to the cork and hands
it over to the network card. So we just deliver non-DF udp fragments
(DF-flag may get overwritten by hardware or virtual UFO enabled
interface).

UDP_CORK does enqueue the data until the cork is disengaged. At this
point it sets the correct IP_DF and local_df flags and hands it over to
ip_fragment which in this case will generate an icmp error which gets
appended to the error socket queue. This is not reflected in the syscall
error (of course, if UFO is enabled this also won't happen).

Improve this by checking the pmtudisc flags before appending data to the
socket: when IP_PMTUDISC_DO or IP_PMTUDISC_PROBE is set, proceed only if
all the data still fits into a single packet.

We use (mtu-fragheaderlen) to check for the maximum length because we
ensure not to generate a fragment and non-fragmented data does not need
to have its length aligned on 64 bit boundaries. Also the passed in
ip_options are already aligned correctly.

Maybe, we can relax some other checks around ip_fragment. This needs
more research.

Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agovirtio_net: migrate mergeable rx buffers to page frag allocators
Michael Dalton [Mon, 28 Oct 2013 22:44:18 +0000 (15:44 -0700)]
virtio_net: migrate mergeable rx buffers to page frag allocators

The virtio_net driver's mergeable receive buffer allocator
uses 4KB packet buffers. For MTU-sized traffic, SKB truesize
is > 4KB but only ~1500 bytes of the buffer is used to store
packet data, reducing the effective TCP window size
substantially. This patch addresses the performance concerns
with mergeable receive buffers by allocating MTU-sized packet
buffers using page frag allocators. If more than MAX_SKB_FRAGS
buffers are needed, the SKB frag_list is used.

Signed-off-by: Michael Dalton <mwdalton@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoipv6: Remove privacy config option.
David S. Miller [Tue, 29 Oct 2013 00:07:50 +0000 (20:07 -0400)]
ipv6: Remove privacy config option.

The code for privacy extensions is very mature, and making it
configurable only gives marginal memory/code savings in exchange
for obfuscation and hard-to-read code via CPP ifdef'ery.

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch '6lowpan'
David S. Miller [Mon, 28 Oct 2013 23:48:29 +0000 (19:48 -0400)]
Merge branch '6lowpan'

Alexander Aring says:

====================
6lowpan: trivial changes

This patch series includes some trivial changes to prepare the 6lowpan stack
for upcoming patch series which mainly fix fragmentation according to RFC 4944
and udp handling (which is currently broken).

Changes since v3:
  - really fix indentation in patch 3/5

Changes since v2:
  - change indentation in patch 3/5
  - fix typo in 5/5 unecessary -> unnecessary
  - add missing 6lowpan tag in cover-letter
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years ago6lowpan: remove unnecessary break
Alexander Aring [Mon, 28 Oct 2013 09:24:20 +0000 (10:24 +0100)]
6lowpan: remove unnecessary break

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years ago6lowpan: remove skb->dev assignment
Alexander Aring [Mon, 28 Oct 2013 09:24:19 +0000 (10:24 +0100)]
6lowpan: remove skb->dev assignment

This patch removes the assignment of skb->dev. We don't need it here because
we use the netdev_alloc_skb_ip_align function which already sets the
skb->dev.

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years ago6lowpan: use netdev_alloc_skb instead dev_alloc_skb
Alexander Aring [Mon, 28 Oct 2013 09:24:18 +0000 (10:24 +0100)]
6lowpan: use netdev_alloc_skb instead dev_alloc_skb

This patch uses the netdev_alloc_skb function instead of dev_alloc_skb and
drops the separate assignment to skb->dev.
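
An illustrative before/after (the length argument is made up, not the exact
hunk):

    /* before */
    skb = dev_alloc_skb(len);
    skb->dev = dev;

    /* after: netdev_alloc_skb() records the device itself */
    skb = netdev_alloc_skb(dev, len);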

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years ago6lowpan: remove unnecessary check on err >= 0
Alexander Aring [Mon, 28 Oct 2013 09:24:17 +0000 (10:24 +0100)]
6lowpan: remove unnecessary check on err >= 0

The err variable can only be zero in this case.

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years ago6lowpan: remove unnecessary ret variable
Alexander Aring [Mon, 28 Oct 2013 09:24:16 +0000 (10:24 +0100)]
6lowpan: remove unnecessary ret variable

Signed-off-by: Alexander Aring <alex.aring@gmail.com>
Reviewed-by: Werner Almesberger <werner@almesberger.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agosctp: merge two if statements to one
wangweidong [Sat, 26 Oct 2013 08:06:32 +0000 (16:06 +0800)]
sctp: merge two if statements to one

Two if statements do the same work, so we can merge them into
one, and fix some typos. This is just code simplification;
there are no functional changes.

Signed-off-by: Wang Weidong <wangweidong1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agosctp: remove the repeat initialize with 0
wangweidong [Sat, 26 Oct 2013 08:06:31 +0000 (16:06 +0800)]
sctp: remove the repeat initialize with 0

kmem_cache_zalloc already sets the allocated memory to zero, so there is no
need to initialize it with 0. Also move the comments to the beginning of the
function.

Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: Wang Weidong <wangweidong1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agosctp: fix some comments in chunk.c and associola.c
wangweidong [Sat, 26 Oct 2013 08:06:30 +0000 (16:06 +0800)]
sctp: fix some comments in chunk.c and associola.c

fix some typos

Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: Wang Weidong <wangweidong1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoveth: extend features to support tunneling
Eric Dumazet [Sat, 26 Oct 2013 01:25:03 +0000 (18:25 -0700)]
veth: extend features to support tunneling

While investigating a recent vxlan regression, I found that veth
was using a zero features set for vxlan tunnels.

We have to segment GSO frames, copy the payload, and do the checksum.

This patch brings a ~200% performance increase

We probably have to add hw_enc_features support
on other virtual devices.
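
A hedged sketch of the kind of change described (the feature mask name is
illustrative):

    /* Advertise the same offloads for encapsulated (e.g. vxlan) traffic,
     * so GSO frames are not segmented and checksummed in software on the
     * veth path.
     */
    dev->hw_enc_features = VETH_FEATURES;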

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoinet: restore gso for vxlan
Eric Dumazet [Mon, 28 Oct 2013 01:18:16 +0000 (18:18 -0700)]
inet: restore gso for vxlan

Alexei reported a performance regression on vxlan, caused
by commit 3347c9602955 "ipv4: gso: make inet_gso_segment() stackable"

GSO vxlan packets were not properly segmented, adding IP fragments
while they were not expected.

Rename 'bool tunnel' to 'bool encap', and add a new boolean
to express the fact that UDP should be fragmented.
This fragmentation is triggered by skb->encapsulation being set.

Remove a "skb->encapsulation = 1" added in above commit,
as its not needed, as frags inherit skb->frag from original
GSO skb.

Reported-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoRevert "Merge branch 'bonding_monitor_locking'"
David S. Miller [Mon, 28 Oct 2013 04:11:22 +0000 (00:11 -0400)]
Revert "Merge branch 'bonding_monitor_locking'"

This reverts commit 4d961a101e032b4bf223b279b4b35bc77576f5a8, reversing
changes made to a00f6fcc7d0c62a91768d9c4ccba4c7d64fbbce3.

Revert bond locking changes, they cause regressions and Veaceslav Falico
doesn't like how the commit messages were done at all.

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobe2net: add support for ndo_busy_poll
Sathya Perla [Fri, 25 Oct 2013 05:10:16 +0000 (10:40 +0530)]
be2net: add support for ndo_busy_poll

Includes:
- ndo_busy_poll implementation
- Locking between napi and busy_poll
- Fix rx_post_starvation (replenish rx-queues in out-of-memory scenario)
  logic to accommodate busy_poll.

v2 changes:
[Eric D.'s comment] call alloc_pages() with GFP_ATOMIC even in ndo_busy_poll
context as it is not allowed to sleep.

Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch 'bonding_monitor_locking'
David S. Miller [Sun, 27 Oct 2013 20:36:39 +0000 (16:36 -0400)]
Merge branch 'bonding_monitor_locking'

Ding Tianhong says:

====================
bonding: patchset for rcu use in bonding

Slaves are added to and removed from the slave list by
bond_master_upper_dev_link() and bond_upper_dev_unlink(), which call
call_netdevice_notifiers(). Even though it is safe to call that under the
bond write lock today, we cannot be sure it will stay safe, because other
drivers may handle NETDEV_CHANGEUPPER in a way that sleeps, so I did not
move bond_upper_dev_unlink() under the bond write lock.

Right now bond_for_each_slave() is protected only by rtnl_lock(). Using
bond_for_each_slave_rcu() might be a good way to protect the slave list
for the bond, but since these monitors are slow paths there is no need
to convert bond_for_each_slave() to bond_for_each_slave_rcu() there. So
in this patchset I remove the unused bond read lock from the monitor
functions; maybe that is the better way, and I will wait for any replies
about it.

Thanks to Veaceslav Falico for his opinion.

v2: add and modify the commit messages for the patchset and patches; this
will be the first step for the whole patchset.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobonding: remove bond read lock for bond_3ad_state_machine_handler()
dingtianhong [Thu, 24 Oct 2013 03:09:31 +0000 (11:09 +0800)]
bonding: remove bond read lock for bond_3ad_state_machine_handler()

The bond slave list may change while the monitor is running. The slave list is
no longer protected by bond->lock, only by rtnl_lock(), so we have three ways to
deal with it:
1. add bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock,
but it is unsafe to call call_netdevice_notifiers() under a write lock.
2. remove the unused bond->lock from the monitor functions and use only the
existing rtnl_lock().
3. use rcu_read_lock() to protect it; of course this would convert
bond_for_each_slave to bond_for_each_slave_rcu() and perform better, but in a
slow path that gain can be ignored.
So I remove bond->lock and move the rtnl lock to protect the whole monitor function.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobonding: remove bond read lock for bond_activebackup_arp_mon()
dingtianhong [Thu, 24 Oct 2013 03:09:25 +0000 (11:09 +0800)]
bonding: remove bond read lock for bond_activebackup_arp_mon()

The bond slave list may change while the monitor is running. The slave list is
no longer protected by bond->lock, only by rtnl_lock(), so we have three ways to
deal with it:
1. add bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock,
but it is unsafe to call call_netdevice_notifiers() under a write lock.
2. remove the unused bond->lock from the monitor functions and use only the
existing rtnl_lock().
3. use rcu_read_lock() to protect it; of course this would convert
bond_for_each_slave to bond_for_each_slave_rcu() and perform better, but in a
slow path that gain can be ignored.
So I remove bond->lock and move the rtnl lock to protect the whole monitor function.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobonding: remove bond read lock for bond_loadbalance_arp_mon()
dingtianhong [Thu, 24 Oct 2013 03:09:17 +0000 (11:09 +0800)]
bonding: remove bond read lock for bond_loadbalance_arp_mon()

The bond slave list may change while the monitor is running. The slave list is
no longer protected by bond->lock, only by rtnl_lock(), so we have three ways to
deal with it:
1. add bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock,
but it is unsafe to call call_netdevice_notifiers() under a write lock.
2. remove the unused bond->lock from the monitor functions and use only the
existing rtnl_lock().
3. use rcu_read_lock() to protect it; of course this would convert
bond_for_each_slave to bond_for_each_slave_rcu() and perform better, but in a
slow path that gain can be ignored.
So I remove bond->lock and add the rtnl lock to protect the whole monitor function.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobonding: remove bond read lock for bond_alb_monitor()
dingtianhong [Thu, 24 Oct 2013 03:09:12 +0000 (11:09 +0800)]
bonding: remove bond read lock for bond_alb_monitor()

The bond slave list may change while the monitor is running. The slave list is
no longer protected by bond->lock, only by rtnl_lock(), so we have three ways to
deal with it:
1. add bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock,
but it is unsafe to call call_netdevice_notifiers() under a write lock.
2. remove the unused bond->lock from the monitor functions and use only the
existing rtnl_lock().
3. use rcu_read_lock() to protect it; of course this would convert
bond_for_each_slave to bond_for_each_slave_rcu() and perform better, but in a
slow path that gain can be ignored.
So I remove bond->lock and move the rtnl lock to protect the whole monitor function.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobonding: remove bond read lock for bond_mii_monitor()
dingtianhong [Thu, 24 Oct 2013 03:09:03 +0000 (11:09 +0800)]
bonding: remove bond read lock for bond_mii_monitor()

The bond slave list may change while the monitor is running. The slave list is
no longer protected by bond->lock, only by rtnl_lock(), so we have three ways to
deal with it:
1. add bond_master_upper_dev_link() and bond_upper_dev_unlink() under bond->lock,
but it is unsafe to call call_netdevice_notifiers() under a write lock.
2. remove the unused bond->lock from the monitor functions and use only the
existing rtnl_lock().
3. use rcu_read_lock() to protect it; of course this would convert
bond_for_each_slave to bond_for_each_slave_rcu() and perform better, but in a
slow path that gain can be ignored.
So I remove bond->lock and move the rtnl lock to protect the whole monitor function.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net...
David S. Miller [Sat, 26 Oct 2013 04:28:35 +0000 (00:28 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/jkirsher/net-next

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates

This series contains updates to igb, igbvf, i40e, ixgbe and ixgbevf.

Dan Carpenter provides a patch for igbvf to fix a bug found by a static
checker.  If the new MTU is very large, then "new_mtu + ETH_HLEN +
ETH_FCS_LEN" can wrap and the check on the next line can underflow.

Wei Yongjun provides 2 patches, the first against igbvf adds a missing
iounmap() before the return from igbvf_probe().  The second, against
i40e, removes the <linux/version.h> include because it is not needed.

Carolyn provides a patch for igb to fix a call to set the master/slave
mode for all m88 generation 2 PHY's and removes the call for I210
devices which do not need it.

Stefan Assmann provides a patch for igb to fix an issue which was broken
by:
   commit fa44f2f185f7f9da19d331929bb1b56c1ccd1d93
   Author: Greg Rose <gregory.v.rose@intel.com>
   Date:   Thu Jan 17 01:03:06 2013 -0800
   igb: Enable SR-IOV configuration via PCI sysfs interface
which breaks the reloading of igb when VFs are assigned to a guest, in
several ways.

Jacob provides a patch each for ixgbe and ixgbevf.  The first, against ixgbe,
cleans up ixgbe_enumerate_functions to reduce code complexity.  The
second, against ixgbevf, adds support for ethtool's get_coalesce and
set_coalesce commands for the ixgbevf driver.

Yijing Wang provides a patch for ixgbe to use pcie_capability_read_word()
to simplify the code.

Emil provides an ixgbe patch to fix an issue where the logic used to
detect changes in rx-usecs was incorrect and was masked by the call to
ixgbe_update_rsc().

Don provides 2 patches for ixgbevf.  The first creates a new function to set
PSRTYPE.  The second bumps the ixgbevf driver version.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: fix rtnl notification in atomic context
Alexei Starovoitov [Wed, 23 Oct 2013 23:02:42 +0000 (16:02 -0700)]
net: fix rtnl notification in atomic context

commit 991fb3f74c "dev: always advertise rx_flags changes via netlink"
introduced rtnl notification from __dev_set_promiscuity(),
which can be called in atomic context.

Steps to reproduce:
ip tuntap add dev tap1 mode tap
ifconfig tap1 up
tcpdump -nei tap1 &
ip tuntap del dev tap1 mode tap

[  271.627994] device tap1 left promiscuous mode
[  271.639897] BUG: sleeping function called from invalid context at mm/slub.c:940
[  271.664491] in_atomic(): 1, irqs_disabled(): 0, pid: 3394, name: ip
[  271.677525] INFO: lockdep is turned off.
[  271.690503] CPU: 0 PID: 3394 Comm: ip Tainted: G        W    3.12.0-rc3+ #73
[  271.703996] Hardware name: System manufacturer System Product Name/P8Z77 WS, BIOS 3007 07/26/2012
[  271.731254]  ffffffff81a58506 ffff8807f0d57a58 ffffffff817544e5 ffff88082fa0f428
[  271.760261]  ffff8808071f5f40 ffff8807f0d57a88 ffffffff8108bad1 ffffffff81110ff8
[  271.790683]  0000000000000010 00000000000000d0 00000000000000d0 ffff8807f0d57af8
[  271.822332] Call Trace:
[  271.838234]  [<ffffffff817544e5>] dump_stack+0x55/0x76
[  271.854446]  [<ffffffff8108bad1>] __might_sleep+0x181/0x240
[  271.870836]  [<ffffffff81110ff8>] ? rcu_irq_exit+0x68/0xb0
[  271.887076]  [<ffffffff811a80be>] kmem_cache_alloc_node+0x4e/0x2a0
[  271.903368]  [<ffffffff810b4ddc>] ? vprintk_emit+0x1dc/0x5a0
[  271.919716]  [<ffffffff81614d67>] ? __alloc_skb+0x57/0x2a0
[  271.936088]  [<ffffffff810b4de0>] ? vprintk_emit+0x1e0/0x5a0
[  271.952504]  [<ffffffff81614d67>] __alloc_skb+0x57/0x2a0
[  271.968902]  [<ffffffff8163a0b2>] rtmsg_ifinfo+0x52/0x100
[  271.985302]  [<ffffffff8162ac6d>] __dev_notify_flags+0xad/0xc0
[  272.001642]  [<ffffffff8162ad0c>] __dev_set_promiscuity+0x8c/0x1c0
[  272.017917]  [<ffffffff81731ea5>] ? packet_notifier+0x5/0x380
[  272.033961]  [<ffffffff8162b109>] dev_set_promiscuity+0x29/0x50
[  272.049855]  [<ffffffff8172e937>] packet_dev_mc+0x87/0xc0
[  272.065494]  [<ffffffff81732052>] packet_notifier+0x1b2/0x380
[  272.080915]  [<ffffffff81731ea5>] ? packet_notifier+0x5/0x380
[  272.096009]  [<ffffffff81761c66>] notifier_call_chain+0x66/0x150
[  272.110803]  [<ffffffff8108503e>] __raw_notifier_call_chain+0xe/0x10
[  272.125468]  [<ffffffff81085056>] raw_notifier_call_chain+0x16/0x20
[  272.139984]  [<ffffffff81620190>] call_netdevice_notifiers_info+0x40/0x70
[  272.154523]  [<ffffffff816201d6>] call_netdevice_notifiers+0x16/0x20
[  272.168552]  [<ffffffff816224c5>] rollback_registered_many+0x145/0x240
[  272.182263]  [<ffffffff81622641>] rollback_registered+0x31/0x40
[  272.195369]  [<ffffffff816229c8>] unregister_netdevice_queue+0x58/0x90
[  272.208230]  [<ffffffff81547ca0>] __tun_detach+0x140/0x340
[  272.220686]  [<ffffffff81547ed6>] tun_chr_close+0x36/0x60

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: initialize hashrnd in flow_dissector with net_get_random_once
Hannes Frederic Sowa [Wed, 23 Oct 2013 18:06:00 +0000 (20:06 +0200)]
net: initialize hashrnd in flow_dissector with net_get_random_once

We can also defer the initialization of hashrnd in flow_dissector
to its first use. Since net_get_random_once is irq safe now we don't
have to audit the call paths if one of these functions gets called by an
interrupt handler.

Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: make net_get_random_once irq safe
Hannes Frederic Sowa [Wed, 23 Oct 2013 18:05:27 +0000 (20:05 +0200)]
net: make net_get_random_once irq safe

I initially built a non-irq-safe version of net_get_random_once because I
would have liked the freedom to defer even the extraction process of
get_random_bytes until the nonblocking pool is fully seeded.

I don't think this is a good idea anymore, and thus this patch makes
net_get_random_once irq safe. Now someone using net_get_random_once does
not need to care where it is called from.

Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: add missing dev_put() in __netdev_adjacent_dev_insert
Nikolay Aleksandrov [Wed, 23 Oct 2013 13:28:56 +0000 (15:28 +0200)]
net: add missing dev_put() in __netdev_adjacent_dev_insert

I think that a dev_put() is needed in the error path to preserve the
proper dev refcount.

CC: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
Acked-by: Veaceslav Falico <vfalico@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonetem: markov loss model transition fix
Hagen Paul Pfeifer [Tue, 22 Oct 2013 21:27:06 +0000 (23:27 +0200)]
netem: markov loss model transition fix

The transition from markov state "3 => lost packets within a burst
period" to "1 => successfully transmitted packets within a gap period"
has no *additional* loss event. The loss already happened for the
transition from 1 -> 3; this additional loss would make things go wild.

E.g. transition probabilities:

p13:   10%
p31:  100%

Expected:

Ploss = p13 / (p13 + p31)
Ploss = ~9.09%

... but it isn't. Even worse: we get a double loss - each time.
So simply don't return true to indicate a loss; rather break and return
false.
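
A simplified sketch of the fix (the state handling is abstracted; names and
fields are illustrative of the 4-state model, not the exact code):

    case 3:                         /* lost packets within a burst period */
            if (rnd < clg->a1)
                    clg->state = 1; /* back to the gap period */
            break;                  /* was: return true here, i.e. a second,
                                     * spurious loss for the same event */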

Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Stefano Salsano <stefano.salsano@uniroma2.it>
Cc: Fabio Ludovici <fabio.ludovici@yahoo.it>
Signed-off-by: Hagen Paul Pfeifer <hagen@jauu.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoixgbevf: bump driver version
Don Skidmore [Sat, 21 Sep 2013 05:21:18 +0000 (05:21 +0000)]
ixgbevf: bump driver version

Bump the driver version to reflect which version of the out-of-tree driver
it has equivalent functionality with (2.11.3).

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: implement ethtool get/set coalesce
Jacob Keller [Tue, 22 Oct 2013 06:19:18 +0000 (06:19 +0000)]
ixgbevf: implement ethtool get/set coalesce

This patch adds support for ethtool's get_coalesce and set_coalesce commands
for the ixgbevf driver. This enables dynamically updating the minimum time
between interrupts.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbevf: Adds function to set PSRTYPE register
Don Skidmore [Sat, 21 Sep 2013 01:57:33 +0000 (01:57 +0000)]
ixgbevf: Adds function to set PSRTYPE register

This patch creates a new function to set PSRTYPE. This function helps lay
the groundwork for eventual multi-queue support.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: fix rx-usecs range checks for BQL
Emil Tantilov [Tue, 22 Oct 2013 08:21:04 +0000 (08:21 +0000)]
ixgbe: fix rx-usecs range checks for BQL

This patch resolves an issue where the logic used to detect changes in rx-usecs
was incorrect and was masked by the call to ixgbe_update_rsc().

Setting rx-usecs between 0,2-9 and 1,10 and up requires a reset to allow
ixgbe_configure_tx_ring() to set the correct value for TXDCTL.WTHRESH in
order to avoid Tx hangs with BQL enabled.

Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: use pcie_capability_read_word() to simplify code
Yijing Wang [Wed, 4 Sep 2013 17:30:08 +0000 (17:30 +0000)]
ixgbe: use pcie_capability_read_word() to simplify code

use pcie_capability_read_word() to simplify code.

Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoixgbe: cleanup ixgbe_enumerate_functions
Jacob Keller [Sat, 31 Aug 2013 02:45:38 +0000 (02:45 +0000)]
ixgbe: cleanup ixgbe_enumerate_functions

This function previously had the same check as used by
ixgbe_pcie_from_parent. As the hardcoding is due to the device having an
internal switch, this function should simply use the call from
ixgbe_pcie_from_parent. This reduces code complexity and makes it less likely
that a developer will forget to update the list in the future.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoigb: fix driver reload with VF assigned to guest
Stefan Assmann [Tue, 24 Sep 2013 05:18:39 +0000 (05:18 +0000)]
igb: fix driver reload with VF assigned to guest

commit fa44f2f185f7f9da19d331929bb1b56c1ccd1d93 broke reloading of igb, when
VFs are assigned to a guest, in several ways.
1. on module load adapter->vf_data does not get properly allocated,
resulting in a null pointer exception when accessing adapter->vf_data in
igb_reset() on module reload.
 modprobe -r igb ; modprobe igb max_vfs=7
[  215.215837] igb 0000:01:00.1: removed PHC on eth1
[  216.932072] igb 0000:01:00.1: IOV Disabled
[  216.937038] igb 0000:01:00.0: removed PHC on eth0
[  217.127032] igb 0000:01:00.0: Cannot deallocate SR-IOV virtual functions while they are assigned - VFs will not be deallocated
[  217.146178] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
[  217.154050] igb: Copyright (c) 2007-2013 Intel Corporation.
[  217.160688] igb 0000:01:00.0: Enabling SR-IOV VFs using the module parameter is deprecated - please use the pci sysfs interface.
[  217.173703] igb 0000:01:00.0: irq 103 for MSI/MSI-X
[  217.179227] igb 0000:01:00.0: irq 104 for MSI/MSI-X
[  217.184735] igb 0000:01:00.0: irq 105 for MSI/MSI-X
[  217.220082] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048
[  217.228846] IP: [<ffffffffa007c5e5>] igb_reset+0xc5/0x4b0 [igb]
[  217.235472] PGD 3607ec067 PUD 36170b067 PMD 0
[  217.240461] Oops: 0002 [#1] SMP
[  217.244085] Modules linked in: igb(+) igbvf mptsas mptscsih mptbase scsi_transport_sas [last unloaded: igb]
[  217.255040] CPU: 4 PID: 4833 Comm: modprobe Not tainted 3.11.0+ #46
[...]
[  217.390007]  [<ffffffffa007fab2>] igb_probe+0x892/0xfd0 [igb]
[  217.396422]  [<ffffffff81470b3e>] local_pci_probe+0x1e/0x40
[  217.402641]  [<ffffffff81472029>] pci_device_probe+0xf9/0x110
[...]
2. A follow up issue, pci_enable_sriov() should only be called if no VFs were
still allocated on module unload. Otherwise pci_enable_sriov() gets called
multiple times in a row rendering the NIC unusable until reset.
3. simply calling igb_enable_sriov() in igb_probe_vfs() is not enough as the
interrupts need to be re-setup. Switching that to igb_pci_enable_sriov().

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoigb: Fix master/slave mode for all m88 i354 PHY's
Carolyn Wyborny [Fri, 16 Aug 2013 00:39:10 +0000 (00:39 +0000)]
igb: Fix master/slave mode for all m88 i354 PHY's

This patch calls code to set the master/slave mode for all m88 gen 2
PHY's. This patch also removes the call to this function for I210 devices
only from the function that is not called by I210 devices.

Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@gmail.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoi40e: remove unused including <linux/version.h>
Wei Yongjun [Tue, 24 Sep 2013 05:17:31 +0000 (05:17 +0000)]
i40e: remove unused including <linux/version.h>

Remove the inclusion of <linux/version.h>, which is not needed.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoigbvf: add missing iounmap() on error in igbvf_probe()
Wei Yongjun [Tue, 24 Sep 2013 05:18:45 +0000 (05:18 +0000)]
igbvf: add missing iounmap() on error in igbvf_probe()

Add the missing iounmap() before returning from igbvf_probe()
in the error handling case.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoigbvf: integer wrapping bug setting the mtu
Dan Carpenter [Fri, 13 Sep 2013 20:44:20 +0000 (20:44 +0000)]
igbvf: integer wrapping bug setting the mtu

If new_mtu is very large then "new_mtu + ETH_HLEN + ETH_FCS_LEN" can
wrap and the check on the next line can underflow. This is one of those
bugs which can be triggered by the user if you have namespaces
configured.

Also, since this is something the user can trigger, we don't want to
have a dev_err() message.

This is a static checker fix and I'm not sure what the impact is.
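
One way to picture the problem (illustrative, not the driver's exact code):

    int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;

    /* For a huge user-supplied new_mtu the addition above wraps, max_frame
     * ends up small or negative, and a "max_frame > MAX_JUMBO_FRAME_SIZE"
     * style check passes. Validating new_mtu against sane bounds before
     * doing the arithmetic avoids the wrap.
     */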

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
11 years agoMerge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge
David S. Miller [Wed, 23 Oct 2013 21:12:33 +0000 (17:12 -0400)]
Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge

Antonio Quartulli says:

====================
this is another set of changes intended for net-next/linux-3.13.
(probably our last pull request for this cycle)

Patches 1 and 2 reshape two of our main data structures in a way that they can
easily be extended in the future to accommodate new routing protocols.

Patches from 3 to 9 improve our routing protocol API and its users so that all
the protocol-related code is not mixed up with the other components anymore.

Patch 10 limits the local Translation Table maximum size to a value such that
it can be fully transferred over the air if needed. This value depends on
whether fragmentation is enabled and on the mtu values.

Patch 11 makes batman-adv send a uevent in case of soft-interface destruction
while a "bat-Gateway" was configured (this informs userspace about the GW not
being available anymore).

Patches 13 and 14 enable the TT component to detect non-mesh client flag
changes at runtime (till now those flags were set upon client detection and
were not changed afterwards).

Patch 16 is a generalisation of our user-to-kernel space communication (and
vice versa) used to exchange ICMP packets sent/received to/from the mesh
network. Now it can easily accommodate new ICMP packet types without breaking
the existing userspace API anymore.

Remaining patches are minor changes and cleanups.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch 'frag_hash_secret'
David S. Miller [Wed, 23 Oct 2013 21:01:51 +0000 (17:01 -0400)]
Merge branch 'frag_hash_secret'

Hannes Frederic Sowa says:

====================
initialize fragment hash secrets with net_get_random_once

This series switches the inet_frag.rnd hash initialization to
net_get_random_once.

Included patches:
 ipv4: initialize ip4_frags hash secret as late
 ipv6: split inet6_hash_frag for netfilter and
 inet: remove old fragmentation hash initializing
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoinet: remove old fragmentation hash initializing
Hannes Frederic Sowa [Wed, 23 Oct 2013 09:06:57 +0000 (11:06 +0200)]
inet: remove old fragmentation hash initializing

All fragmentation hash secrets now get initialized by their
corresponding hash function with net_get_random_once. Thus we can
eliminate the initial seeding.

Also provide a comment that hash secret seeding happens at the first
call to the corresponding hashing function.

Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoipv6: split inet6_hash_frag for netfilter and initialize secrets with net_get_random_once
Hannes Frederic Sowa [Wed, 23 Oct 2013 09:06:56 +0000 (11:06 +0200)]
ipv6: split inet6_hash_frag for netfilter and initialize secrets with net_get_random_once

Defer the fragmentation hash secret initialization for IPv6 like the
previous patch did for IPv4.

Because the netfilter logic reuses the hash secret we have to split it
first. Thus introduce a new nf_hash_frag function which takes care to
seed the hash secret.

Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoipv4: initialize ip4_frags hash secret as late as possible
Hannes Frederic Sowa [Wed, 23 Oct 2013 09:06:55 +0000 (11:06 +0200)]
ipv4: initialize ip4_frags hash secret as late as possible

Defer the generation of the first hash secret for the ipv4 fragmentation
cache as late as possible.

ip4_frags.rnd gets initially seeded by inet_frags_init and regularly
reseeded by inet_frag_secret_rebuild. Either we call ipqhashfn directly
from ip_fragment.c, in which case we initialize the secret there.

Or we first get called by inet_frag_secret_rebuild, in which case we install
a new secret via a manual call to get_random_bytes. This secret will be
overwritten as soon as the first call to ipqhashfn happens. This is safe
because we won't race with anyone else while publishing the new secrets.
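The resulting pattern, roughly (argument handling abridged):

    static unsigned int ipqhashfn(__be16 id, __be32 saddr, __be32 daddr, u8 prot)
    {
            /* seeds ip4_frags.rnd on the very first call; afterwards the
             * static key turns this into a nop */
            net_get_random_once(&ip4_frags.rnd, sizeof(ip4_frags.rnd));
            return jhash_3words((__force u32)id << 16 | prot,
                                (__force u32)saddr, (__force u32)daddr,
                                ip4_frags.rnd);
    }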

Cc: Eric Dumazet <edumazet@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge branch 'pci_set_drvdata'
David S. Miller [Wed, 23 Oct 2013 20:58:52 +0000 (16:58 -0400)]
Merge branch 'pci_set_drvdata'

Jingoo Han says:

====================
ethernet: remove unnecessary pci_set_drvdata() part 4

Since commit 0998d0631001288a5974afc0b2a5f568bcdecb4d
(device-core: Ensure drvdata = NULL when no driver is bound),
the driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.
====================
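The per-driver change is mechanical; a sketch with a made-up driver name:

    static void example_remove_one(struct pci_dev *pdev)
    {
            struct net_device *dev = pci_get_drvdata(pdev);

            unregister_netdev(dev);
            free_netdev(dev);
            /* pci_set_drvdata(pdev, NULL) dropped: the driver core already
             * clears drvdata after ->remove() and on probe failure */
    }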

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: via-rhine: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:09:56 +0000 (16:09 +0900)]
net: via-rhine: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: tc35815: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:09:32 +0000 (16:09 +0900)]
net: tc35815: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: spider_net: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:09:03 +0000 (16:09 +0900)]
net: spider_net: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: tlan: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:08:32 +0000 (16:08 +0900)]
net: tlan: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: tehuti: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:07:51 +0000 (16:07 +0900)]
net: tehuti: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: niu: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:07:19 +0000 (16:07 +0900)]
net: niu: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: sungem: remove unnecessary pci_set_drvdata()
Jingoo Han [Wed, 23 Oct 2013 07:06:54 +0000 (16:06 +0900)]
net: sungem: remove unnecessary pci_set_drvdata()

The driver core clears the driver data to NULL after device_release
or on probe failure. Thus, it is not needed to manually clear the
device driver data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agosh_eth: add/use RMCR.RNC bit
Sergei Shtylyov [Tue, 15 Oct 2013 22:29:58 +0000 (02:29 +0400)]
sh_eth: add/use RMCR.RNC bit

Declare 'enum RMCR_BIT' containing the single member for the RMCR.RNC bit and
replace the bare numbers in the driver with this mnemonic.
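A sketch of the change (the bit value is assumed from the description,
not checked against the hardware manual):

    enum RMCR_BIT {
            RMCR_RNC = 0x00000001,  /* receive in normal (continuous) mode */
    };

    /* callers then use the mnemonic instead of a bare number, e.g.
     *         sh_eth_write(ndev, RMCR_RNC, RMCR);
     * rather than
     *         sh_eth_write(ndev, 0x1, RMCR);
     */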

Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
David S. Miller [Wed, 23 Oct 2013 20:28:39 +0000 (16:28 -0400)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/usb/qmi_wwan.c
include/net/dst.h

Trivial merge conflicts, both were overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: always inline net_secret_init
Hannes Frederic Sowa [Wed, 23 Oct 2013 06:44:50 +0000 (08:44 +0200)]
net: always inline net_secret_init

Currently net_secret_init does not get inlined, so we always have a call
to net_secret_init even in the fast path.

Let's specify net_secret_init as __always_inline so that the fast path
contains only the nop, with no call to net_secret_init, and the unlikely
path sits at the epilogue of the function.

jump_labels handle the inlining correctly.
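Roughly, the helper becomes (a sketch following the description above):

    static __always_inline void net_secret_init(void)
    {
            /* net_get_random_once() is built on a static key, so once the
             * secret is set the inlined fast path is just a patched nop */
            net_get_random_once(net_secret, sizeof(net_secret));
    }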

Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agonet: Dereference pointer-value of sk_prot->memory_pressure
Christoph Paasch [Wed, 23 Oct 2013 19:49:21 +0000 (12:49 -0700)]
net: Dereference pointer-value of sk_prot->memory_pressure

2e685cad57 (tcp_memcontrol: Kill struct tcp_memcontrol) falsely modified
the access to memory_pressure through sk->sk_prot->memory_pressure. The
patch did modify the memory_pressure field of struct cg_proto, but not
the one of struct proto.

So, the access to sk_prot->memory_pressure should not be changed.
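In other words (field layout sketched from the description, not quoted
from the headers):

    struct proto {
            /* ... */
            int     *memory_pressure;       /* pointer to the protocol's flag */
    };

    /* so reaching the flag through sk_prot takes a dereference: */
    if (sk->sk_prot->memory_pressure && *sk->sk_prot->memory_pressure)
            /* protocol is under memory pressure */;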

Acked-by: Eric Dumazet <edumazet@google.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 years agobatman-adv: generalize batman-adv icmp packet handling
Simon Wunderlich [Tue, 22 Oct 2013 20:50:09 +0000 (22:50 +0200)]
batman-adv: generalize batman-adv icmp packet handling

Instead of handling icmp packets only up to the length of icmp_packet_rr,
the code should handle any icmp length. Therefore the length truncation is
moved to the point where the packet is actually sent to userspace (lengths
longer than icmp_packet_rr are still not supported there). Longer packets
are forwarded without truncation.

This patch also cleans up some parts where the icmp header struct can be
used instead of the other icmp_packet(_rr) structs to make the code more
readable.
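Conceptually the cap now lives only at the userspace boundary; an
illustrative fragment (not the actual batman-adv helpers):

    /* forwarded packets keep their full length; only the copy handed to
     * the reading socket is clamped for now */
    size_t copy_len = min_t(size_t, packet_len,
                            sizeof(struct batadv_icmp_packet_rr));

    if (copy_to_user(buf, packet, copy_len))
            return -EFAULT;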

Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
11 years agobatman-adv: Start new development cycle
Simon Wunderlich [Mon, 14 Oct 2013 16:01:01 +0000 (18:01 +0200)]
batman-adv: Start new development cycle

Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
11 years agobatman-adv: include the sync-flags when compute the global/local table CRC
Antonio Quartulli [Sun, 13 Oct 2013 00:50:20 +0000 (02:50 +0200)]
batman-adv: include the sync-flags when compute the global/local table CRC

Flags covered by TT_SYNC_MASK are kept in sync among the
nodes in the network and therefore they have to be
considered while computing the global/local table CRC.

In this way a generic originator is able to understand if
its table contains the correct flags or not.

Bits from 4 to 7 in the TT flags fields are now reserved for
"synchronized" flags only.

This allows future developers to add more flags of this type
without breaking compatibility.

It's important to note that not all the remote TT flags are
synchronised. This comes from the fact that some flags are used to
inject a piece of information only once.
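A sketch of how the reserved bits feed the CRC (the mask value is
inferred from the bit range given above, not taken from the sources):

    #define TT_SYNC_MASK    0x00F0  /* bits 4..7: flags kept in sync network-wide */

    /* only synchronised flags enter the table CRC */
    u16 crc_flags = tt_common->flags & TT_SYNC_MASK;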

Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
11 years agobatman-adv: improve the TT component to support runtime flag changes
Antonio Quartulli [Sun, 13 Oct 2013 00:50:19 +0000 (02:50 +0200)]
batman-adv: improve the TT component to support runtime flag changes

Some flags (e.g. the WIFI flag) may change after the related client
has already been announced. However, it is useful to inform the rest
of the network about this change.

Add a runtime-flag-switch detection mechanism and
re-announce the related TT entry to advertise the new flag
value.

This mechanism can be easily exploited by future flags that
may need the same treatment.

Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
11 years agobatman-adv: invoke dev_get_by_index() outside of is_wifi_iface()
Antonio Quartulli [Sun, 13 Oct 2013 00:50:18 +0000 (02:50 +0200)]
batman-adv: invoke dev_get_by_index() outside of is_wifi_iface()

Upcoming changes need to perform other checks on the
incoming net_device struct.

To avoid performing dev_get_by_index() for each and every check, it is
better to move it outside of is_wifi_iface() and look up the netdev
object only once.

Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
11 years agobatman-adv: send GW_DEL event in case of soft-iface destruction
Antonio Quartulli [Mon, 19 Aug 2013 16:39:59 +0000 (18:39 +0200)]
batman-adv: send GW_DEL event in case of soft-iface destruction

In case of soft_iface destruction, send a GW DEL event to userspace so
that applications which are listening for GW events are informed about
the loss of connectivity and can react accordingly.

Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
11 years agobatman-adv: limit local translation table max size
Marek Lindner [Mon, 27 May 2013 07:33:25 +0000 (15:33 +0800)]
batman-adv: limit local translation table max size

The local translation table size is limited by what can be
transferred from one node to another via a full table request.

The number of entries fitting into a full table request depends on
whether fragmentation is enabled or not. Therefore this patch introduces
a max table size check and refuses to add more local clients when that
size is reached. Moreover, if the max full table packet size changes
(the MTU changes or fragmentation is disabled), the local table is
downsized immediately.
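As a rough illustration of the sizing rule (every name and constant here
is a placeholder, not a batman-adv symbol):

    /* hypothetical: how many local clients fit in one full-table reply */
    static u16 tt_local_table_limit(unsigned int mtu, bool fragmentation)
    {
            unsigned int max_packet = fragmentation ? FRAG_BUFFER_SIZE : mtu;
            unsigned int payload = max_packet - sizeof(struct tt_query_hdr);

            return payload / sizeof(struct tt_change_entry);
    }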

Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Acked-by: Antonio Quartulli <ordex@autistici.org>