GitHub/LineageOS/android_kernel_motorola_exynos9610.git
Eric Dumazet [Wed, 18 Mar 2015 21:05:34 +0000 (14:05 -0700)]
net: introduce sk_ehashfn() helper

Goal is to unify IPv4/IPv6 inet_hash handling, and use common helpers
for all kinds of sockets (full sockets, timewait and request sockets).

inet_sk_ehashfn() becomes sk_ehashfn(), but still only copes with IPv4.
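As a rough illustration, such a helper is a thin socket-based wrapper around the
existing IPv4 hash computation (a hedged sketch; the exact code in the tree may differ):

static u32 sk_ehashfn(const struct sock *sk)
{
    /* sketch only: assumes the usual inet_sock field names and the
     * existing inet_ehashfn() signature */
    const struct inet_sock *inet = inet_sk(sk);

    return inet_ehashfn(sock_net(sk),
                        inet->inet_rcv_saddr, inet->inet_num,
                        inet->inet_daddr, inet->inet_dport);
}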

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 21:05:33 +0000 (14:05 -0700)]
netns: constify net_hash_mix() and various callers

const qualifiers ease code review by making it clear
which objects a function does not modify.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 18 Mar 2015 18:55:23 +0000 (14:55 -0400)]
Merge branch 'txq_max_rate'

Or Gerlitz says:

====================
Add max rate TXQ attribute

Add the ability to set a max-rate limitation for TX queues.
The attribute name is maxrate and the units are Mbps, to make
it similar to the existing max-rate limitation knobs (the ETS and
SRIOV ndo calls).

changes from V2:
  - added Documentation (thanks Florian and Tom)
  - rebased to latest net-next to comply with the swdev ndo removal
  - addressed more feedback from Dave on the comments style

changes from V1:
  - addressed feedback from Dave

changes from V0:
  - addressed feedback from Sergei

John Fastabend (1):
  net: Add max rate tx queue attribute
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Wed, 18 Mar 2015 12:57:35 +0000 (14:57 +0200)]
net/mlx4_en: Add tx queue maxrate support

Add ndo_set_tx_maxrate support.

To support a per-TX-queue maxrate limit, we use the update-qp firmware
command to set the rate at run time for the QP that serves this tx ring.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Ido Shamay <idos@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Wed, 18 Mar 2015 12:57:34 +0000 (14:57 +0200)]
net/mlx4_core: Add basic support for QP max-rate limiting

Add the low-level device commands and definitions used for QP max-rate limiting.

This is done through the following elements:

  - read rate-limit device caps in QUERY_DEV_CAP: the number of different
    rates and the min/max rates in Kbps/Mbps/Gbps units

  - enhance the QP context struct to contain rate limit units and value

  - allow run-time rate-limit setting on QPs through the
    update-qp firmware command

  - QP rate-limiting is disallowed for VFs

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Wed, 18 Mar 2015 12:57:33 +0000 (14:57 +0200)]
net: Add max rate tx queue attribute

This adds a tx_maxrate attribute to the tx queue sysfs entry allowing
for max-rate limiting. Along with DCB-ETS and BQL this provides another
knob to tune queue performance. The limit units are Mbps.

It is disabled by default. To disable the rate limitation after it
has been set for a queue, set it back to zero.
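For driver authors, the hook behind this attribute is an ndo callback; a hedged sketch of
its expected shape (the foo_* names are hypothetical placeholders for device-specific code):

static int foo_ndo_set_tx_maxrate(struct net_device *dev, int queue_index,
                                  u32 maxrate)
{
    /* maxrate is in Mbps; 0 means "no limit" */
    struct foo_priv *priv = netdev_priv(dev);

    return foo_set_queue_rate(priv, queue_index, maxrate);
}

From user space the value is written through the per-queue sysfs file, e.g.
/sys/class/net/<dev>/queues/tx-<N>/tx_maxrate.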

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 18 Mar 2015 16:46:48 +0000 (12:46 -0400)]
Merge branch 'rhashtable_remove_shift'

Herbert Xu says:

====================
rhashtable: Kill redundant shift parameter

I was trying to squeeze bucket_table->rehash in by downsizing
bucket_table->size, only to find that my spot had been taken
over by bucket_table->shift.  These patches kill shift and make
me feel better :)

v2 corrects the typo in the test_rhashtable changelog and also
notes the min_shift parameter in the tipc patch changelog.
====================

Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:21 +0000 (20:01 +1100)]
rhashtable: Remove max_shift and min_shift

Now that nobody uses max_shift and min_shift, we can safely remove
them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:19 +0000 (20:01 +1100)]
test_rhashtable: Use rhashtable max_size instead of max_shift

This patch converts test_rhashtable to use rhashtable max_size
instead of the obsolete max_shift.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:18 +0000 (20:01 +1100)]
tipc: Use rhashtable max/min_size instead of max/min_shift

This patch converts tipc to use rhashtable max/min_size instead of
the obsolete max/min_shift.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:17 +0000 (20:01 +1100)]
netlink: Use rhashtable max_size instead of max_shift

This patch converts netlink to use rhashtable max_size instead
of the obsolete max_shift.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:16 +0000 (20:01 +1100)]
rhashtable: Introduce max_size/min_size

This patch adds the parameters max_size and min_size which are
meant to replace max_shift and min_shift.
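For API users the conversion is mechanical; a hedged sketch (struct test_obj and the
surrounding field values are examples only):

static const struct rhashtable_params test_params = {
    .head_offset = offsetof(struct test_obj, node),
    .key_offset  = offsetof(struct test_obj, value),
    .key_len     = sizeof(int),
    .max_size    = 65536,   /* previously: .max_shift = 16 */
    .min_size    = 4,       /* previously: .min_shift = 2  */
};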

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Wed, 18 Mar 2015 09:01:15 +0000 (20:01 +1100)]
rhashtable: Remove shift from bucket_table

Keeping both size and shift is silly.  We only need one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 18 Mar 2015 16:44:13 +0000 (12:44 -0400)]
Merge branch 'xgene-next'

Keyur Chudgar says:

====================
drivers: net: xgene: Add second SGMII based 1G interface

This patch adds support for second SGMII based 1G interface.
====================

Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Keyur Chudgar [Tue, 17 Mar 2015 18:27:13 +0000 (11:27 -0700)]
drivers: net: xgene: Add second SGMII based 1G interface

- Added resource initialization based on port-id field
- Enabled second SGMII 1G interface

Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Keyur Chudgar [Tue, 17 Mar 2015 18:27:12 +0000 (11:27 -0700)]
dtb: xgene: Add second SGMII based 1G interface node

- Added new SGMII node for port 1
- Added port-id field

Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Keyur Chudgar [Tue, 17 Mar 2015 18:27:11 +0000 (11:27 -0700)]
Documentation: dtb: Add port-id field for APM X-Gene ethernet

Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 18 Mar 2015 02:12:10 +0000 (22:12 -0400)]
Merge branch 'tipc_netns_leak'

Ying Xue says:

====================
tipc: fix netns refcnt leak

The series aims to eliminate a netns refcount leak. While fixing it,
two additional problems were found, so all known issues associated
with the netns refcnt leak are resolved at the same time in this
patchset.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Wed, 18 Mar 2015 01:32:59 +0000 (09:32 +0800)]
tipc: withdraw tipc topology server name when namespace is deleted

The TIPC topology server is a per-namespace service associated with the
tipc name {1, 1}. When a namespace is deleted, that name must be withdrawn
before we call sk_release_kernel(), because the kernel socket release is
done in init_net, and trying to withdraw a TIPC name published in another
namespace fails with an error such as:

[  170.093264] Unable to remove local publication
[  170.093264] (type=1, lower=1, ref=2184244004, key=2184244005)

We fix this by breaking the association between the topology server name
and socket before calling sk_release_kernel.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Wed, 18 Mar 2015 01:32:58 +0000 (09:32 +0800)]
tipc: fix a potential deadlock when nametable is purged

[   28.531768] =============================================
[   28.532322] [ INFO: possible recursive locking detected ]
[   28.532322] 3.19.0+ #194 Not tainted
[   28.532322] ---------------------------------------------
[   28.532322] insmod/583 is trying to acquire lock:
[   28.532322]  (&(&nseq->lock)->rlock){+.....}, at: [<ffffffffa000d219>] tipc_nametbl_remove_publ+0x49/0x2e0 [tipc]
[   28.532322]
[   28.532322] but task is already holding lock:
[   28.532322]  (&(&nseq->lock)->rlock){+.....}, at: [<ffffffffa000e0dc>] tipc_nametbl_stop+0xfc/0x1f0 [tipc]
[   28.532322]
[   28.532322] other info that might help us debug this:
[   28.532322]  Possible unsafe locking scenario:
[   28.532322]
[   28.532322]        CPU0
[   28.532322]        ----
[   28.532322]   lock(&(&nseq->lock)->rlock);
[   28.532322]   lock(&(&nseq->lock)->rlock);
[   28.532322]
[   28.532322]  *** DEADLOCK ***
[   28.532322]
[   28.532322]  May be due to missing lock nesting notation
[   28.532322]
[   28.532322] 3 locks held by insmod/583:
[   28.532322]  #0:  (net_mutex){+.+.+.}, at: [<ffffffff8163e30f>] register_pernet_subsys+0x1f/0x50
[   28.532322]  #1:  (&(&tn->nametbl_lock)->rlock){+.....}, at: [<ffffffffa000e091>] tipc_nametbl_stop+0xb1/0x1f0 [tipc]
[   28.532322]  #2:  (&(&nseq->lock)->rlock){+.....}, at: [<ffffffffa000e0dc>] tipc_nametbl_stop+0xfc/0x1f0 [tipc]
[   28.532322]
[   28.532322] stack backtrace:
[   28.532322] CPU: 1 PID: 583 Comm: insmod Not tainted 3.19.0+ #194
[   28.532322] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
[   28.532322]  ffffffff82394460 ffff8800144cb928 ffffffff81792f3e 0000000000000007
[   28.532322]  ffffffff82394460 ffff8800144cba28 ffffffff810a8080 ffff8800144cb998
[   28.532322]  ffffffff810a4df3 ffff880013e9cb10 ffffffff82b0d330 ffff880013e9cb38
[   28.532322] Call Trace:
[   28.532322]  [<ffffffff81792f3e>] dump_stack+0x4c/0x65
[   28.532322]  [<ffffffff810a8080>] __lock_acquire+0x740/0x1ca0
[   28.532322]  [<ffffffff810a4df3>] ? __bfs+0x23/0x270
[   28.532322]  [<ffffffff810a7506>] ? check_irq_usage+0x96/0xe0
[   28.532322]  [<ffffffff810a8a73>] ? __lock_acquire+0x1133/0x1ca0
[   28.532322]  [<ffffffffa000d219>] ? tipc_nametbl_remove_publ+0x49/0x2e0 [tipc]
[   28.532322]  [<ffffffff810a9c0c>] lock_acquire+0x9c/0x140
[   28.532322]  [<ffffffffa000d219>] ? tipc_nametbl_remove_publ+0x49/0x2e0 [tipc]
[   28.532322]  [<ffffffff8179c41f>] _raw_spin_lock_bh+0x3f/0x50
[   28.532322]  [<ffffffffa000d219>] ? tipc_nametbl_remove_publ+0x49/0x2e0 [tipc]
[   28.532322]  [<ffffffffa000d219>] tipc_nametbl_remove_publ+0x49/0x2e0 [tipc]
[   28.532322]  [<ffffffffa000e11e>] tipc_nametbl_stop+0x13e/0x1f0 [tipc]
[   28.532322]  [<ffffffffa000dfe5>] ? tipc_nametbl_stop+0x5/0x1f0 [tipc]
[   28.532322]  [<ffffffffa0004bab>] tipc_init_net+0x13b/0x150 [tipc]
[   28.532322]  [<ffffffffa0004a75>] ? tipc_init_net+0x5/0x150 [tipc]
[   28.532322]  [<ffffffff8163dece>] ops_init+0x4e/0x150
[   28.532322]  [<ffffffff810aa66d>] ? trace_hardirqs_on+0xd/0x10
[   28.532322]  [<ffffffff8163e1d3>] register_pernet_operations+0xf3/0x190
[   28.532322]  [<ffffffff8163e31e>] register_pernet_subsys+0x2e/0x50
[   28.532322]  [<ffffffffa002406a>] tipc_init+0x6a/0x1000 [tipc]
[   28.532322]  [<ffffffffa0024000>] ? 0xffffffffa0024000
[   28.532322]  [<ffffffff810002d9>] do_one_initcall+0x89/0x1c0
[   28.532322]  [<ffffffff811b7cb0>] ? kmem_cache_alloc_trace+0x50/0x1b0
[   28.532322]  [<ffffffff810e725b>] ? do_init_module+0x2b/0x200
[   28.532322]  [<ffffffff810e7294>] do_init_module+0x64/0x200
[   28.532322]  [<ffffffff810e9353>] load_module+0x12f3/0x18e0
[   28.532322]  [<ffffffff810e5890>] ? show_initstate+0x50/0x50
[   28.532322]  [<ffffffff810e9a19>] SyS_init_module+0xd9/0x110
[   28.532322]  [<ffffffff8179f3b3>] sysenter_dispatch+0x7/0x1f

Before tipc_purge_publications() calls tipc_nametbl_remove_publ() to
remove a publication with a name sequence, the name sequence's lock
is held. However, when tipc_nametbl_remove_publ() calls
tipc_nameseq_remove_publ() to remove the publication, it first looks
up the name sequence instance for the publication and then takes the
lock of the found name sequence. As that lock may already be held in
tipc_purge_publications(), a deadlock occurs, as demonstrated in the
scenario above. Since tipc_nameseq_remove_publ() does not grab the name
sequence's lock itself, the deadlock can be avoided if it is invoked
directly by tipc_purge_publications().

Fixes: 97ede29e80ee ("tipc: convert name table read-write lock to RCU")
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Wed, 18 Mar 2015 01:32:57 +0000 (09:32 +0800)]
tipc: fix netns refcnt leak

When the TIPC module is loaded, we launch a topology server in kernel
space, which in its turn is creating TIPC sockets for communication
with topology server users. Because both the socket's creator and
provider reside in the same module, it is necessary that the TIPC
module's reference count remains zero after the server is started and
the socket created; otherwise it becomes impossible to perform "rmmod"
even on an idle module.

Currently, we achieve this by defining a separate "tipc_proto_kern"
protocol struct, that is used only for kernel space socket allocations.
This structure has the "owner" field set to NULL, which prevents the
module reference count from being bumped when sk_alloc() for local
sockets is called. Furthermore, we have defined three kernel-specific
functions, tipc_sock_create_local(), tipc_sock_release_local() and
tipc_sock_accept_local(), to avoid the module counter being modified
when module local sockets are created or deleted. This has worked well
until we introduced name space support.

However, after name space support was introduced, we have observed that
a reference count leak occurs, because the netns counter is not
decremented in tipc_sock_delete_local().

This commit remedies this problem. But instead of just modifying
tipc_sock_delete_local(), we eliminate the whole parallel socket
handling infrastructure, and start using the regular sock_create_kern(),
kernel_accept() and sk_release_kernel() calls. Since those functions
manipulate the module counter, we must now compensate for that by
explicitly decrementing the counter after module local sockets are
created, and incrementing it just before calling sk_release_kernel().
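A hedged illustration of that compensation (error handling omitted; THIS_MODULE stands
in for the TIPC module, which owns both the socket creator and provider):

/* after creating a module-local kernel socket */
ret = sock_create_kern(AF_TIPC, SOCK_SEQPACKET, 0, &sock);
if (!ret)
    module_put(THIS_MODULE);   /* undo the count taken by sk_alloc() */

/* ... and just before releasing it again */
__module_get(THIS_MODULE);     /* balance the drop done by sk_release_kernel() */
sk_release_kernel(sock->sk);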

Fixes: a62fbccecd62 ("tipc: make subscriber server support net namespace")
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericson.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reported-by: Cong Wang <cwang@twopensource.com>
Tested-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 18 Mar 2015 02:02:53 +0000 (22:02 -0400)]
Merge branch 'listener_refactor_part_12'

Eric Dumazet says:

====================
inet: tcp listener refactoring, part 12

By adding a pointer back to listener, we are preparing synack rtx
handling to no longer be governed by listener keepalive timer,
as this is the most problematic source of contention on listener
spinlock. Note that TCP FastOpen had such pointer anyway, so we
make it generic.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:31 +0000 (18:32 -0700)]
inet: fix request sock refcounting

While testing last patch series, I found req sock refcounting was wrong.

We must set skc_refcnt to 1 for all request socks added in hashes,
but also on request sockets created by FastOpen or syncookies.

It is tricky because we need to defer this initialization so that
future RCU lookups do not try to take a refcount on a not yet
fully initialized request socket.

Also get rid of ireq_refcnt alias.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 13854e5a6046 ("inet: add proper refcounting to request sock")
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:30 +0000 (18:32 -0700)]
inet: avoid fastopen lock for regular accept()

A TCP listener being FastOpen-ready does not mean that all
incoming sockets actually used FastOpen.

Avoid taking queue->fastopenq->lock if not needed.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:29 +0000 (18:32 -0700)]
tcp: rename struct tcp_request_sock listener

The listener field in struct tcp_request_sock is a pointer
back to the listener. We now have req->rsk_listener, so TCP
only needs one boolean and not a full pointer.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:28 +0000 (18:32 -0700)]
inet: add rsk_listener field to struct request_sock

Once we are able to look up request sockets in the ehash table,
we will need access to the listener which created this request.

This avoids doing a lookup to find the listener, which benefits
a more solid SO_REUSEPORT, and is needed once we no
longer queue request socks into a listener private queue.

Note that 'struct tcp_request_sock'->listener could be reduced
to a single bit, as TFO listener should match req->rsk_listener.
TFO will no longer need to hold a reference on the listener.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:27 +0000 (18:32 -0700)]
inet: uninline inet_reqsk_alloc()

inet_reqsk_alloc() is becoming fat and should not be inlined.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 18 Mar 2015 01:32:26 +0000 (18:32 -0700)]
inet: add sk_listener argument to inet_reqsk_alloc()

The listener socket can be used to set the net pointer, and will
later be used to hold a reference on the listener.

Add a const qualifier to first argument (struct request_sock_ops *),
and factorize all write_pnet(&ireq->ireq_net, sock_net(sk));
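A hedged sketch of the reworked allocator (other request-sock initialisation elided):

struct request_sock *inet_reqsk_alloc(const struct request_sock_ops *ops,
                                      struct sock *sk_listener)
{
    struct request_sock *req = reqsk_alloc(ops);

    if (req)
        write_pnet(&inet_rsk(req)->ireq_net, sock_net(sk_listener));

    return req;
}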

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 17 Mar 2015 19:18:12 +0000 (15:18 -0400)]
Merge branch 'listener_refactor_part_11'

Eric Dumazet says:

====================
inet: tcp listener refactoring, part 11

Before inserting request sockets into the general (ehash) table,
we need to prepare netfilter to cope with them, as they are
not full sockets.

I'll later change xt_socket to get full support, including for
request sockets (NEW_SYN_RECV).

Save 8 bytes in inet_request_sock on 64bit arches. We'll soon add
a pointer to the listener socket.

I included two TCP changes in this patch series.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:20 +0000 (21:06 -0700)]
tcp: uninline tcp_oow_rate_limited()

tcp_oow_rate_limited() is hardly used in the fast path; there is
no point inlining it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:19 +0000 (21:06 -0700)]
tcp: move tcp_openreq_init() to tcp_input.c

This big helper is called once from tcp_conn_request(); there is no
point having it in an include. The compiler will inline it anyway.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:18 +0000 (21:06 -0700)]
inet: move ir_mark to fill a hole

On 64bit arches, we can save 8 bytes in inet_request_sock
by moving ir_mark to fill a hole.

While we are at it, inet_request_mark() can get a const qualifier
for listener socket.
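The helper in question looks roughly like this (sketch; the sysctl field name is assumed
from the fwmark-accept feature):

static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb)
{
    if (!sk->sk_mark && sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)
        return skb->mark;

    return sk->sk_mark;
}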

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:17 +0000 (21:06 -0700)]
netfilter: xt_socket: prepare for TCP_NEW_SYN_RECV support

TCP request socks will soon be visible in the ehash table.

xt_socket will be able to match them, but first we need
to make sure to not consider them as full sockets.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:16 +0000 (21:06 -0700)]
netfilter: tproxy: prepare TCP_NEW_SYN_RECV support

TCP request socks will soon be visible in the ehash table.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 17 Mar 2015 04:06:15 +0000 (21:06 -0700)]
netfilter: use sk_fullsock() helper

Upcoming request sockets have TCP_NEW_SYN_RECV state and should
be special cased a bit like TCP_TIME_WAIT sockets.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Tue, 17 Mar 2015 01:06:02 +0000 (18:06 -0700)]
bpf: allow BPF programs access 'protocol' and 'vlan_tci' fields

As a follow-on to commit 70006af95515 ("bpf: allow eBPF access skb fields"),
this patch allows the 'protocol' and 'vlan_tci' fields to be accessible
from extended BPF programs.

The usage of 'protocol', 'vlan_present' and 'vlan_tci' fields is the same as
corresponding SKF_AD_PROTOCOL, SKF_AD_VLAN_TAG_PRESENT and SKF_AD_VLAN_TAG
accesses in classic BPF.

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 17 Mar 2015 19:00:28 +0000 (15:00 -0400)]
Merge branch 'const_of_device_id'

Fabian Frederick says:

====================
drivers/net: constify of_device_id array

This small patchset adds const to of_device_id arrays in
drivers/net branch.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:28 +0000 (19:40 +0100)]
via-velocity: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)
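In practice each of these patches is a one-word change; a hedged example with made-up
entries:

static const struct of_device_id foo_of_match[] = {
    { .compatible = "vendor,foo-eth" },   /* illustrative compatible string */
    { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, foo_of_match);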

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:27 +0000 (19:40 +0100)]
net: via-rhine: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:26 +0000 (19:40 +0100)]
ehea: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:25 +0000 (19:40 +0100)]
IBM-EMAC: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:24 +0000 (19:40 +0100)]
can: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:23 +0000 (19:40 +0100)]
net: phy: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:40:22 +0000 (19:40 +0100)]
orinoco: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:40 +0000 (19:37 +0100)]
net: xilinx: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:39 +0000 (19:37 +0100)]
net: greth: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:38 +0000 (19:37 +0100)]
netdev: octeon_mgmt: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:37 +0000 (19:37 +0100)]
net: ethernet: apple: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:36 +0000 (19:37 +0100)]
drivers: net: xgene: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:35 +0000 (19:37 +0100)]
net: ethoc: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:34 +0000 (19:37 +0100)]
net/fsl: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:33 +0000 (19:37 +0100)]
Altera TSE: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fabian Frederick [Tue, 17 Mar 2015 18:37:32 +0000 (19:37 +0100)]
net: netcp: constify of_device_id array

of_device_id is always used as const.
(See driver.of_match_table and open firmware functions)

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Graf [Mon, 16 Mar 2015 09:42:26 +0000 (10:42 +0100)]
rhashtable: Avoid calculating hash again to unlock

Caching the lock pointer avoids having to hash on the object
again to unlock the bucket locks.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 14:14:34 +0000 (07:14 -0700)]
tcp_metrics: fix wrong lockdep annotations

Changes in the tcp_metric hash table are protected by tcp_metrics_lock
only, not by genl_mutex.

While we are at it, use deref_locked() instead of rcu_dereference()
in tcp_new() to avoid an unnecessary barrier, as we hold tcp_metrics_lock
as well.
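A hedged sketch of the locked dereference used on the write side:

/* only valid while tcp_metrics_lock is held */
#define deref_locked(p) \
    rcu_dereference_protected(p, lockdep_is_held(&tcp_metrics_lock))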

Reported-by: Andrew Vagin <avagin@parallels.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 098a697b497e ("tcp_metrics: Use a single hash table for all network namespaces.")
Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Mon, 16 Mar 2015 11:33:32 +0000 (12:33 +0100)]
dsa: change "select" to "depends on" for NET_SWITCHDEV and for NET_DSA

This fixes a randconfig compile error:
net/built-in.o: In function `netdev_switch_fib_ipv4_abort':
(.text+0xf7811): undefined reference to `fib_flush_external'

It also fixes the following warnings:
warning: (NET_DSA) selects NET_SWITCHDEV which has unmet direct dependencies (NET && INET)

warning: (NET_DSA_MV88E6060 && NET_DSA_MV88E6131 && NET_DSA_MV88E6123_61_65 && NET_DSA_MV88E6171 && NET_DSA_MV88E6352 && NET_DSA_BCM_SF2) selects NET_DSA which has unmet direct dependencies (NET && HAVE_NET_DSA && NET_SWITCHDEV)

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shaohui Xie [Mon, 16 Mar 2015 10:56:29 +0000 (18:56 +0800)]
net/fsl: modify xgmac_mdio for little endian SoCs

The MDIO controller on little-endian SoCs, e.g. ls2085a, is similar to the
controller on big-endian SoCs, but the MDIO access is little endian.
We use I/O accessor functions to handle endianness, so the driver can
run on little-endian SoCs. A property "little-endian" is used
in the DTS to indicate that the MDIO is little endian; if the driver finds the
property, it accesses the MDIO in little endian, otherwise the driver
works in big endian by default.
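A hedged sketch of the accessor approach (function and flag names are illustrative; the
property would typically be read with of_property_read_bool(np, "little-endian")):

static u32 xgmac_read32(void __iomem *reg, bool is_little_endian)
{
    if (is_little_endian)
        return ioread32(reg);
    else
        return ioread32be(reg);
}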

Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shaohui Xie [Mon, 16 Mar 2015 10:55:50 +0000 (18:55 +0800)]
net/fsl: fix a bug in xgmac_mdio

There is a bug in xgmac_wait_until_done(): mdio_stat should be used
instead of mdio_data when checking whether the busy bit is cleared.
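A hedged sketch of the corrected wait loop (register, bit and timeout names only
approximate the driver's):

static int xgmac_wait_until_done(struct device *dev,
                                 struct tgec_mdio_controller __iomem *regs)
{
    unsigned int timeout = TIMEOUT;

    /* poll the status register, not the data register, for the BSY bit */
    while ((ioread32be(&regs->mdio_stat) & MDIO_STAT_BSY) && timeout--)
        cpu_relax();

    if (!timeout) {
        dev_err(dev, "timed out waiting for MDIO operation\n");
        return -ETIMEDOUT;
    }

    return 0;
}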

Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Mon, 16 Mar 2015 10:19:12 +0000 (18:19 +0800)]
net: kernel socket should be released in init_net namespace

Creating a kernel socket with sock_create_kern() happens in the "init_net"
namespace; however, releasing it with sk_release_kernel() occurs in
the current namespace, which may differ from the "init_net" namespace.
Therefore, we should guarantee that the namespace in which a kernel
socket is created is the same as the one in which it is released.

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Graf [Mon, 16 Mar 2015 09:42:27 +0000 (10:42 +0100)]
rhashtable: Annotate RCU locking of walkers

Fixes the following sparse warnings:

lib/rhashtable.c:767:5: warning: context imbalance in 'rhashtable_walk_start' - wrong count at exit
lib/rhashtable.c:849:6: warning: context imbalance in 'rhashtable_walk_stop' - unexpected unlock

Fixes: f2dba9c6ff0d ("rhashtable: Introduce rhashtable_walk_*")
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Mon, 16 Mar 2015 06:04:46 +0000 (23:04 -0700)]
rocker: replace fixed stack allocation with dynamic allocation

In haste to fix some sparse warnings, I hard-coded a fixed-size array on the stack
which is probably too big by kernel standards.  Fix this by converting the array
to dynamic allocation.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Mar 2015 19:55:47 +0000 (15:55 -0400)]
Merge branch 'listener_refactor'

Eric Dumazet says:

====================
inet: tcp listener refactoring, part 10

We are getting close to the point where request sockets will be hashed
into generic hash table. Some followups are needed for netfilter and
will be handled in next patch series.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 04:12:16 +0000 (21:12 -0700)]
inet: add proper refcounting to request sock

reqsk_put() is the generic function that should be used
to release a refcount (and automatically call reqsk_free()).

reqsk_free() might be called if the refcount is known to be 0
or undefined.

refcnt is set to one in inet_csk_reqsk_queue_add().

As request socks are not yet in the global ehash table,
I added temporary debugging checks in reqsk_put() and reqsk_free().
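A hedged sketch of how the two helpers pair up:

static inline void reqsk_put(struct request_sock *req)
{
    /* reqsk_free() itself may only be used when the refcount is known
     * to be zero (or not yet published) */
    if (atomic_dec_and_test(&req->rsk_refcnt))
        reqsk_free(req);
}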

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 04:12:15 +0000 (21:12 -0700)]
inet: factorize sock_edemux()/sock_gen_put() code

sock_edemux() is not used in fast path, and should
really call sock_gen_put() to save some code.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 04:12:14 +0000 (21:12 -0700)]
inet_diag: allow sk_diag_fill() to handle request socks

inet_diag_fill_req() is renamed to inet_req_diag_fill()
and moved up, so that it can be called from sk_diag_fill().

inet_diag_bc_sk() is ready to handle request socks.

inet_twsk_diag_dump() is no longer needed.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 04:12:13 +0000 (21:12 -0700)]
inet: ip early demux should avoid request sockets

When a request socket is created, we do not cache ip route
dst entry, like for timewait sockets.

Let's use sk_fullsock() helper.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 16 Mar 2015 04:12:12 +0000 (21:12 -0700)]
net: add sk_fullsock() helper

We have many places where we want to check if a socket is
not a timewait or request socket. Use a helper to avoid
hard coding this.
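A minimal sketch of such a helper, using the state-mask approach (placement and exact
form in sock.h may differ):

static inline bool sk_fullsock(const struct sock *sk)
{
    /* neither a time-wait nor a (new-style) request socket */
    return (1 << sk->sk_state) & ~(TCPF_TIME_WAIT | TCPF_NEW_SYN_RECV);
}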

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Mar 2015 04:14:47 +0000 (00:14 -0400)]
Merge branch 'swdev_ops'

Scott Feldman says:

====================
switchdev: add swdev ops

v3:

 - Fix missing include for DSA build

v2:

 - Per Simon's review, squash some of the dependent commits into one to
   make series git bisect safe.

v1:

Per discussions at netconf, move the switchdev ndo ops to a new swdev_ops to
keep the ndo namespace clean and maintain switchdev-related ops in one place.

There are no functional changes here; just shuffling ops around for better
organization.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Mon, 16 Mar 2015 04:07:16 +0000 (21:07 -0700)]
netdev: remove ndo ops for switchdev

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Mon, 16 Mar 2015 04:07:15 +0000 (21:07 -0700)]
switchdev: use new swdev ops

Move swdev wrappers over to new swdev ops (from previous ndo ops).  No
functional changes to the implementation.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
rocker: move to new swdev ops

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
dsa: move to new swdev ops

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Mon, 16 Mar 2015 04:07:14 +0000 (21:07 -0700)]
switchdev: add swdev ops

As discussed at netconf, introduce swdev_ops as a first step to move the switchdev
ops from ndo to swdev.  This will keep switchdev from cluttering up the ndo ops
space.

Signed-off-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Mar 2015 02:22:17 +0000 (22:22 -0400)]
Merge branch 'rhashtable-fixes-next'

Herbert Xu says:

====================
rhashtable: Fix two bugs caused by multiple rehash preparation

While testing some new patches over the weekend I discovered a
couple of bugs in the series that had just been merged.  These
two patches fix them:

1) A use-after-free in the walker that can cause crashes when
walking during a rehash.

2) When a second rehash starts during a single rhashtable_remove
call the remove may fail when it shouldn't.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sun, 15 Mar 2015 10:12:05 +0000 (21:12 +1100)]
rhashtable: Fix rhashtable_remove failures

The commit 9d901bc05153bbf33b5da2cd6266865e531f0545 ("rhashtable:
Free bucket tables asynchronously after rehash") causes gratuitous
failures in rhashtable_remove.

The reason is that it inadvertently introduced multiple rehashing
from the perspective of readers.  IOW it is now possible to see
more than two tables during a single RCU critical section.

Fortunately the other reader, rhashtable_lookup, already deals with
this correctly thanks to c4db8848af6af92f90462258603be844baeab44d
("rhashtable: Move future_tbl into struct bucket_table"),
so only rhashtable_remove is broken by this change.

This patch fixes this by looping over every table from the first
one to the last or until we find the element that we were trying
to delete.

Incidentally, the simple test for detecting rehashing to prevent
starting another shrinking no longer works.  Since it isn't needed
anyway (the work queue and the mutex serve as a natural barrier
to unnecessary rehashes), I've simply killed the test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sun, 15 Mar 2015 10:12:04 +0000 (21:12 +1100)]
rhashtable: Fix use-after-free in rhashtable_walk_stop

The commit c4db8848af6af92f90462258603be844baeab44d ("rhashtable:
Move future_tbl into struct bucket_table") introduced a use-after-
free bug in rhashtable_walk_stop because it dereferences tbl after
dropping the RCU read lock.

This patch fixes it by moving the RCU read unlock down to the bottom
of rhashtable_walk_stop.  In fact this was how I had it originally
but it got dropped while rearranging patches because this one
depended on the async freeing of bucket_table.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Petri Gynther [Fri, 13 Mar 2015 21:45:00 +0000 (14:45 -0700)]
net: bcmgenet: add support for Hardware Filter Block

Add support for Hardware Filter Block (HFB) so that incoming Rx traffic
can be matched and directed to desired Rx queues.

Signed-off-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Mar 2015 02:02:35 +0000 (22:02 -0400)]
Merge branch 'ebpf_skb_fields'

Alexei Starovoitov says:

====================
bpf: allow eBPF access skb fields

V1->V2:
- refactored field access converter into common helper convert_skb_access()
  used in both classic and extended BPF
- added missing build_bug_on for field 'len'
- added comment to uapi/linux/bpf.h as suggested by Daniel
- dropped exposing 'ifindex' field for now

Classic BPF has a way to access skb fields, whereas extended BPF didn't.
This patch introduces that ability.

Classic BPF can access fields via negative SKF_AD_OFF offset.
Positive bpf_ld_abs N is treated as load from packet, whereas
bpf_ld_abs -0x1000 + N is treated as skb fields access.
Many offsets were hard coded over years: SKF_AD_PROTOCOL, SKF_AD_PKTTYPE, etc.
The problem with this approach was that for every new field classic bpf
assembler had to be tweaked.

I've considered doing the same for extended BPF, but for every new field the LLVM
compiler would have to be modified, since it would need to add a new intrinsic.
It could be done with a single intrinsic and a magic offset, or with inline
assembler, but neither is clean from the compiler backend's point of view, since
they look like calls but shouldn't scratch caller-saved registers.

Another approach was to introduce new helper functions like bpf_get_pkt_type()
for every field that we want to access, but that is equally ugly for the kernel
and slow, since helpers are calls and calls are slower than plain loads.
In theory helper calls can be 'inlined' inside the kernel into direct loads, but
since they were calls for user space, the compiler would have to spill registers
around such calls anyway. Teaching the compiler to treat such helpers differently
is even uglier.

A few other ideas were considered. In the end, the best approach seems to be to
introduce a user-accessible mirror of the in-kernel sk_buff structure:

struct __sk_buff {
    __u32 len;
    __u32 pkt_type;
    __u32 mark;
    __u32 queue_mapping;
};

bpf programs will do:

int bpf_prog1(struct __sk_buff *skb)
{
    __u32 var = skb->pkt_type;

which will be compiled to bpf assembler as:

dst_reg = *(u32 *)(src_reg + 4) // 4 == offsetof(struct __sk_buff, pkt_type)

bpf verifier will check validity of access and will convert it to:

dst_reg = *(u8 *)(src_reg + offsetof(struct sk_buff, __pkt_type_offset))
dst_reg &= 7

since 'pkt_type' is a bitfield.

No new instructions added. LLVM doesn't need to be modified.
JITs don't change and verifier already knows when it accesses 'ctx' pointer.
The only thing needed was to convert user visible offset within __sk_buff
to kernel internal offset within sk_buff.
For 'len' and other fields conversion is trivial.
Converting 'pkt_type' takes 2 or 3 instructions depending on endianness.
More fields can be exposed by adding to the end of the 'struct __sk_buff'.
Like vlan_tci and others can be added later.

When the pkt_type field is moved around, goes into a different structure, is removed,
or its size changes, the function convert_skb_access() would need to be updated, and
it will cover both classic and extended BPF.

Patch 2 updates the examples to demonstrate how fields are accessed and
adds new tests for the verifier, since it needs to detect a corner case where an
attacker uses a single bpf instruction in two branches with different
register types.

The 4 fields of __sk_buff are already exposed to user space via classic bpf and
I believe they're useful in extended as well.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Fri, 13 Mar 2015 18:57:43 +0000 (11:57 -0700)]
samples: bpf: add skb->field examples and tests

- modify sockex1 example to count number of bytes in outgoing packets
- modify sockex2 example to count number of bytes and packets per flow
- add 4 stress tests that exercise 'skb->field' code path of verifier

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Fri, 13 Mar 2015 18:57:42 +0000 (11:57 -0700)]
bpf: allow extended BPF programs access skb fields

introduce user accessible mirror of in-kernel 'struct sk_buff':
struct __sk_buff {
    __u32 len;
    __u32 pkt_type;
    __u32 mark;
    __u32 queue_mapping;
};

bpf programs can do:

int bpf_prog(struct __sk_buff *skb)
{
    __u32 var = skb->pkt_type;

which will be compiled to bpf assembler as:

dst_reg = *(u32 *)(src_reg + 4) // 4 == offsetof(struct __sk_buff, pkt_type)

bpf verifier will check validity of access and will convert it to:

dst_reg = *(u8 *)(src_reg + offsetof(struct sk_buff, __pkt_type_offset))
dst_reg &= 7

since skb->pkt_type is a bitfield.
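A self-contained variant of the fragment above, as a hedged example (socket-filter
return convention: return the number of bytes to keep, 0 to drop; in practice this is
compiled with clang targeting BPF):

#include <linux/types.h>
#include <linux/bpf.h>

int bpf_prog(struct __sk_buff *skb)
{
    __u32 type = skb->pkt_type;

    /* keep host-directed packets in full, drop everything else */
    if (type == 0)      /* 0 == PACKET_HOST */
        return skb->len;

    return 0;
}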

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Mar 2015 01:57:30 +0000 (21:57 -0400)]
Merge branch 'ebpf_helpers'

Daniel Borkmann says:

====================
eBPF updates

Two small eBPF helper additions to better match up with ancillary
classic BPF functionality.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Sat, 14 Mar 2015 01:27:17 +0000 (02:27 +0100)]
ebpf: add helper for obtaining current processor id

This patch adds the possibility to obtain raw_smp_processor_id() in
eBPF. Currently, this is only possible in classic BPF where commit
da2033c28226 ("filter: add SKF_AD_RXHASH and SKF_AD_CPU") has added
facilities for this.

Perhaps most importantly, this would also allow us to track per CPU
statistics with eBPF maps, or to implement a poor-man's per CPU data
structure through eBPF maps.

Example function proto-type looks like:

  u32 (*smp_processor_id)(void) = (void *)BPF_FUNC_get_smp_processor_id;

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Sat, 14 Mar 2015 01:27:16 +0000 (02:27 +0100)]
ebpf: add prandom helper for packet sampling

This work is similar to commit 4cd3675ebf74 ("filter: added BPF
random opcode") and adds a possibility for packet sampling in eBPF.

Currently, this is only possible in classic BPF, and it is useful for
combining sampling with e.g. packet sockets, and also with tc.

Example function proto-type looks like:

  u32 (*prandom_u32)(void) = (void *)BPF_FUNC_get_prandom_u32;

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 15 Mar 2015 23:56:52 +0000 (19:56 -0400)]
Merge branch 'gianfar-next'

Claudiu Manoil says:

====================
gianfar: ARM port driver updates (2/2)

The 2nd round of driver updates to make gianfar portable to ARM,
for the ARM based SoC that integrates the eTSEC - "ls1021a".
The patches address the bulk of the remaining endianness issues -
handling DMA fields (BD and FCB), and device tree properties.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Jingchang Lu [Fri, 13 Mar 2015 08:52:32 +0000 (10:52 +0200)]
gianfar: Consider dts property endianess on handling

Use of_property_read*() to get arch endian consistent
property values. Do some refactoring in the process.
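A hedged sketch of the pattern (property and field names are examples in the spirit of
the driver's bindings):

static void gfar_parse_stash(struct device_node *np, struct gfar_private *priv)
{
    u32 stash_len = 0;

    if (of_property_read_bool(np, "bd-stash"))
        priv->bd_stash_en = 1;

    /* of_property_read_u32() hides the raw big-endian cell format */
    if (!of_property_read_u32(np, "rx-stash-len", &stash_len))
        priv->rx_stash_size = stash_len;
}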

Signed-off-by: Jingchang Lu <jingchang.lu@freescale.com>
Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Claudiu Manoil [Fri, 13 Mar 2015 08:36:29 +0000 (10:36 +0200)]
gianfar: Make FCB access endian safe

Use conversion macros to correctly access the BE
fields of the Rx and Tx Frame Control Block on LE CPUs.

Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Claudiu Manoil [Fri, 13 Mar 2015 08:36:28 +0000 (10:36 +0200)]
gianfar: Make BDs access endian safe

Use conversion macros to correctly access the BE
fields of the Rx and Tx Buffer Descriptors on LE CPUs.
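A hedged sketch of what "endian safe" means here (lstatus is the combined status/length
word of a gianfar BD; BD_LFLAG()/TXBD_READY follow the driver's existing macros):

/* descriptor memory is big-endian regardless of CPU endianness */
u32 lstatus = be32_to_cpu(bdp->lstatus);

lstatus |= BD_LFLAG(TXBD_READY);

bdp->lstatus = cpu_to_be32(lstatus);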

Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 15 Mar 2015 05:35:46 +0000 (01:35 -0400)]
Merge branch 'rhashtable-next'

Herbert Xu says:

====================
rhashtable: Fixes + cleanups + preparation for multiple rehash

Patch 1 fixes the walker so that it behaves properly even during
a resize.

Patches 2-3 are cleanups.

Patches 4-6 lay some groundwork for the upcoming multiple rehashing.

This revision fixes the warning coming from the bucket_table->size
downsize and improves its changelog.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:25 +0000 (13:57 +1100)]
rhashtable: Move future_tbl into struct bucket_table

This patch moves future_tbl to open up the possibility of having
multiple rehashes on the same table.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:24 +0000 (13:57 +1100)]
rhashtable: Add rehash counter to bucket_table

This patch adds a rehash counter to bucket_table to indicate
the last bucket that has been rehashed.  This serves two purposes:

1. Any bucket that has been rehashed can never gain a new object.
2. If the rehash counter reaches the size of the table, the table
will forever remain empty.

This patch also downsizes bucket_table->size to an unsigned int
since we do not support sizes greater than 32 bits yet.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:23 +0000 (13:57 +1100)]
rhashtable: Free bucket tables asynchronously after rehash

There is in fact no need to wait for an RCU grace period in the
rehash function, since all insertions are guaranteed to go into
the new table through spin locks.

This patch uses call_rcu to free the old/rehashed table at our
leisure.
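A hedged sketch of the deferred free (assumes an rcu_head embedded in struct
bucket_table):

static void bucket_table_free_rcu(struct rcu_head *head)
{
    bucket_table_free(container_of(head, struct bucket_table, rcu));
}

/* in the rehash path, once all entries live in the new table */
call_rcu(&old_tbl->rcu, bucket_table_free_rcu);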

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:22 +0000 (13:57 +1100)]
rhashtable: Move seed init into bucket_table_alloc

It seems that I have already made every rehash redo the random
seed even though my commit message indicated otherwise :)

Since we have already taken that step, this patch goes one step
further and moves the seed initialisation into bucket_table_alloc.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:21 +0000 (13:57 +1100)]
rhashtable: Use SINGLE_DEPTH_NESTING

We only nest one level deep; there is no need to roll our own
subclasses.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Herbert Xu [Sat, 14 Mar 2015 02:57:20 +0000 (13:57 +1100)]
rhashtable: Fix walker behaviour during rehash

Previously, whenever the walker encountered a resize it simply
snapped back to the beginning and started again.  However, this only
works if the rehash started and completed while the walker was
idle.

If the walker attempts to restart while the rehash is still ongoing,
we may miss objects that we shouldn't have.

This patch fixes this by making the walker walk the old table
followed by the new table just like all other readers.  If a
rehash is detected we will still signal our caller of the fact
so they can prepare for duplicates but we will simply continue
the walk onto the new table after the old one is finished either
by us or by the rehasher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Sat, 14 Mar 2015 20:21:59 +0000 (13:21 -0700)]
net: dsa: do not use slave MII bus for fixed PHYs

Commit cd28a1a9baee7 ("net: dsa: fully divert PHY reads/writes if
requested") introduced a check for particular PHYs that need to be
accessed using the slave MII bus created by DSA, but this check was too
inclusive. This would prevent fixed PHYs from being successfully
registered because those should not go through the slave MII bus created
by DSA.

Make sure we check that the PHY is not a fixed PHY to prevent that from
happening.

Fixes: cd28a1a9baee7 ("net: dsa: fully divert PHY reads/writes if requested")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 14 Mar 2015 19:08:02 +0000 (15:08 -0400)]
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2015-03-13

This series contains updates to ixgbe and ixgbevf.

Don adds additional support for X550 MAC types, which require additional
steps around enabling and disabling Rx.  Also cleans up variable type
inconsistency.

I provide a patch to allow relaxed ordering to be enabled on SPARC
architectures.  Also cleans up ixgbevf whitespace and code comments to
align the driver with networking coding standard.  Lastly cleaned up
uses of memcpy() where ether_addr_copy() could have been used.

Alex removes some dead code in the ixgbe cleanup patch.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 14 Mar 2015 19:05:15 +0000 (15:05 -0400)]
Merge branch 'listener_refactor_part_9'

Eric Dumazet says:

====================
inet: tcp listener refactoring, part 9

This preliminary work pushes socket convergence a bit more:

1) request sock ir_iif is universally set

2) inet_diag can use common helpers to reduce LOC
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 13 Mar 2015 22:51:12 +0000 (15:51 -0700)]
inet_diag: factorize code in new inet_diag_msg_common_fill() helper

Now that the three types of sockets share a common base, we can factorize
code in inet_diag_msg_common_fill().

inet_diag_entry no longer requires saddr_storage & daddr_storage
and the extra copies.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 13 Mar 2015 22:51:11 +0000 (15:51 -0700)]
inet_diag: adjust inet_sk_diag_fill() bug condition

inet_sk_diag_fill() only copes with non timewait and non request socks

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 13 Mar 2015 22:51:10 +0000 (15:51 -0700)]
inet: fill request sock ir_iif for IPv4

Once request socks are in the ehash table, they will need to have
a valid ir_iif field.

This is currently true only for IPv6. This patch extends support
for IPv4 as well.

This means inet_diag_fill_req() can now properly use ir_iif,
which is better for IPv6 link locals anyway, as request sockets
and established sockets will propagate consistent netlink idiag_if.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 14 Mar 2015 18:38:41 +0000 (14:38 -0400)]
Merge branch 'tipc-next'

Jon Maloy says:

====================
tipc: some optimizations and impovements

The commits in this series contain some relatively simple changes that
lead to better throughput across TIPC connections. We also make changes
to the implementation of link transmission queueing and priority
handling, in order to make the code more comprehensible and maintainable.

v2: Commit #2: Redesigned tipc_msg_validate() to use pskb_may_pull(),
               as per feedback from David Miller.
    Commit #3: Some cosmetic changes to tipc_msg_extract(). I tried to
               replace the unconditional skb_linearize() with calls to
               pskb_may_pull() at selected locations, but I gave up.
               First, skb_trim() requires a fully linearized buffer.
               Second, it doesn't make much sense; the whole buffer
               will end up linearized, one way or another.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Paul Maloy [Fri, 13 Mar 2015 20:08:11 +0000 (16:08 -0400)]
tipc: clean up handling of message priorities

Messages transferred by TIPC are assigned an "importance priority", an
integer value indicating how to treat the message when there is link or
destination socket congestion.

There is no separate header field for this value. Instead, the message
user values have been chosen in ascending order according to perceived
importance, so that the message user field can be used for this.

This is not a good solution. First, we have many more users than the
needed priority levels, so we end up treating more priority
levels than necessary. Second, the user field cannot always
accurately reflect the priority of the message. E.g., a message
fragment packet should really have the priority of the enveloped
user data message, and not the priority of the MSG_FRAGMENTER user.
Until now, we have been working around this problem in different ways,
but it is now time to implement a consistent way of handling such
priorities, although still within the constraint that we cannot
allocate any more bits in the regular data message header for this.

In this commit, we define a new priority level, TIPC_SYSTEM_IMPORTANCE,
that will be the only one used apart from the four (lower) user data
levels. All non-data messages map down to this priority. Furthermore,
we take some free bits from the MSG_FRAGMENTER header and allocate
them to store the priority of the enveloped message. We then adjust
the functions msg_importance()/msg_set_importance() so that they
read/set the correct header fields depending on user type.

This small protocol change is fully compatible, because the code at
the receiving end of a link currently reads the importance level
only from user data messages, where there is no change.

Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>