ipv4: Really ignore ICMP address requests/replies.
David S. Miller [Mon, 23 Jul 2012 20:20:26 +0000 (13:20 -0700)]
ipv4: Really ignore ICMP address requests/replies.

Alexey removed kernel side support for requests, and the
only thing we do for replies is log a message if something
doesn't look right.

As Alexey's comment indicates, this belongs in userspace (if
anywhere), and thus we can safely just get rid of this code.

Signed-off-by: David S. Miller <davem@davemloft.net>
decnet: Don't set RTCF_DIRECTSRC.
David S. Miller [Mon, 23 Jul 2012 20:16:59 +0000 (13:16 -0700)]
decnet: Don't set RTCF_DIRECTSRC.

It's an ipv4 defined route flag, and only ipv4 uses it.

Signed-off-by: David S. Miller <davem@davemloft.net>
net/ipv4/ip_vti.c: Fix __rcu warnings detected by sparse.
Saurabh [Mon, 23 Jul 2012 07:52:04 +0000 (07:52 +0000)]
net/ipv4/ip_vti.c: Fix __rcu warnings detected by sparse.

With CONFIG_SPARSE_RCU_POINTER=y, sparse identified references which did not
specify __rcu in ip_vti.c.

Signed-off-by: Saurabh Mohan <saurabh.mohan@vyatta.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Remove redundant assignment
Lin Ming [Mon, 23 Jul 2012 04:11:21 +0000 (04:11 +0000)]
ipv4: Remove redundant assignment

It is redundant to set no_addr and accept_local to 0 and then assign them
other values immediately afterwards.

Signed-off-by: Lin Ming <mlin@ss.pku.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
rds: set correct msg_namelen
Weiping Pan [Mon, 23 Jul 2012 02:37:48 +0000 (10:37 +0800)]
rds: set correct msg_namelen

Jay Fenlason (fenlason@redhat.com) found a bug:
recvfrom() on an RDS socket can return the contents of random kernel
memory to userspace if it was called with an address length larger than
sizeof(struct sockaddr_in).
rds_recvmsg() also fails to set the addr_len parameter properly before
returning, but that's just a bug.
There are also a number of cases where recvfrom() can return an entirely bogus
address. Anything in rds_recvmsg() that returns a non-negative value but does
not go through the "sin = (struct sockaddr_in *)msg->msg_name;" code path
at the end of the while(1) loop will return up to 128 bytes of kernel memory
to userspace.

I wrote two test programs to reproduce this bug; you will see that in
rds_server, fromAddr will be overwritten and the sock_fd that follows it will
be clobbered.
Yes, it is the programmer's fault to set msg_namelen incorrectly, but it is
better to make the kernel copy the real length of the address to user space in
such a case.

How to run the test programs?
I tested them on a 32-bit x86 system, 3.5.0-rc7.

1. Compile:
   gcc -o rds_client rds_client.c
   gcc -o rds_server rds_server.c

2. Run ./rds_server on one console.

3. Run ./rds_client on another console.

4. You will see something like:
server is waiting to receive data...
old socket fd=3
server received data from client:data from client
msg.msg_namelen=32
new socket fd=-1067277685
sendmsg()
: Bad file descriptor

/***************** rds_client.c ********************/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* AF_RDS needs RDS support in the kernel; on older libcs the constant may
 * have to be defined by hand. */

int main(void)
{
    int sock_fd;
    struct sockaddr_in serverAddr;
    struct sockaddr_in toAddr;
    char recvBuffer[128] = "data from client";
    struct msghdr msg;
    struct iovec iov;

    sock_fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
    if (sock_fd < 0) {
        perror("create socket error\n");
        exit(1);
    }

    memset(&serverAddr, 0, sizeof(serverAddr));
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
    serverAddr.sin_port = htons(4001);

    if (bind(sock_fd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) < 0) {
        perror("bind() error\n");
        close(sock_fd);
        exit(1);
    }

    memset(&toAddr, 0, sizeof(toAddr));
    toAddr.sin_family = AF_INET;
    toAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
    toAddr.sin_port = htons(4000);
    msg.msg_name = &toAddr;
    msg.msg_namelen = sizeof(toAddr);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_iov->iov_base = recvBuffer;
    msg.msg_iov->iov_len = strlen(recvBuffer) + 1;
    msg.msg_control = 0;
    msg.msg_controllen = 0;
    msg.msg_flags = 0;

    if (sendmsg(sock_fd, &msg, 0) == -1) {
        perror("sendto() error\n");
        close(sock_fd);
        exit(1);
    }

    printf("client send data:%s\n", recvBuffer);

    memset(recvBuffer, '\0', 128);

    msg.msg_name = &toAddr;
    msg.msg_namelen = sizeof(toAddr);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_iov->iov_base = recvBuffer;
    msg.msg_iov->iov_len = 128;
    msg.msg_control = 0;
    msg.msg_controllen = 0;
    msg.msg_flags = 0;
    if (recvmsg(sock_fd, &msg, 0) == -1) {
        perror("recvmsg() error\n");
        close(sock_fd);
        exit(1);
    }

    printf("receive data from server:%s\n", recvBuffer);

    close(sock_fd);

    return 0;
}

/***************** rds_server.c ********************/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* AF_RDS needs RDS support in the kernel; on older libcs the constant may
 * have to be defined by hand. */

int main(void)
{
    struct sockaddr_in fromAddr;
    int sock_fd;
    struct sockaddr_in serverAddr;
    unsigned int addrLen;
    char recvBuffer[128];
    struct msghdr msg;
    struct iovec iov;

    sock_fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
    if (sock_fd < 0) {
        perror("create socket error\n");
        exit(0);
    }

    memset(&serverAddr, 0, sizeof(serverAddr));
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
    serverAddr.sin_port = htons(4000);
    if (bind(sock_fd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) < 0) {
        perror("bind error\n");
        close(sock_fd);
        exit(1);
    }

    printf("server is waiting to receive data...\n");
    msg.msg_name = &fromAddr;

    /*
     * I add 16 to sizeof(fromAddr), i.e. 32,
     * and pay attention to the definition of fromAddr:
     * recvmsg() will overwrite sock_fd,
     * since the kernel will copy 32 bytes to userspace.
     *
     * If you just use sizeof(fromAddr), it works fine.
     */
    msg.msg_namelen = sizeof(fromAddr) + 16;
    /* msg.msg_namelen = sizeof(fromAddr); */
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_iov->iov_base = recvBuffer;
    msg.msg_iov->iov_len = 128;
    msg.msg_control = 0;
    msg.msg_controllen = 0;
    msg.msg_flags = 0;

    while (1) {
        printf("old socket fd=%d\n", sock_fd);
        if (recvmsg(sock_fd, &msg, 0) == -1) {
            perror("recvmsg() error\n");
            close(sock_fd);
            exit(1);
        }
        printf("server received data from client:%s\n", recvBuffer);
        printf("msg.msg_namelen=%d\n", msg.msg_namelen);
        printf("new socket fd=%d\n", sock_fd);
        strcat(recvBuffer, "--data from server");
        if (sendmsg(sock_fd, &msg, 0) == -1) {
            perror("sendmsg()\n");
            close(sock_fd);
            exit(1);
        }
    }

    close(sock_fd);
    return 0;
}

Signed-off-by: Weiping Pan <wpan@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
openvswitch: potential NULL deref in sample()
Dan Carpenter [Mon, 23 Jul 2012 07:46:28 +0000 (10:46 +0300)]
openvswitch: potential NULL deref in sample()

If there is no OVS_SAMPLE_ATTR_ACTIONS set then "acts_list" is NULL and
it leads to a NULL dereference when we call nla_len(acts_list).  This
is a static checker fix, not something I have seen in testing.
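
The guard needed is roughly the following sketch (the exact placement inside
sample() and the chosen return value are assumptions, not the literal patch):

        if (!acts_list)         /* no OVS_SAMPLE_ATTR_ACTIONS supplied */
                return 0;       /* nothing to execute, avoid nla_len(NULL) */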

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp: don't drop MTU reduction indications
Eric Dumazet [Mon, 23 Jul 2012 07:48:52 +0000 (09:48 +0200)]
tcp: don't drop MTU reduction indications

ICMP messages generated in the output path when the frame length is bigger
than the MTU are actually lost, because the socket is owned by the user
(doing the xmit).

One example is the ipgre_tunnel_xmit() calling
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));

We had a similar case fixed in commit a34a101e1e6 (ipv6: disable GSO on
sockets hitting dst_allfrag).

The problem with that fix is that it relied on retransmit timers, so short TCP
sessions paid too big a latency price.

This patch uses the tcp_release_cb() infrastructure so that MTU
reduction messages (ICMP messages) are not lost, and no extra delay
is added in TCP transmits.

Reported-by: Maciej Żenczykowski <maze@google.com>
Diagnosed-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Tore Anderson <tore@fud.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
bnx2x: Add new 57840 device IDs
Yuval Mintz [Mon, 23 Jul 2012 07:25:43 +0000 (10:25 +0300)]
bnx2x: Add new 57840 device IDs

The 57840 boards come in two flavours: 2 x 20G and 4 x 10G.
To better differentiate between the two flavours, a separate device ID
was assigned to each.
The silicon default value is still the currently supported 57840 device ID
(0x168d), and since a user can damage the nvram (e.g., 'ethtool -E')
the driver will still support this device ID to allow the user to amend the
nvram back into a supported configuration.

Notice this patch contains lines longer than 80 characters (strings).

Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp: avoid oops in tcp_metrics and reset tcpm_stamp
Julian Anastasov [Mon, 23 Jul 2012 07:46:38 +0000 (10:46 +0300)]
tcp: avoid oops in tcp_metrics and reset tcpm_stamp

In tcp_tw_remember_stamp we incorrectly checked tw
instead of tm; this can lead to an oops if the cached entry is
not found.

tcpm_stamp was not updated in tcpm_check_stamp when
tcpm_suck_dst was called. Move the update into tcpm_suck_dst,
so that we do not call it endlessly on every subsequent cache hit
after TCP_METRICS_TIMEOUT.

Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
niu: Change niu_rbr_fill() to use unlikely() to check niu_rbr_add_page() return value
Shuah Khan [Fri, 20 Jul 2012 13:34:32 +0000 (13:34 +0000)]
niu: Change niu_rbr_fill() to use unlikely() to check niu_rbr_add_page() return value

Change niu_rbr_fill() to use unlikely() to check niu_rbr_add_page() return
value to be consistent with the rest of the checks after niu_rbr_add_page()
calls in this file.

Signed-off-by: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
niu: Fix to check for dma mapping errors.
Shuah Khan [Fri, 20 Jul 2012 11:50:35 +0000 (11:50 +0000)]
niu: Fix to check for dma mapping errors.

Fix the Neptune ethernet driver to check for DMA mapping errors after the
map_page() interface returns.
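
The check follows the usual DMA-API pattern, sketched here with the generic
calls (the driver itself goes through its np->ops wrappers):

        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, addr)) {
                /* unwind and fail the buffer fill instead of handing the
                 * hardware a bogus DMA address */
        }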

Signed-off-by: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: Fix references to out-of-scope variables in put_cmsg_compat()
Jesper Juhl [Sun, 22 Jul 2012 11:37:20 +0000 (11:37 +0000)]
net: Fix references to out-of-scope variables in put_cmsg_compat()

In net/compat.c::put_cmsg_compat() we may assign 'data' the address of
either the 'ctv' or 'cts' local variables inside the 'if
(!COMPAT_USE_64BIT_TIME)' branch.

Those variables go out of scope at the end of the 'if' statement, so
when we use 'data' further down in 'copy_to_user(CMSG_COMPAT_DATA(cm),
data, cmlen - sizeof(struct compat_cmsghdr))' there's no telling what
it may be referring to - not good.

Fix the problem by simply giving 'ctv' and 'cts' function scope.
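
A simplified illustration of the lifetime problem (not the exact kernel
code):

        void *data;

        if (!COMPAT_USE_64BIT_TIME) {
                struct compat_timeval ctv;      /* block scope */

                /* ... fill ctv ... */
                data = &ctv;
        }       /* ctv's storage may be reused from here on */

        copy_to_user(CMSG_COMPAT_DATA(cm), data, cmlen - sizeof(struct compat_cmsghdr));

Moving the declarations of 'ctv' and 'cts' to the top of the function keeps
the storage alive for the later copy_to_user() call.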

Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'kill_rtcache'
David S. Miller [Mon, 23 Jul 2012 00:04:15 +0000 (17:04 -0700)]
Merge branch 'kill_rtcache'

The ipv4 routing cache is non-deterministic, performance wise, and is
subject to reasonably easy to launch denial of service attacks.

The routing cache works great for well behaved traffic, and the world
was a much friendlier place when the tradeoffs that led to the routing
cache's design were considered.

What it boils down to is that the performance of the routing cache is
a product of the traffic patterns seen by a system rather than being a
product of the contents of the routing tables.  The former is
controllable by external entities.

Even for "well behaved" legitimate traffic, high volume sites can see
hit rates in the routing cache of only ~10%.

The general flow of this patch series is that first the routing cache
is removed.  We build a completely new rtable entry every lookup
request.

Next we make some simplifications due to the fact that removing the
routing cache causes several members of struct rtable to become no
longer necessary.

Then we need to make some amends such that we can legally cache
pre-constructed routes in the FIB nexthops.  Firstly, we need to
invalidate routes which are hit with nexthop exceptions.  Secondly we
have to change the semantics of rt->rt_gateway such that zero means
that the destination is on-link and non-zero otherwise.

Now that the preparations are ready, we start caching precomputed
routes in the FIB nexthops.  Output and input routes need different
kinds of care when determining if we can legally do such caching or
not.  The details are in the commit log messages for those changes.

The patch series then winds down with some more struct rtable
simplifications and other tidy ups that remove unnecessary overhead.

On a SPARC-T3 output route lookups are ~876 cycles.  Input route
lookups are ~1169 cycles with rpfilter disabled, and about ~1468
cycles with rpfilter enabled.

These measurements were taken with the kbench_mod test module in the
net_test_tools GIT tree:

git://git.kernel.org/pub/scm/linux/kernel/git/davem/net_test_tools.git

That GIT tree also includes a udpflood tester tool and stresses
route lookups on packet output.

For example, on the same SPARC-T3 system we can run:

time ./udpflood -l 10000000 10.2.2.11

with routing cache:
real    1m21.955s       user    0m6.530s        sys     1m15.390s

without routing cache:
real    1m31.678s       user    0m6.520s        sys     1m25.140s

Performance undoubtedly can easily be improved further.

For example fib_table_lookup() performs a lot of excessive
computations with all the masking and shifting, some of it
conditionalized to deal with edge cases.

Also, Eric's no-ref optimization for input route lookups can be
re-instated for the FIB nexthop caching code path.  I would be really
pleased if someone would work on that.

In fact anyone suitably motivated can just fire up perf on the loading
of the net_test_tools benchmark kernel module (kbench_mod).  I spent much of
my time going:

bash# perf record insmod ./kbench_mod.ko dst=172.30.42.22 src=74.128.0.1 iif=2
bash# perf report

Thanks to helpful feedback from Joe Perches, Eric Dumazet, Ben
Hutchings, and others.

Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: davinci_emac: add pm_runtime support
Mark A. Greer [Fri, 20 Jul 2012 15:19:22 +0000 (15:19 +0000)]
net: ethernet: davinci_emac: add pm_runtime support

Add pm_runtime support to the TI Davinci EMAC driver.

CC: Sekhar Nori <nsekhar@ti.com>
CC: Kevin Hilman <khilman@ti.com>
Signed-off-by: Mark A. Greer <mgreer@animalcreek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: davinci_emac: Remove unnecessary #include
Mark A. Greer [Fri, 20 Jul 2012 15:19:21 +0000 (15:19 +0000)]
net: ethernet: davinci_emac: Remove unnecessary #include

The '#include <mach/mux.h>' line in davinci_emac.c
causes a compile error because that header file
isn't found.  It turns out that the #include isn't
needed because the driver isn't (and shouldn't be)
touching the mux anyway, so remove it.

CC: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Mark A. Greer <mgreer@animalcreek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net-next: minor cleanups for bonding documentation
Rick Jones [Fri, 20 Jul 2012 10:51:37 +0000 (10:51 +0000)]
net-next: minor cleanups for bonding documentation

The section titled "Configuring Bonding for Maximum Throughput" is
actually section twelve not thirteen, and there are a couple of words
spelled incorrectly.

Signed-off-by: Rick Jones <rick.jones2@hp.com>
Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: netprio_cgroup: rework update socket logic
John Fastabend [Fri, 20 Jul 2012 10:39:25 +0000 (10:39 +0000)]
net: netprio_cgroup: rework update socket logic

Instead of updating the sk_cgrp_prioidx struct field on every send,
this only updates the field when a task is moved via the cgroup
infrastructure.

This allows sockets that may be used by a kernel worker thread
to be managed. For example, in the iscsi case today a user can
put iscsid in a netprio cgroup and control traffic will be sent
with the correct sk_cgrp_prioidx value set, but as soon as data
is sent the kernel worker thread issues a send and sk_cgrp_prioidx
is updated with the kernel worker thread's value, which is the
default case.

It seems more correct to only update the field when the user
explicitly sets it via control group infrastructure. This allows
the users to manage sockets that may be used with other threads.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tun: experimental zero copy tx support
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:23 +0000 (09:23 +0000)]
tun: experimental zero copy tx support

Let vhost-net utilize zero copy tx when used with tun.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: export skb_copy_ubufs
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:20 +0000 (09:23 +0000)]
skbuff: export skb_copy_ubufs

Export skb_copy_ubufs so that modules can orphan frags.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: orphan frags on receive
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:17 +0000 (09:23 +0000)]
net: orphan frags on receive

Zero copy packets are normally sent to the outside
network, but bridging, tun, etc. might loop them
back to the host networking stack. If this happens,
destructors will never be called, so orphan
the frags immediately on receive.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tun: orphan frags on xmit
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:14 +0000 (09:23 +0000)]
tun: orphan frags on xmit

tun xmit is actually a receive on the internal tun
socket. Orphan the frags the same as we do for the normal rx path.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: convert to skb_orphan_frags
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:10 +0000 (09:23 +0000)]
skbuff: convert to skb_orphan_frags

Reduce code duplication a bit using the new helper.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: add an api to orphan frags
Michael S. Tsirkin [Fri, 20 Jul 2012 09:23:07 +0000 (09:23 +0000)]
skbuff: add an api to orphan frags

Many places do
       if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY))
               skb_copy_ubufs(skb, gfp_mask);
to copy and invoke frag destructors if necessary.
Add an inline helper for this.
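
The helper amounts to roughly:

        static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
        {
                if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY)))
                        return 0;
                return skb_copy_ubufs(skb, gfp_mask);
        }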

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ixgbe: Fix build with PCI_IOV enabled.
David S. Miller [Sun, 22 Jul 2012 19:36:41 +0000 (12:36 -0700)]
ixgbe: Fix build with PCI_IOV enabled.

Signed-off-by: David S. Miller <davem@davemloft.net>
forcedeth: advertise transmit time stamping
Richard Cochran [Sun, 22 Jul 2012 07:15:42 +0000 (07:15 +0000)]
forcedeth: advertise transmit time stamping

This driver now offers software transmit time stamping, so it should
advertise that fact via ethtool. Compile tested only.

Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000e: advertise transmit time stamping
Richard Cochran [Sun, 22 Jul 2012 07:15:41 +0000 (07:15 +0000)]
e1000e: advertise transmit time stamping

This driver now offers software transmit time stamping, so it should
advertise that fact via ethtool. Compile tested only.

Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: e1000-devel@lists.sourceforge.net
Signed-off-by: David S. Miller <davem@davemloft.net>
e1000: advertise transmit time stamping
Richard Cochran [Sun, 22 Jul 2012 07:15:40 +0000 (07:15 +0000)]
e1000: advertise transmit time stamping

This driver now offers software transmit time stamping, so it should
advertise that fact via ethtool. Compile tested only.

Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: e1000-devel@lists.sourceforge.net
Signed-off-by: David S. Miller <davem@davemloft.net>
bnx2x: advertise transmit time stamping
Richard Cochran [Sun, 22 Jul 2012 07:15:39 +0000 (07:15 +0000)]
bnx2x: advertise transmit time stamping

This driver now offers software transmit time stamping, so it should
advertise that fact via ethtool. Compile tested only.

Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Cc: Eilon Greenstein <eilong@broadcom.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rtnl: Add #ifdef CONFIG_RPS around num_rx_queues reference
Mark A. Greer [Fri, 20 Jul 2012 13:35:13 +0000 (13:35 +0000)]
rtnl: Add #ifdef CONFIG_RPS around num_rx_queues reference

Commit 76ff5cc91935c51fcf1a6a99ffa28b97a6e7a884
(rtnl: allow to specify number of rx and tx queues
on device creation) added a reference to the net_device
structure's 'num_rx_queues' member in

net/core/rtnetlink.c:rtnl_fill_ifinfo()

However, the definition for 'num_rx_queues' is surrounded
by an '#ifdef CONFIG_RPS' while the new reference to it is
not.  This causes a compile error when CONFIG_RPS is not
defined.

Fix the compile error by surrounding the new reference to
'num_rx_queues' by an '#ifdef CONFIG_RPS'.
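
The shape of the fix, as a sketch (the real reference sits inside
rtnl_fill_ifinfo()'s larger nla_put chain):

#ifdef CONFIG_RPS
        if (nla_put_u32(skb, IFLA_NUM_RX_QUEUES, dev->num_rx_queues))
                goto nla_put_failure;
#endif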

CC: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Mark A. Greer <mgreer@animalcreek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net...
David S. Miller [Sun, 22 Jul 2012 19:23:18 +0000 (12:23 -0700)]
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next

Jeff Kirsher says:

--------------------
This series contains updates to ixgbe and ixgbevf.
 ...
Akeem G. Abodunrin (1):
  igb: reset PHY in the link_up process to recover PHY setting after
    power down.

Alexander Duyck (8):
  ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov
  ixgbe: Change how we check for pre-existing and assigned VFs
  ixgbevf: Add lock around mailbox ops to prevent simultaneous access
  ixgbevf: Add support for PCI error handling
  ixgbe: Fix handling of FDIR_HASH flag
  ixgbe: Reduce Rx header size to what is actually used
  ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based
    on UP
  ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy
    interrupts

Don Skidmore (1):
  ixgbe: add support for new 82599 device

Greg Rose (1):
  ixgbevf: Fix namespace issue with ixgbe_write_eitr

John Fastabend (2):
  ixgbe: fix RAR entry counting for generic and fdb_add()
  ixgbe: remove extra unused queues in DCB + FCoE case
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'vhost-net-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mst...
David S. Miller [Sun, 22 Jul 2012 19:19:24 +0000 (12:19 -0700)]
Merge branch 'vhost-net-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

wimax: fix printk format warnings
Randy Dunlap [Sat, 21 Jul 2012 10:54:35 +0000 (10:54 +0000)]
wimax: fix printk format warnings

Fix printk format warnings in drivers/net/wimax/i2400m:

drivers/net/wimax/i2400m/control.c: warning: format '%zu' expects argument of type 'size_t', but argument 4 has type 'ssize_t' [-Wformat]
drivers/net/wimax/i2400m/control.c: warning: format '%zu' expects argument of type 'size_t', but argument 5 has type 'ssize_t' [-Wformat]
drivers/net/wimax/i2400m/usb-fw.c: warning: format '%zu' expects argument of type 'size_t', but argument 4 has type 'ssize_t' [-Wformat]

I don't see these warnings on x86.  The warnings that are quoted above
are from Geert's kernel build reports.
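
For illustration only: '%zu' matches a size_t argument, while an ssize_t
argument wants '%zd' (or an explicit cast), which is the kind of change the
fix makes:

        size_t  len;    /* printed with %zu */
        ssize_t used;   /* printed with %zd */

        pr_debug("len %zu, used %zd\n", len, used);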

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Cc: linux-wimax@intel.com
Cc: wimax@linuxwimax.org
Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: Implement quick failover draft from tsvwg
Neil Horman [Sat, 21 Jul 2012 07:56:07 +0000 (07:56 +0000)]
sctp: Implement quick failover draft from tsvwg

I've seen several attempts recently made to do quick failover of sctp transports
by reducing various retransmit timers and counters.  While it's possible to
implement a faster failover on multihomed sctp associations, it's not
particularly robust, in that it can lead to unneeded retransmits, as well as
false connection failures due to intermittent latency on a network.

Instead, lets implement the new ietf quick failover draft found here:
http://tools.ietf.org/html/draft-nishida-tsvwg-sctp-failover-05

This will let the sctp stack identify transports that have had a small number of
errors and quickly avoid using them until their reliability can be
re-established.  I've tested this out on two virt guests connected via multiple
isolated virt networks and believe it's in compliance with the above draft and
works well.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Vlad Yasevich <vyasevich@gmail.com>
CC: Sridhar Samudrala <sri@us.ibm.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: linux-sctp@vger.kernel.org
CC: joe@perches.com
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: fix race condition in several drivers when reading stats
Kevin Groeneveld [Sat, 21 Jul 2012 06:30:50 +0000 (06:30 +0000)]
net: fix race condition in several drivers when reading stats

Fix race condition in several network drivers when reading stats on 32bit
UP architectures.  These drivers update their stats in a BH context and
therefore should use u64_stats_fetch_begin_bh/u64_stats_fetch_retry_bh
instead of u64_stats_fetch_begin/u64_stats_fetch_retry when reading the
stats.
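
The BH-aware reader side then looks like this sketch (the stats structure
name here is made up):

        unsigned int start;
        u64 packets, bytes;

        do {
                start = u64_stats_fetch_begin_bh(&ring->syncp);
                packets = ring->packets;
                bytes = ring->bytes;
        } while (u64_stats_fetch_retry_bh(&ring->syncp, start));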

Signed-off-by: Kevin Groeneveld <kgroeneveld@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: tcp: set unicast_sock uc_ttl to -1
Eric Dumazet [Fri, 20 Jul 2012 22:28:51 +0000 (22:28 +0000)]
ipv4: tcp: set unicast_sock uc_ttl to -1

Set unicast_sock uc_ttl to -1 so that we select the right ttl,
instead of sending packets with a 0 ttl.

Bug added in commit be9f4a44e7d4 (ipv4: tcp: remove per net tcp_sock)

Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
igb: reset PHY in the link_up process to recover PHY setting after power down.
Akeem G. Abodunrin [Tue, 17 Jul 2012 04:51:18 +0000 (04:51 +0000)]
igb: reset PHY in the link_up process to recover PHY setting after power down.

There was a previous patch to resolve an issue with the 82576 losing its PHY
setting after PHY power down. However, that previous implementation triggered
speed mismatch and occasional link loss.
PHY setting and speed mismatch issues.

Signed-off-by: Akeem G. Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy interrupts
Alexander Duyck [Tue, 17 Jul 2012 01:20:28 +0000 (01:20 +0000)]
ixgbe: Use 1TC DCB instead of disabling DCB for MSI and legacy interrupts

This change makes it so that we can use 1TC DCB in the case of MSI and
legacy interrupts.  The advantage to this is that it allows us to fully
support FCoE w/ DCB instead of having to drop to link flow control only
when using these interrupt modes.

Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: add support for new 82599 device
Don Skidmore [Wed, 11 Jul 2012 07:17:42 +0000 (07:17 +0000)]
ixgbe: add support for new 82599 device

This patch adds support for a new 82599 device that supports WoL.

Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: remove extra unused queues in DCB + FCoE case
John Fastabend [Tue, 5 Jun 2012 05:58:52 +0000 (05:58 +0000)]
ixgbe: remove extra unused queues in DCB + FCoE case

With DCB and FCoE configured extra queues may be allocated and
never used. After this patch we calculate the max correctly.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: fix RAR entry counting for generic and fdb_add()
John Fastabend [Thu, 31 May 2012 12:42:26 +0000 (12:42 +0000)]
ixgbe: fix RAR entry counting for generic and fdb_add()

Do RAR entry accounting correctly so that errors are reported and
promisc mode is set correctly when the number of entries exceeds
the hardware limits.

This can happen with many macvlan devices attached to the PF or
by adding many fdb entries in SR-IOV modes.

Also this includes a small refactor to fdb_add() to avoid having so
many nested if/else statements after adding a check for the number
of RAR entries.

The max entries for the PF is currently 16; we allow 15 additional
entries to account for the defined MAC.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based on UP
Alexander Duyck [Fri, 25 May 2012 01:45:38 +0000 (01:45 +0000)]
ixgbe: Use num_tcs.pg_tcs as upper limit for TC when checking based on UP

This change makes it so the function ixgbe_dcb_get_tc_from_up will use the
num_tcs.pg_tcs to determine the starting value for determining a traffic
class based on a user priority.  The main motivation for this change is to
address possible bad configurations in which more TCs worth of data are
populated then there are actual TCs.  By limiting this value we can at
least make certain we are not providing a map with values that are out of
range.

As a result any user priorities that are setup in the configuration with a
traffic class mapping higher than what the hardware supports will be
reported as being on TC 0.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Reduce Rx header size to what is actually used
Alexander Duyck [Thu, 24 May 2012 01:59:27 +0000 (01:59 +0000)]
ixgbe: Reduce Rx header size to what is actually used

The recent changes to netdev_alloc_skb actually make it so that the size of
the buffer now actually has a more direct input on the truesize.  So in
order to make best use of the piece of a page we are allocated I am
reducing the IXGBE_RX_HDR_SIZE to 256 so that our truesize will be reduced
by 256 bytes as well.

This should result in performance improvements since the number of uses per
page should increase from 4 to 6 in the case of a 4K page.  In addition we
should see socket performance improvements due to the truesize dropping
to less than 1K for buffers less than 256 bytes.

Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbevf: Fix namespace issue with ixgbe_write_eitr
Greg Rose [Tue, 22 May 2012 02:17:49 +0000 (02:17 +0000)]
ixgbevf: Fix namespace issue with ixgbe_write_eitr

Make the function static to cleanup namespace.

Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <Sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Fix handling of FDIR_HASH flag
Alexander Duyck [Wed, 6 Jun 2012 05:38:20 +0000 (05:38 +0000)]
ixgbe: Fix handling of FDIR_HASH flag

This change makes it so that we can use the atr_sample_rate to determine if
we are capable of supporting ATR. The advantage to this approach is that it
allows us to now determine the setting of the IXGBE_FLAG_FDIR_HASH_CAPABLE
based on the queueing scheme, instead of the queueing scheme being based on
the flag.

Using this approach there are essentially 5 conditions that must be checked
prior to trying to enable ATR:
1.  Is SR-IOV disabled?
2.  Are the number of TCs <= 1?
3.  Is RSS queueing limit greater than 1?
4.  Is atr_sample_rate set?
5.  Is Flow Director perfect filtering disabled?

If any of these conditions is not met, ATR filtering should be disabled.
Note that in the case of conditions 1 through 4 being met we will set
things up for ATR queueing, however if test 5 fails we will still leave the
queues allocated for use by perfect filters.  The reason for this is to
allow for us to switch back and forth between ntuple and ATR without
needing to reallocate the descriptor rings.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbevf: Add support for PCI error handling
Alexander Duyck [Fri, 11 May 2012 08:33:32 +0000 (08:33 +0000)]
ixgbevf: Add support for PCI error handling

This change adds support for handling IO errors and slot resets.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbevf: Add lock around mailbox ops to prevent simultaneous access
Alexander Duyck [Fri, 11 May 2012 08:33:06 +0000 (08:33 +0000)]
ixgbevf: Add lock around mailbox ops to prevent simultaneous access

This change adds a spinlock around the mailbox accesses to prevent
simultaneous access to the mailboxes.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Change how we check for pre-existing and assigned VFs
Alexander Duyck [Wed, 23 May 2012 02:58:40 +0000 (02:58 +0000)]
ixgbe: Change how we check for pre-existing and assigned VFs

This patch does two things.  First it drops the unnecessary work of
searching for enabled VFs when we first bring up the adapter and instead
just uses pci_num_vf to determine how many VFs are enabled on the adapter.

The second thing it does is drop the use of vfdev from the vf_data_storage
structure.  Instead we just search the entire system for a VF that has us
as its PF, and then if that VF is assigned we indicate that the VFs are
assigned.  This allows us to still check for assigned VFs even if the
vfinfo allocation has failed, or vfinfo has been freed.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov
Alexander Duyck [Wed, 9 May 2012 08:09:25 +0000 (08:09 +0000)]
ixgbe: Drop probe_vf and merge functionality into ixgbe_enable_sriov

This is meant to fix a bug in which we were not checking for pre-existing
VFs if we were not setting the max_vfs value at driver load.  What happens
now is that we always call ixgbe_enable_sriov and this checks for
pre-existing VFs or requested VFs prior to deciding on no SR-IOV.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
vhost: make vhost work queue visible
Stefan Hajnoczi [Sat, 21 Jul 2012 06:55:37 +0000 (06:55 +0000)]
vhost: make vhost work queue visible

The vhost work queue allows processing to be done in vhost worker thread
context, which uses the owner process mm.  Access to the vring and guest
memory is typically only possible from vhost worker context so it is
useful to allow work to be queued directly by users.

Currently vhost_net only uses the poll wrappers which do not expose the
work queue functions.  However, for tcm_vhost (vhost_scsi) it will be
necessary to queue custom work.

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vhost: Separate vhost-net features from vhost features
Stefan Hajnoczi [Sat, 21 Jul 2012 06:55:36 +0000 (06:55 +0000)]
vhost: Separate vhost-net features from vhost features

In order for other vhost devices to use the VHOST_FEATURES bits the
vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES
constant.

(Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES)

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Asias He <asias@redhat.com>
Signed-off-by: Nicholas A. Bellinger <nab@risingtidesystems.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
forcedeth: spin_unlock_irq in interrupt handler fix
Denis Efremov [Fri, 20 Jul 2012 21:54:34 +0000 (01:54 +0400)]
forcedeth: spin_unlock_irq in interrupt handler fix

Replace the spin_lock_irq/spin_unlock_irq pair in the interrupt
handler with a spin_lock_irqsave/spin_unlock_irqrestore pair.
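
In the handler the pattern becomes (a sketch; the lock name is the driver's
private lock):

        unsigned long flags;

        spin_lock_irqsave(&np->lock, flags);
        /* ... handle the events ... */
        spin_unlock_irqrestore(&np->lock, flags);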

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Denis Efremov <yefremov.denis@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jesse/openvswitch
David S. Miller [Fri, 20 Jul 2012 23:16:34 +0000 (16:16 -0700)]
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jesse/openvswitch

Jesse Gross says:

====================
A few bug fixes and small enhancements for net-next/3.6.
 ...
Ansis Atteka (1):
      openvswitch: Do not send notification if ovs_vport_set_options() failed

Ben Pfaff (1):
      openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().

Jesse Gross (2):
      openvswitch: Enable retrieval of TCP flags from IPv6 traffic.
      openvswitch: Reset upper layer protocol info on internal devices.

Leo Alterman (1):
      openvswitch: Fix typo in documentation.

Pravin B Shelar (1):
      openvswitch: Check correct return value from skb_gso_segment()

Raju Subramanian (1):
      openvswitch: Replace Nicira Networks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Fix neigh lookup keying over loopback/point-to-point devices.
David S. Miller [Fri, 20 Jul 2012 23:00:53 +0000 (16:00 -0700)]
ipv4: Fix neigh lookup keying over loopback/point-to-point devices.

We were using a special key "0" for all loopback and point-to-point
device neigh lookups under ipv4, but we wouldn't use that special
key for the neigh creation.

So basically we'd make a new neigh at each and every lookup :-)

This special case to use only one neigh for these device types
is of dubious value, so just remove it entirely.

Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
openvswitch: Fix typo in documentation.
Leo Alterman [Fri, 20 Jul 2012 21:51:07 +0000 (14:51 -0700)]
openvswitch: Fix typo in documentation.

Signed-off-by: Leo Alterman <lalterman@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().
Ben Pfaff [Fri, 20 Jul 2012 21:47:54 +0000 (14:47 -0700)]
openvswitch: Check gso_type for correct sk_buff in queue_gso_packets().

At the point where it was used, skb_shinfo(skb)->gso_type referred to a
post-GSO sk_buff.  Thus, it would always be 0.  We want to know the pre-GSO
gso_type, so we need to obtain it before segmenting.

Before this change, the kernel would pass inconsistent data to userspace:
packets for UDP fragments with nonzero offset would be passed along with
flow keys that indicate a zero offset (that is, the flow key for "later"
fragments claimed to be "first" fragments).  This inconsistency tended
to confuse Open vSwitch userspace, causing it to log messages about
"failed to flow_del" the flows with "later" fragments.

Signed-off-by: Ben Pfaff <blp@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
openvswitch: Check correct return value from skb_gso_segment()
Pravin B Shelar [Fri, 20 Jul 2012 21:46:29 +0000 (14:46 -0700)]
openvswitch: Check correct return value from skb_gso_segment()

Fix return check typo.

Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
ipv4: Kill rt->fi
David S. Miller [Tue, 17 Jul 2012 21:55:59 +0000 (14:55 -0700)]
ipv4: Kill rt->fi

It's not really needed.

We only grabbed a reference to the fib_info for the sake of fib_info
local metrics.

However, fib_info objects are freed using RCU, as are therefore their
private metrics (if any).

We would have triggered a route cache flush if we eliminated a
reference to a fib_info object in the routing tables.

Therefore, any existing cached routes will first check and see that
they have been invalidated before an errant reference to these
metric values would occur.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Turn rt->rt_route_iif into rt->rt_is_input.
David S. Miller [Tue, 17 Jul 2012 21:44:26 +0000 (14:44 -0700)]
ipv4: Turn rt->rt_route_iif into rt->rt_is_input.

That is this value's only use, as a boolean to indicate whether
a route is an input route or not.

So implement it that way, using a u16 gap present in the struct
already.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Kill rt->rt_oif
David S. Miller [Tue, 17 Jul 2012 21:39:44 +0000 (14:39 -0700)]
ipv4: Kill rt->rt_oif

Never actually used.

It was being set on output routes to the original OIF specified in the
flow key used for the lookup.

Adjust the only user, ipmr_rt_fib_lookup(), for greater correctness of
the flowi4_oif and flowi4_iif values, thanks to feedback from Julian
Anastasov.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Dirty less cache lines in route caching paths.
David S. Miller [Tue, 17 Jul 2012 21:09:39 +0000 (14:09 -0700)]
ipv4: Dirty less cache lines in route caching paths.

Don't bother incrementing dst->__use and setting dst->lastuse,
they are completely pointless and just slow things down.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Kill FLOWI_FLAG_RT_NOCACHE and associated code.
David S. Miller [Tue, 17 Jul 2012 21:02:46 +0000 (14:02 -0700)]
ipv4: Kill FLOWI_FLAG_RT_NOCACHE and associated code.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Cache input routes in fib_info nexthops.
David S. Miller [Tue, 17 Jul 2012 19:58:50 +0000 (12:58 -0700)]
ipv4: Cache input routes in fib_info nexthops.

Caching input routes is slightly simpler than output routes, since we
don't need to be concerned with nexthop exceptions.  (locally
destined, and routed packets, never trigger PMTU events or redirects
that will be processed by us).

However, we have to elide caching for the DIRECTSRC and non-zero itag
cases.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Cache output routes in fib_info nexthops.
David S. Miller [Tue, 17 Jul 2012 19:20:47 +0000 (12:20 -0700)]
ipv4: Cache output routes in fib_info nexthops.

If we have an output route that lacks nexthop exceptions, we can cache
it in the FIB info nexthop.

Such routes will have DST_HOST cleared because such routes refer to a
family of destinations, rather than just one.

The sequence of the handling of exceptions during route lookup is
adjusted to make the logic work properly.

Before we allocate the route, we lookup the exception.

Then we know if we will cache this route or not, and therefore whether
DST_HOST should be set on the allocated route.

Then we use DST_HOST to key off whether we should store the resulting
route, during rt_set_nexthop(), in the FIB nexthop cache.

With help from Eric Dumazet.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Kill routes during PMTU/redirect updates.
David S. Miller [Tue, 17 Jul 2012 18:31:28 +0000 (11:31 -0700)]
ipv4: Kill routes during PMTU/redirect updates.

Mark them obsolete so there will be a re-lookup to fetch the
FIB nexthop exception info.

Signed-off-by: David S. Miller <davem@davemloft.net>
net: Document dst->obsolete better.
David S. Miller [Thu, 19 Jul 2012 19:31:33 +0000 (12:31 -0700)]
net: Document dst->obsolete better.

Add a big comment explaining how the field works, and use defines
instead of magic constants for the values assigned to it.

Suggested by Joe Perches.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Adjust semantics of rt->rt_gateway.
David S. Miller [Fri, 13 Jul 2012 12:03:45 +0000 (05:03 -0700)]
ipv4: Adjust semantics of rt->rt_gateway.

In order to allow prefixed routes, we have to adjust how rt_gateway
is set and interpreted.

The new interpretation is:

1) rt_gateway == 0, destination is on-link, nexthop is iph->daddr

2) rt_gateway != 0, destination requires a nexthop gateway

Abstract the fetching of the proper nexthop value using a new
inline helper, rt_nexthop(), as suggested by Joe Perches.
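
The helper is essentially:

        static inline __be32 rt_nexthop(const struct rtable *rt, __be32 daddr)
        {
                if (rt->rt_gateway)
                        return rt->rt_gateway;
                return daddr;
        }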

Signed-off-by: David S. Miller <davem@davemloft.net>
Tested-by: Vijay Subramanian <subramanian.vijay@gmail.com>
ipv4: Remove 'rt_dst' from 'struct rtable'
David S. Miller [Thu, 12 Jul 2012 17:10:17 +0000 (10:10 -0700)]
ipv4: Remove 'rt_dst' from 'struct rtable'

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Remove 'rt_mark' from 'struct rtable'
David Miller [Sun, 1 Jul 2012 02:03:01 +0000 (02:03 +0000)]
ipv4: Remove 'rt_mark' from 'struct rtable'

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Kill 'rt_src' from 'struct rtable'
David Miller [Sun, 1 Jul 2012 02:02:59 +0000 (02:02 +0000)]
ipv4: Kill 'rt_src' from 'struct rtable'

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Remove rt_key_{src,dst,tos} from struct rtable.
David Miller [Sun, 1 Jul 2012 02:02:56 +0000 (02:02 +0000)]
ipv4: Remove rt_key_{src,dst,tos} from struct rtable.

They are always used in contexts where they can be reconstituted,
or where the finally resolved rt->rt_{src,dst} is semantically
equivalent.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Kill ip_route_input_noref().
David Miller [Sun, 1 Jul 2012 02:02:53 +0000 (02:02 +0000)]
ipv4: Kill ip_route_input_noref().

The "noref" argument to ip_route_input_common() is now always ignored
because we do not cache routes, and in that case we must always grab
a reference to the resulting 'dst'.

Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: Delete routing cache.
David S. Miller [Tue, 17 Jul 2012 18:00:09 +0000 (11:00 -0700)]
ipv4: Delete routing cache.

The ipv4 routing cache is non-deterministic, performance wise, and is
subject to reasonably easy to launch denial of service attacks.

The routing cache works great for well behaved traffic, and the world
was a much friendlier place when the tradeoffs that led to the routing
cache's design were considered.

What it boils down to is that the performance of the routing cache is
a product of the traffic patterns seen by a system rather than being a
product of the contents of the routing tables.  The former is
controllable by external entities.

Even for "well behaved" legitimate traffic, high volume sites can see
hit rates in the routing cache of only ~10%.

Signed-off-by: David S. Miller <davem@davemloft.net>
atl1c: fix issue of io access mode for AR8152 v2.1
Cloud Ren [Thu, 19 Jul 2012 17:01:58 +0000 (17:01 +0000)]
atl1c: fix issue of io access mode for AR8152 v2.1

When io access mode is enabled by the BOOTROM or BIOS for AR8152 v2.1,
the register can't be read/written by memory access mode.
Clearing bit 8 of register 0x21c fixes the issue.

Signed-off-by: Cloud Ren <cjren@qca.qualcomm.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: xiong <xiong@qca.qualcomm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tun: fix a crash bug and a memory leak
Mikulas Patocka [Thu, 19 Jul 2012 06:13:36 +0000 (06:13 +0000)]
tun: fix a crash bug and a memory leak

This patch fixes a crash
tun_chr_close -> netdev_run_todo -> tun_free_netdev -> sk_release_kernel ->
sock_release -> iput(SOCK_INODE(sock))
introduced by commit 1ab5ecb90cb6a3df1476e052f76a6e8f6511cb3d

The problem is that this socket is embedded in struct tun_struct and has
no inode; iput is called on an invalid inode, which modifies invalid memory
and possibly causes a crash.

sock_release also decrements sockets_in_use, this causes a bug that
"sockets: used" field in /proc/*/net/sockstat keeps on decreasing when
creating and closing tun devices.

This patch introduces a flag SOCK_EXTERNALLY_ALLOCATED that instructs
sock_release to not free the inode and not decrement sockets_in_use,
fixing both memory corruption and sockets_in_use underflow.

It should be backported to 3.3 and 3.4 stable.

Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: stable@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
ipv4: show pmtu in route list
Julian Anastasov [Fri, 20 Jul 2012 09:02:08 +0000 (12:02 +0300)]
ipv4: show pmtu in route list

Override the metrics with rt_pmtu

Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net...
David S. Miller [Fri, 20 Jul 2012 18:11:59 +0000 (11:11 -0700)]
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next

Jeff Kirsher says:

====================
This series contains updates to ixgbe.
 ...
Alexander Duyck (9):
  ixgbe: Use VMDq offset to indicate the default pool
  ixgbe: Fix memory leak when SR-IOV VFs are direct assigned
  ixgbe: Drop references to deprecated pci_ DMA api and instead use
    dma_ API
  ixgbe: Cleanup configuration of FCoE registers
  ixgbe: Merge all FCoE percpu values into a single structure
  ixgbe: Make FCoE allocation and configuration closer to how rings
    work
  ixgbe: Correctly set SAN MAC RAR pool to default pool of PF
  ixgbe: Only enable anti-spoof on VF pools
  ixgbe: Enable FCoE FSO and CRC offloads based on CAPABLE instead of
    ENABLED flag
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'team_multiq'
David S. Miller [Fri, 20 Jul 2012 18:07:37 +0000 (11:07 -0700)]
Merge branch 'team_multiq'

Jiri Pirko says:

====================
This patchset represents the way I walked when I was adding multiqueue
support for team driver.

Jiri Pirko (6):
  net: honour netif_set_real_num_tx_queues() retval
  rtnl: allow to specify different num for rx and tx queue count
  rtnl: allow to specify number of rx and tx queues on device creation
  net: rename bond_queue_mapping to slave_dev_queue_mapping
  bond_sysfs: use real_num_tx_queues rather than params.tx_queue
  team: add multiqueue support
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
team: add multiqueue support
Jiri Pirko [Fri, 20 Jul 2012 02:28:51 +0000 (02:28 +0000)]
team: add multiqueue support

Largely copied from bonding code.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
bond_sysfs: use real_num_tx_queues rather than params.tx_queue
Jiri Pirko [Fri, 20 Jul 2012 02:28:50 +0000 (02:28 +0000)]
bond_sysfs: use real_num_tx_queues rather than params.tx_queue

Since the number of tx queues can now be specified during bond instance
creation and may therefore differ from params.tx_queues, use
real_num_tx_queues for the boundary check instead.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: rename bond_queue_mapping to slave_dev_queue_mapping
Jiri Pirko [Fri, 20 Jul 2012 02:28:49 +0000 (02:28 +0000)]
net: rename bond_queue_mapping to slave_dev_queue_mapping

As this is going to be used not only by bonding.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
rtnl: allow to specify number of rx and tx queues on device creation
Jiri Pirko [Fri, 20 Jul 2012 02:28:48 +0000 (02:28 +0000)]
rtnl: allow to specify number of rx and tx queues on device creation

This patch introduces IFLA_NUM_TX_QUEUES and IFLA_NUM_RX_QUEUES, by
which userspace can set the number of rx and/or tx queues to be allocated
for a newly created netdevice.
This overrides ops->get_num_[tr]x_queues().
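
On the kernel side this boils down to something like (sketch):

        if (tb[IFLA_NUM_TX_QUEUES])
                num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]);
        if (tb[IFLA_NUM_RX_QUEUES])
                num_rx_queues = nla_get_u32(tb[IFLA_NUM_RX_QUEUES]);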

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
rtnl: allow to specify different num for rx and tx queue count
Jiri Pirko [Fri, 20 Jul 2012 02:28:47 +0000 (02:28 +0000)]
rtnl: allow to specify different num for rx and tx queue count

Also cut out unused function parameters and possible err in return
value.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: honour netif_set_real_num_tx_queues() retval
Jiri Pirko [Fri, 20 Jul 2012 02:28:46 +0000 (02:28 +0000)]
net: honour netif_set_real_num_tx_queues() retval

In netif_copy_real_num_queues() the return value of
netif_set_real_num_tx_queues() should be checked.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp: improve latencies of timer triggered events
Eric Dumazet [Fri, 20 Jul 2012 05:45:50 +0000 (05:45 +0000)]
tcp: improve latencies of timer triggered events

Modern TCP stack highly depends on tcp_write_timer() having a small
latency, but current implementation doesn't exactly meet the
expectations.

When a timer fires but finds the socket is owned by the user, it rearms
itself for an additional delay, hoping the next run will be more
successful.

tcp_write_timer(), for example, uses a 50ms delay for the next try, and this
defeats many attempts to get predictable TCP behavior in terms of
latencies.

Use the recently introduced tcp_release_cb(), so that the user owning
the socket will call various handlers right before socket release.

This will permit us to post a followup patch to address the
tcp_tso_should_defer() syndrome (some deferred packets have to wait for the
RTO timer to be transmitted, while cwnd should allow us to send them
sooner)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: H.K. Jerry Chu <hkchu@google.com>
Cc: John Heffner <johnwheffner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp: fix ABC in tcp_slow_start()
Eric Dumazet [Fri, 20 Jul 2012 05:02:33 +0000 (05:02 +0000)]
tcp: fix ABC in tcp_slow_start()

When/if sysctl_tcp_abc > 1, we expect to increase cwnd by 2 if the
received ACK acknowledges more than 2*MSS bytes, in tcp_slow_start()

The problem is that this RFC 3465 behaviour is not correctly coded, as
the while () loop increases snd_cwnd one by one.

Add a new variable to avoid this off-by-one error.
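
The idea, as a sketch (variable name assumed, not necessarily the one in the
patch):

        u32 delta = 0;

        tp->snd_cwnd_cnt += cnt;
        while (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
                tp->snd_cwnd_cnt -= tp->snd_cwnd;
                delta++;        /* count increments instead of applying one per loop pass */
        }
        tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp);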

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: John Heffner <johnwheffner@gmail.com>
Cc: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agotcp: use hash_32() in tcp_metrics
Eric Dumazet [Thu, 19 Jul 2012 23:02:34 +0000 (23:02 +0000)]
tcp: use hash_32() in tcp_metrics

Fix a missing roundup_pow_of_two(), since tcpmhash_entries is not
guaranteed to be a power of two.

Use hash_32() instead of a custom hash.

tcpmhash_entries should be an unsigned int.
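
A sketch of the two points being fixed, using the stock
roundup_pow_of_two(), ilog2() and hash_32() helpers (the default value
and variable layout are illustrative):

unsigned int slots = tcpmhash_entries;	/* module parameter */

if (!slots)
	slots = 16 * 1024;		/* illustrative default */

/* the table is indexed with ilog2(slots) bits, so its size must be a
 * power of two */
slots = roundup_pow_of_two(slots);

/* ... later, when indexing the table ... */
hash = hash_32(hash, ilog2(slots));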

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agotcp: Return bool instead of int where appropriate
Vijay Subramanian [Thu, 19 Jul 2012 21:32:18 +0000 (21:32 +0000)]
tcp: Return bool instead of int where appropriate

Applied to a set of static inline functions in tcp_input.c.

Signed-off-by: Vijay Subramanian <subramanian.vijay@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agoixgbe: use PCI_VENDOR_ID_INTEL
Jon Mason [Thu, 19 Jul 2012 21:02:09 +0000 (21:02 +0000)]
ixgbe: use PCI_VENDOR_ID_INTEL

Use PCI_VENDOR_ID_INTEL from pci_ids.h instead of creating its own
vendor ID #define.
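
For illustration, a device-table entry written against the shared
constant rather than a driver-local define (the device ID shown is just
an example):

#include <linux/pci.h>
#include <linux/pci_ids.h>

static const struct pci_device_id example_pci_tbl[] = {
	/* PCI_VDEVICE() expands to PCI_VENDOR_ID_INTEL + the device ID */
	{ PCI_VDEVICE(INTEL, 0x10FB) },		/* example device ID */
	{ /* sentinel */ }
};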

Signed-off-by: Jon Mason <jdmason@kudzu.us>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Cc: Alex Duyck <alexander.h.duyck@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agoixgb: use PCI_VENDOR_ID_INTEL
Jon Mason [Thu, 19 Jul 2012 21:02:08 +0000 (21:02 +0000)]
ixgb: use PCI_VENDOR_ID_INTEL

Use PCI_VENDOR_ID_INTEL from pci_ids.h instead of creating its own
vendor ID #define.

Signed-off-by: Jon Mason <jdmason@kudzu.us>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Bruce Allan <bruce.w.allan@intel.com>
Cc: Carolyn Wyborny <carolyn.wyborny@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Cc: Greg Rose <gregory.v.rose@intel.com>
Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Cc: Alex Duyck <alexander.h.duyck@intel.com>
Cc: John Ronciak <john.ronciak@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agomyri10ge: update MAINTAINERS
Jon Mason [Thu, 19 Jul 2012 20:11:13 +0000 (20:11 +0000)]
myri10ge: update MAINTAINERS

Remove myself from the myri10ge MAINTAINERS list.

Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agoMerge branch 'for-davem' of git://gitorious.org/linux-can/linux-can-next
David S. Miller [Fri, 20 Jul 2012 17:56:03 +0000 (10:56 -0700)]
Merge branch 'for-davem' of git://gitorious.org/linux-can/linux-can-next

Marc Kleine-Budde says:

====================
the fifth pull request for the upcoming v3.6 net-next cleans up and
improves the janz-ican3 driver (6 patches by Ira W. Snyder, one by me).
A patch by Steffen Trumtrar adds imx53 support to the flexcan driver.
Another patch by me marks the bit timing constants in the CAN drivers
as "const".
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
12 years agoMerge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wirel...
John W. Linville [Fri, 20 Jul 2012 16:30:48 +0000 (12:30 -0400)]
Merge branch 'master' of git://git./linux/kernel/git/linville/wireless-next into for-davem

12 years agocan: janz-ican3: add support for one shot mode
Ira W. Snyder [Wed, 18 Jul 2012 22:33:18 +0000 (15:33 -0700)]
can: janz-ican3: add support for one shot mode

The Janz VMOD-ICAN3 hardware has support for one shot packet
transmission. This means that a packet is transmitted only once, with no
automatic retries.

The SocketCAN core has a controller-wide setting for this mode:
CAN_CTRLMODE_ONE_SHOT. The Janz VMOD-ICAN3 hardware supports this flag
on a per-packet level, but the SocketCAN core does not.
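
A sketch of how a driver typically wires this up: advertise the
controller-wide flag at probe time and consult it in the transmit path
(the hardware descriptor bit below is an assumption, not the real
register layout):

/* probe: tell the CAN core the controller can do one-shot */
priv->can.ctrlmode_supported |= CAN_CTRLMODE_ONE_SHOT;

/* hard_start_xmit: apply the current mode to every outgoing frame */
if (priv->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT)
	desc.control |= DESC_SINGLE_SHOT;	/* illustrative hw bit */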

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: avoid firmware lockup caused by infinite bus error quota
Ira W. Snyder [Wed, 18 Jul 2012 22:33:17 +0000 (15:33 -0700)]
can: janz-ican3: avoid firmware lockup caused by infinite bus error quota

If the bus error quota is set to infinite and the host CPU cannot keep
up, the Janz VMOD-ICAN3 firmware will stop responding to control
messages until the controller is reset.

The firmware will automatically stop sending bus error messages when the
quota is reached, and will only resume sending bus error messages when
the quota is re-set to a positive value.

This limitation is worked around by setting the bus error quota to one
message, and then re-setting the quota to one message every time a bus
error message is received. By doing this, the firmware never stops
responding to control messages. The CAN bus can be reset without a
hard-reset of the controller card.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: fix support for CAN_RAW_RECV_OWN_MSGS
Ira W. Snyder [Thu, 19 Jul 2012 15:54:42 +0000 (08:54 -0700)]
can: janz-ican3: fix support for CAN_RAW_RECV_OWN_MSGS

The Janz VMOD-ICAN3 firmware does not support any sort of TX-done
notification or interrupt. The driver previously used the hardware
loopback to attempt to work around this deficiency, but this caused all
sockets to receive all messages, even if CAN_RAW_RECV_OWN_MSGS is off.

Using the new function ican3_cmp_echo_skb(), we can drop the loopback
messages and return the original skbs. This fixes the issues with
CAN_RAW_RECV_OWN_MSGS.

A private skb queue is used to store the echo skbs. This avoids the need
for any index management.
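
Two fragments sketching the private echo queue idea (field and helper
names are illustrative): clone on transmit, then match the clone against
the hardware-loopback frame instead of managing an index array.

/* TX path: remember every frame handed to the hardware */
skb_queue_tail(&mod->echoq, skb_clone(skb, GFP_ATOMIC));

/* RX path: decide whether a loopback frame is the echo of the oldest
 * queued clone */
static bool is_own_echo(const struct can_frame *sent,
			const struct can_frame *loopback)
{
	return sent->can_id == loopback->can_id &&
	       sent->can_dlc == loopback->can_dlc &&
	       !memcmp(sent->data, loopback->data, sent->can_dlc);
}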

Due to a lack of TX-error interrupts, bus errors are permanently
enabled, and are used as a TX-error notification. This is used to drop
an echo skb when transmission fails. Bus error packets are not generated
if the user has not enabled bus error reporting.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: fix error and byte counters
Ira W. Snyder [Thu, 19 Jul 2012 15:54:18 +0000 (08:54 -0700)]
can: janz-ican3: fix error and byte counters

The error and byte counter statistics were being incremented
incorrectly. For example, a TX error would be counted both in tx_errors
and rx_errors.

This corrects the problem so that tx_errors and rx_errors are only
incremented for errors caused by packets sent to the bus. Error packets
generated by the driver are not counted.

The byte counters are only increased for packets which are actually
transmitted or received from the bus. Error packets generated by the
driver are not counted.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: cleanup of ican3_to_can_frame and can_frame_to_ican3
Marc Kleine-Budde [Thu, 21 Oct 2010 16:39:26 +0000 (18:39 +0200)]
can: janz-ican3: cleanup of ican3_to_can_frame and can_frame_to_ican3

This patch cleans up the ICAN3 to Linux CAN frame and vice versa
conversion functions:

- RX: Use get_can_dlc() to limit the dlc value.
- RX+TX: Don't copy the whole frame; only copy the number of bytes
  specified in cf->can_dlc (see the sketch below).
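
A sketch of the RX direction of that pattern (the hardware-side names
are illustrative):

/* clamp whatever the hardware reports to a valid CAN dlc (0..8) */
cf->can_dlc = get_can_dlc(hwdesc_dlc & 0x0f);

/* copy only the bytes that are actually valid, not the whole buffer */
memcpy(cf->data, hwdesc_data, cf->can_dlc);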

Acked-by: Ira W. Snyder <iws@ovro.caltech.edu>
Tested-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: drop invalid skbs
Ira W. Snyder [Wed, 18 Jul 2012 22:33:14 +0000 (15:33 -0700)]
can: janz-ican3: drop invalid skbs

The commit which added the janz-ican3 driver and commit
3ccd4c61 "can: Unify droping of invalid tx skbs and netdev stats" were
committed into mainline Linux during the same merge window.

Therefore, the addition of this code to the janz-ican3 driver was
forgotten. This patch adds the expected code.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: janz-ican3: remove dead code
Ira W. Snyder [Wed, 18 Jul 2012 22:33:13 +0000 (15:33 -0700)]
can: janz-ican3: remove dead code

The code which used this variable was removed during review, before the
driver was added to mainline Linux. It is now dead code, and can be
removed.

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
12 years agocan: flexcan: add 2nd clock to support imx53 and newer
Steffen Trumtrar [Tue, 17 Jul 2012 14:14:34 +0000 (16:14 +0200)]
can: flexcan: add 2nd clock to support imx53 and newer

This patch adds support for a second clock to the flexcan driver. On
modern freescale ARM cores like the imx53 and imx6q two clocks ("ipg"
and "per") must be enabled in order to access the CAN core.

In the original driver, the clock was requested without specifying the
connection id; further, all mainline ARM archs with flexcan support
(imx28, imx25, imx35) register their flexcan clock without a connection
id, too.

This patch first renames the existing clk variable to clk_ipg and
converts it to devm for easier error handling. The connection id "ipg"
is added to the devm_clk_get() call. Then a second clock, "per", is
requested. As none of the archs specify a connection id, both clk_get
calls return the same clock. This ensures compatibility with existing
flexcan support and adds support for imx53 at the same time.
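
A sketch of the probe-time sequence this describes, using the stock
devm_clk_get()/clk_prepare_enable() helpers (error handling trimmed):

priv->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
if (IS_ERR(priv->clk_ipg))
	return PTR_ERR(priv->clk_ipg);

priv->clk_per = devm_clk_get(&pdev->dev, "per");
if (IS_ERR(priv->clk_per))
	return PTR_ERR(priv->clk_per);

/* on archs that register their clock without a connection id, both
 * calls hand back the same clock, so existing platforms keep working */
clk_prepare_enable(priv->clk_ipg);
clk_prepare_enable(priv->clk_per);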

After this patch hits mainline, the archs may give their existing
flexcan clock the "ipg" connection id and implement a dummy "per"
clock.

This patch has been tested on imx28 (unmodified clk tree) and on imx53
with separate "ipg" and "per" clocks.

Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Acked-by: Hui Wang <jason77.wang@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>