2010-03-23  net_sched: make traffic control network namespace aware (Tom Goff)

Mostly minor changes to add a net argument to various functions and remove initial network namespace checks. Make /proc/net/psched per network namespace.

Signed-off-by: Tom Goff <thomas.goff@boeing.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-23  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6 (David S. Miller)

Conflicts:
	Documentation/feature-removal-schedule.txt
	drivers/net/wireless/ath/ath5k/phy.c

2010-03-23  rps: Fix build with CONFIG_SYSFS not enabled (Tom Herbert)

Fix the build with CONFIG_SYSFS not enabled.

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  bridge: cleanup: remove unused assignment (Dan Carpenter)

We never actually use iph again, so this assignment can be removed.

Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  pktgen: node allocation (Robert Olsson)

This patch makes it possible to manipulate packet node allocation, and implicitly how packets are DMA'd etc. The NODE_ALLOC flag enables the feature and uses numa_node_id(); when enabled, the node can also be controlled explicitly via a new node parameter. Tested with 10 Intel 82599 ports on a TYAN S7025 with E5520 CPUs; was able to TX/DMA ~80 Gbit/s to the Ethernet wires.

Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>

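As a usage illustration, the feature would be driven through pktgen's /proc interface; a minimal sketch, assuming a device already bound to a pktgen thread at /proc/net/pktgen/eth0 and that the new parameter is spelled "node" as the message suggests:

```c
/*
 * Hypothetical sketch: enable NODE_ALLOC and pin skb allocation to
 * NUMA node 0 via pktgen's /proc interface. The device file path and
 * the "node" command name are assumptions based on the message above.
 */
#include <stdio.h>

static int pgset(const char *path, const char *cmd)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", cmd);
	return fclose(f);
}

int main(void)
{
	const char *dev = "/proc/net/pktgen/eth0";

	pgset(dev, "flag NODE_ALLOC");	/* turn per-node allocation on */
	pgset(dev, "node 0");		/* explicit NUMA node override */
	return 0;
}
```
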
2010-03-22  net: dev_getfirstbyhwtype() optimization (Eric Dumazet)

Use RCU to avoid RTNL use in dev_getfirstbyhwtype().

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  net: snmp mib cleanup (Eric Dumazet)

There is no point in aligning or padding mibs to cache lines; they are per-cpu allocated with an 8-byte alignment anyway. This wastes space for no gain. This patch removes __SNMP_MIB_ALIGN__. Since SNMP mibs contain "unsigned long" fields only, we can relax the allocation alignment from "unsigned long long" to "unsigned long".

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  atm: Use kasprintf (Eric Dumazet)

Use kasprintf in atm_proc_dev_register().

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  net: speedup netdev_set_master() (Eric Dumazet)

We currently force a synchronize_net() in netdev_set_master(). This seems necessary only when a slave had a master and we dismantle it. In the other case ("ifenslave bond0 eth0"), we don't need this long delay.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  tcp: Add SNMP counter for DEFER_ACCEPT (Eric Dumazet)

It's currently hard to diagnose when ACK frames are dropped because an application set TCP_DEFER_ACCEPT on its listening socket. See http://bugzilla.kernel.org/show_bug.cgi?id=15507

This patch adds an SNMP value, named TCPDeferAcceptDrop:

	netstat -s | grep TCPDeferAcceptDrop
	    TCPDeferAcceptDrop: 0

This counter is incremented every time we drop a pure ACK frame received by a socket in SYN_RECV state because its SYNACK retransmit count is lower than the defer_accept value.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

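For context, the option whose drops this counter accounts for is set from userspace as below; a minimal sketch, with the 10-second value and port chosen arbitrarily:

```c
/* Minimal sketch: create a listener with TCP_DEFER_ACCEPT set, so the
 * kernel holds back pure ACKs until data arrives or the timeout hits. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int defer_secs = 10;	/* arbitrary example value */
	struct sockaddr_in addr;

	if (setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
		       &defer_secs, sizeof(defer_secs)) < 0)
		perror("TCP_DEFER_ACCEPT");

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(fd, 128);
	/* accept() only completes once the client actually sends data */
	return 0;
}
```
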
2010-03-22  bonding: flush unicast and multicast lists when changing type (Jiri Pirko)

After the type change, addresses in the unicast and multicast lists wouldn't make sense, not to mention the possibly different lengths. So flush both lists here. Note that "dev_addr_discard" will very soon be replaced by "dev_mc_flush" (once the mc_list conversion is done).

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-22  net: rtnetlink: ignore NETDEV_PRE_TYPE_CHANGE in rtnetlink_event() (Patrick McHardy)

Ignore the new NETDEV_PRE_TYPE_CHANGE event in rtnetlink_event() since there have been no changes userspace needs to be notified of. Also add a comment to the netdev notifier event definitions to remind people to update the exclusion list when adding new event types.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv6: Fix bug in ipv6_chk_same_addr() (David S. Miller)

hlist_for_each_entry(p...) will not necessarily initialize 'p' to anything if the hlist is empty. GCC notices this and emits a warning. Just return true explicitly when we hit a match, and return false if we fall out of the loop without one.

Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv6: Reduce timer events for addrconf_verify() (YOSHIFUJI Hideaki)

This patch reduces timer events while keeping accuracy, by rounding our timer and/or batching several address validations in addrconf_verify().

addrconf_verify() is called at the earliest timeout among interface addresses' timeouts, but at maximum every ADDR_CHECK_FREQUENCY (120 secs). In most cases, all the timeouts of interface addresses are long enough (e.g. several hours or days vs 2 minutes) that this timer is usually called every ADDR_CHECK_FREQUENCY, and it is okay to be lazy. (Note this timer could be eliminated if all code paths which modify variables related to timeouts called us manually, but that is another story.)

However, in the other, less common but important cases, we try to keep accuracy. When the real interface address timeout is coming and that timeout is just before the rounded timeout, we accept some error. When a timeout has been reached, we also try batching other events in the very near future.

Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  IPv6: addrconf cleanup of addrconf_verify (Stephen Hemminger)

The variable regen_advance is only used in the privacy case. Move it to simplify the code and eliminate ifdefs.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  addrconf: checkpatch fixes (Stephen Hemminger)

Fix some of the checkpatch complaints.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  IPv6: addrconf cleanups (Stephen Hemminger)

Some minor cleanups: reformat comments and add whitespace for clarity.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv6: convert idev_list to list macros (Stephen Hemminger)

Convert to list macros for the list of addresses per interface in IPv6.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv6: use better hash for addrconf (Stephen Hemminger)

The existing hash function has a couple of issues:
  * it is hardwired to 16 for IN6_ADDR_HSIZE
  * it is limited to 256, with callers using int
  * it uses an old BSD algorithm; use jhash2 instead
No need for a random seed since this is a local-only table (based on assigned addresses).

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

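A kernel-style sketch of what such a jhash2-based bucket function could look like; the function name, table-size handling, and zero seed are illustrative, not the patch's exact code:

```c
/*
 * Hypothetical sketch of a jhash2-based bucket function for the
 * addrconf hash table; see the actual patch for the real definition.
 */
#include <linux/jhash.h>
#include <linux/in6.h>

#define IN6_ADDR_HSIZE	16	/* the message notes this was hardwired */

static unsigned int in6_addr_bucket(const struct in6_addr *addr)
{
	/* Hash all 128 bits; no random seed is needed for a table that
	 * only holds locally assigned addresses. */
	return jhash2((const u32 *)addr->s6_addr32, 4, 0) % IN6_ADDR_HSIZE;
}
```
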
2010-03-20  IPv6: convert addrconf hash list to RCU (Stephen Hemminger)

Convert from a reader/writer lock to RCU and a spinlock for the addrconf hash list. Adds an additional helper macro, hlist_for_each_entry_continue_rcu, to handle the continue case.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

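The continue-case helper would look roughly like this sketch, modeled on the existing hlist_for_each_entry_rcu of that era; see include/linux/rculist.h for the real definition:

```c
/*
 * Hypothetical sketch of the continue-variant iterator described
 * above: resume an RCU-protected hlist walk after the current node.
 */
#define hlist_for_each_entry_continue_rcu(tpos, pos, member)		   \
	for (pos = rcu_dereference((pos)->next);			   \
	     pos &&							   \
		({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; }); \
	     pos = rcu_dereference(pos->next))
```
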
2010-03-20  ipv6: convert addrconf list to hlist (Stephen Hemminger)

Using hash list macros simplifies the code and helps later RCU conversion. This patch includes some initialization that is not strictly necessary, since an empty hlist node/list is all zero, the list is in BSS, and the node is allocated with kzalloc.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv6: convert temporary address list to list macros (Stephen Hemminger)

Use list macros instead of an open-coded linked list.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller)

2010-03-20  netfilter: ctnetlink: fix reliable event delivery if message building fails (Pablo Neira Ayuso)

This patch fixes a bug that can cause events to be lost when reliable event delivery mode is used, i.e. if the NETLINK_BROADCAST_SEND_ERROR and NETLINK_RECV_NO_ENOBUFS socket options are set.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  netlink: fix NETLINK_RECV_NO_ENOBUFS in netlink_set_err() (Pablo Neira Ayuso)

Currently, ENOBUFS errors are reported to the socket via netlink_set_err() even if NETLINK_RECV_NO_ENOBUFS is set. However, that should not happen. This patch fixes the problem and changes the prototype of netlink_set_err() to return the number of sockets that have set the NETLINK_RECV_NO_ENOBUFS socket option. This return value is used in the next patch in this bugfix series.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  NET_DMA: free skbs periodically (Steven J. Magnani)

Under NET_DMA, data transfer can grind to a halt when userland issues a large read on a socket with a high RCVLOWAT (i.e., 512 KB for both). This appears to be because the NET_DMA design queues up lots of memcpy operations, but doesn't issue or wait for them (and thus free the associated skbs) until it is time for tcp_recvmsg() to return. The socket hangs when its TCP window goes to zero before enough data is available to satisfy the read.

Periodically issue asynchronous memcpy operations, and free skbs for the ones that have completed, to prevent sockets from going into zero-window mode.

Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  tcp: Fix tcp_mark_head_lost() with packets == 0 (Lennart Schulte)

A packet is marked as lost when packets == 0, although nothing should be done in that case. This results in a too-early retransmitted packet during recovery in some cases. This small patch fixes the issue by returning immediately.

Signed-off-by: Lennart Schulte <lennart.schulte@nets.rwth-aachen.de>
Signed-off-by: Arnd Hannemann <hannemann@nets.rwth-aachen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  net: ipmr/ip6mr: fix potential out-of-bounds vif_table access (Patrick McHardy)

mfc_parent of cache entries is used to index into the vif_table and is initialised from mfcctl->mfcc_parent. This can take values of up to 2^16-1, while the vif_table has only MAXVIFS (32) entries. The same problem affects ip6mr. Refuse invalid values to fix a potential out-of-bounds access. Unlike the other validity checks, this is checked in ipmr_mfc_add() instead of the setsockopt handler since it's unused in the delete path and might be uninitialized.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  TCP: check min TTL on received ICMP packets (Stephen Hemminger)

This adds RFC 5082 checks for TTL on received ICMP packets. It adds some security against spoofed ICMP packets disrupting GTSM-protected sessions.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

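On the userspace side, a GTSM-protected session is typically set up with the IP_MINTTL socket option that this check complements; a minimal sketch:

```c
/* Minimal sketch: protect a TCP session with GTSM by requiring that
 * all segments arrive with TTL 255 (i.e. from a directly connected
 * peer). Assumes the IP_MINTTL socket option this check builds on. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int minttl = 255;	/* RFC 5082: neighbors send with TTL 255 */

	if (setsockopt(fd, IPPROTO_IP, IP_MINTTL,
		       &minttl, sizeof(minttl)) < 0)
		perror("IP_MINTTL");
	/* connect()/listen() as usual; lower-TTL segments are dropped */
	return 0;
}
```
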
2010-03-20  ipv6: Remove redundant dst NULL check in ip6_dst_check (Herbert Xu)

As the only path leading to ip6_dst_check makes an indirect call through dst->ops, dst cannot be NULL in ip6_dst_check. This patch removes this check in case it misleads people who come across this code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-20  ipv4: check rt_genid in dst_check (Timo Teräs)

Xfrm_dst keeps a reference to ipv4 rtable entries on each cached bundle. The only way to renew the xfrm_dst when the underlying route has changed is to implement dst_check for this. This is what the ipv6 side does too.

The problems started after 87c1e12b5eeb7b30b4b41291bef8e0b41fc3dde9 ("ipsec: Fix bogus bundle flowi"), which fixed a bug causing xfrm_dst to not get reused; until then, all lookups always generated a new xfrm_dst with a new route reference and path MTU worked. But after the fix, old routes started to get reused even after they had expired, causing pmtu to break (well, it would occasionally work if the rtable gc had run recently and marked the route obsolete, causing dst_check to get called).

Signed-off-by: Timo Teras <timo.teras@iki.fi>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  rename new rfkill sysfs knobs (Florian Mickler)

This patch renames the (never officially released) sysfs knobs "blocked_hw" and "blocked_sw" to "hard" and "soft", as the hardware vs software connotation is misleading. It also gets rid of unneeded locks around u32 read access.

Signed-off-by: Florian Mickler <florian@mickler.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

2010-03-19  net: Potential null skb->dev dereference (Eric Dumazet)

When doing "ifenslave -d bond0 eth0", there is a chance to get a NULL dereference in netif_receive_skb(), because dev->master can suddenly become NULL after we tested it. We should use ACCESS_ONCE() to avoid this (or rcu_dereference()).

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  tcp: Fix OOB POLLIN avoidance (Alexandra Kossovsky)

From: Alexandra.Kossovsky@oktetlabs.ru

Fixes kernel bugzilla #15541.

Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  net: forbid underlying devices from changing their type (Jiri Pirko)

It's not desirable for underlying devices to change type. At the moment it is possible, for example, to have a bond whose type changed from Ethernet to InfiniBand as a port of a bridge. This patch fixes this.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  bonding: check return value of notifier when changing type (Jiri Pirko)

This patch adds the possibility for other subsystems (such as bridge, vlan, etc.) to refuse the bonding type change.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  net: rename notifier defines for netdev type change (Jiri Pirko)

Since, in general, netdevices other than bonding may also change type, make the event type names bonding-unrelated.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-19  rps: Fixed build with CONFIG_SMP not enabled (Tom Herbert)

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  net: convert multiple drivers to use netdev_for_each_mc_addr, part 7 (Jiri Pirko)

In mlx4, use char * to store the mc address in the private structure instead.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  tipc: Allow retransmission of cloned buffers (Neil Horman)

Forward-port commit fc477e160af086f6e30c3d4fdf5f5c000d29beb5 from git://tipc.cslab.ericsson.net/pub/git/people/allan/tipc.git

Original commit message: Allow retransmission of cloned buffers

This patch fixes an issue with TIPC's message retransmission logic that prevented retransmission of cloned sk_buffs. Originally intended as a means of avoiding wasted work in retransmitting messages that were still on the driver's outbound queue, it also prevented TIPC from retransmitting messages through other means -- such as the secondary bearer of the broadcast link, or another interface in a set of bonded interfaces. This fix removes the existing checks for cloned sk_buffs that prevented such retransmission.

Originally-signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  tipc: Increase frequency of load distribution over broadcast link (Neil Horman)

Forward-port commit 29eb572941501c40ac6e62dbc5043bf9ee76ee56 from git://tipc.cslab.ericsson.net/pub/git/people/allan/tipc.git

Original commit message: Increase frequency of load distribution over broadcast link

This patch enhances the behavior of TIPC's broadcast link so that it alternates between redundant bearers (if available) after every message sent, rather than after every 10 messages. This change helps to speed up delivery of retransmitted messages by ensuring that they are not sent repeatedly over a bearer that is no longer working, but not yet recognized as failed.

Tested by myself in the latest net-2.6 tree using the tipc sanity test suite.

Originally-signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

 bcast.c | 35 ++++++++++++++---------------------
 1 file changed, 14 insertions(+), 21 deletions(-)

Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  net: core: add IFLA_STATS64 support (Jan Engelhardt)

`ip -s link` shows interface counters truncated to 32 bits. This is because interface statistics are transported to userspace only as 32-bit quantities. This commit adds a new IFLA_STATS64 attribute that exports them in full 64 bits.

References: http://lkml.indiana.edu/hypermail/linux/kernel/0307.3/0215.html

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

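A userspace sketch of consuming the new attribute: dump links over rtnetlink and print the 64-bit counters wherever the kernel supplies IFLA_STATS64 (error handling trimmed for brevity):

```c
/* Minimal sketch: RTM_GETLINK dump, reading struct rtnl_link_stats64
 * from the IFLA_STATS64 attribute described above. */
#include <stdio.h>
#include <string.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

int main(void)
{
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	struct {
		struct nlmsghdr nlh;
		struct ifinfomsg ifm;
	} req = {
		.nlh = {
			.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
			.nlmsg_type  = RTM_GETLINK,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
		},
		.ifm = { .ifi_family = AF_UNSPEC },
	};
	char buf[16384];
	ssize_t len;

	if (fd < 0 || send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
		perror("netlink");
		return 1;
	}
	while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
			struct ifinfomsg *ifm;
			struct rtattr *rta;
			int alen;

			if (nlh->nlmsg_type == NLMSG_DONE)
				return 0;
			if (nlh->nlmsg_type != RTM_NEWLINK)
				continue;
			ifm  = NLMSG_DATA(nlh);
			alen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifm));
			for (rta = IFLA_RTA(ifm); RTA_OK(rta, alen);
			     rta = RTA_NEXT(rta, alen)) {
				struct rtnl_link_stats64 s;
				char name[IF_NAMESIZE] = "?";

				if (rta->rta_type != IFLA_STATS64 ||
				    RTA_PAYLOAD(rta) < sizeof(s))
					continue;
				/* memcpy: the payload may be unaligned */
				memcpy(&s, RTA_DATA(rta), sizeof(s));
				if_indextoname(ifm->ifi_index, name);
				printf("%s: rx %llu bytes, tx %llu bytes\n",
				       name,
				       (unsigned long long)s.rx_bytes,
				       (unsigned long long)s.tx_bytes);
			}
		}
	}
	return 0;
}
```
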
2010-03-17  net: tcp: make veno selectable as default congestion module (Jan Engelhardt)

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  net: tcp: make hybla selectable as default congestion module (Jan Engelhardt)

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: David S. Miller <davem@davemloft.net>

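Besides the Kconfig-time default these two patches add, the same choice can be made at runtime through procfs; a minimal sketch, assuming the tcp_hybla module is built in or loaded:

```c
/* Minimal sketch: select hybla as the running default congestion
 * control via procfs (the Kconfig change above sets the built-in
 * default instead). */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_congestion_control", "w");

	if (!f) {
		perror("tcp_congestion_control");
		return 1;
	}
	fputs("hybla\n", f);
	return fclose(f) ? 1 : 0;
}
```
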
2010-03-17  net: remove rcu locking from fib_rules_event() (Eric Dumazet)

We hold RTNL at this point and don't use RCU variants of list traversals, so we don't need rcu_read_lock()/rcu_read_unlock().

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  bridge: per-cpu packet statistics (v3) (Stephen Hemminger)

The shared packet statistics are a potential source of slowdown on bridged traffic. Convert to a per-cpu array, but only keep those statistics which change per packet.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  rps: Receive Packet Steering (Tom Herbert)

This patch implements software receive side packet steering (RPS). RPS distributes the load of received packet processing across multiple CPUs.

Problem statement: Protocol processing done in the NAPI context for received packets is serialized per device queue and becomes a bottleneck under high packet load. This substantially limits the pps that can be achieved on a single-queue NIC and provides no scaling with multiple cores.

This solution queues packets early on in the receive path on the backlog queues of other CPUs. This allows protocol processing (e.g. IP and TCP) to be performed on packets in parallel. For each device (or each receive queue in a multi-queue device) a mask of CPUs is set to indicate the CPUs that can process packets. A CPU is selected on a per-packet basis by hashing the contents of the packet header (e.g. the TCP or UDP 4-tuple) and using the result to index into the CPU mask. The IPI mechanism is used to raise networking receive softirqs between CPUs. This effectively emulates in software what a multi-queue NIC can provide, but is generic, requiring no device support.

Many devices now provide a hash over the 4-tuple on a per-packet basis (e.g. the Toeplitz hash). This patch allows drivers to set the HW-reported hash in an skb field, and that value in turn is used to index into the RPS maps. Using the HW-generated hash can avoid cache misses on the packet when steering it to a remote CPU.

The CPU mask is set on a per-device and per-queue basis in the sysfs variable /sys/class/net/<device>/queues/rx-<n>/rps_cpus. This is a set of canonical bit maps for the receive queues in the device (numbered by <n>). If a device does not support multi-queue, a single variable is used for the device (rx-0).

Generally, we have found this technique increases the pps capabilities of a single-queue device with good CPU utilization. Optimal settings for the CPU mask seem to depend on architectures and cache hierarchy. Below are some results running 500 instances of the netperf TCP_RR test with 1-byte requests and responses. Results show cumulative transaction rate and system CPU utilization.

e1000e on 8 core Intel
   Without RPS: 108K tps at 33% CPU
   With RPS:    311K tps at 64% CPU

forcedeth on 16 core AMD
   Without RPS: 156K tps at 15% CPU
   With RPS:    404K tps at 49% CPU

bnx2x on 16 core AMD
   Without RPS: 567K tps at 61% CPU (4 HW RX queues)
   Without RPS: 738K tps at 96% CPU (8 HW RX queues)
   With RPS:    854K tps at 76% CPU (4 HW RX queues)

Caveats:
- The benefits of this patch are dependent on architecture and cache hierarchy. Tuning the masks to get the best performance is probably necessary.
- This patch adds overhead in the path for processing a single packet. In a lightly loaded server this overhead may eliminate the advantages of increased parallelism, and possibly cause some relative performance degradation. We have found that masks that are cache aware (share the same caches with the interrupting CPU) mitigate much of this.
- The RPS masks can be changed dynamically; however, whenever the mask is changed this introduces the possibility of generating out-of-order packets. It's probably best not to change the masks too frequently.

Signed-off-by: Tom Herbert <therbert@google.com>

 include/linux/netdevice.h |  32 ++++-
 include/linux/skbuff.h    |   3 +
 net/core/dev.c            | 335 +++++++++++++++++++++++++++++++++++++--------
 net/core/net-sysfs.c      | 225 ++++++++++++++++++++++++++++++-
 net/core/skbuff.c         |   2 +
 5 files changed, 538 insertions(+), 59 deletions(-)

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

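As a usage illustration, steering eth0's rx-0 queue to CPUs 0-3 means writing the hex mask f to the sysfs file named above; a minimal sketch, with the device and queue names chosen as examples:

```c
/* Minimal sketch: enable RPS for eth0's rx-0 queue on CPUs 0-3 by
 * writing the hex CPU bitmask "f" to the sysfs file described above. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fputs("f\n", f);	/* bits 0-3 set -> CPUs 0-3 may process */
	return fclose(f) ? 1 : 0;
}
```
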
2010-03-17  RDS: Enable per-cpu workqueue threads (Tina Yang)

Create per-cpu workqueue threads instead of a single krdsd thread. This is a step towards better scalability.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  RDS: Do not call set_page_dirty() with irqs off (Andy Grover)

set_page_dirty() unconditionally re-enables interrupts, so if we call it with irqs off, they will be on after the call, and that's bad. This patch moves the call to after we've re-enabled interrupts in send_drop_to(), so it's safe.

Also, add BUG_ONs to let us know if we ever do call set_page_dirty() with interrupts off.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2010-03-17  RDS: Properly unmap when getting a remote access error (Sherman Pun)

If the RDMA op has aborted with a remote access error, then in addition to what we already do (tell userspace it has completed with an error), also unmap it and put() the rm. Otherwise, hangs may occur on arches that track maps and will not exit without proper cleanup.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>