Age  Commit message  Author
2016-04-21  Merge branch 'geneve-vxlan-deps'  (David S. Miller)
Hannes Frederic Sowa says: ==================== net: network drivers should not depend on geneve/vxlan This patchset removes the dependency of network drivers on vxlan or geneve, so those don't get autoloaded when the nic driver is loaded. Also audited the code such that vxlan_get_rx_port and geneve_get_rx_port are not called without rtnl lock. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  geneve: break dependency with netdev drivers  (Hannes Frederic Sowa)
Equivalent to "vxlan: break dependency with netdev drivers": don't autoload the geneve module just because a NIC driver is loaded. Instead, make the coupling weaker by using netdevice notifiers as a proxy. Cc: Jesse Gross <jesse@kernel.org> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  vxlan: break dependency with netdev drivers  (Hannes Frederic Sowa)
Currently all drivers depend on and autoload the vxlan module because of how vxlan_get_rx_port is linked into them. Remove this dependency: by using a new event type in the netdevice notifier call chain, we proxy the request from the drivers to flush and re-set up the vxlan ports, not directly via a function call but via the already existing netdevice notifier call chain. I added a separate new event type, NETDEV_OFFLOAD_PUSH_VXLAN, to do so. We don't need to save any ids, as the event type field is an unsigned long, and using specialized event types for this purpose seemed the more elegant way. This also becomes beneficial if in the future we want to add offloading knobs for vxlan. Cc: Jesse Gross <jesse@kernel.org> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
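A minimal sketch of the notifier-based proxy described above; only NETDEV_OFFLOAD_PUSH_VXLAN and the core notifier API come from the patch, the handler and helper names are illustrative:

#include <linux/netdevice.h>
#include <linux/notifier.h>
#include <linux/rtnetlink.h>

/* NIC driver side: ask whoever is listening to (re)push its UDP ports,
 * instead of calling vxlan_get_rx_port() and pulling in vxlan.ko. */
static void nic_request_vxlan_ports(struct net_device *netdev)
{
        ASSERT_RTNL();
        call_netdevice_notifiers(NETDEV_OFFLOAD_PUSH_VXLAN, netdev);
}

/* vxlan module side: react to the event only if the module is loaded. */
static int vxlan_netdevice_event(struct notifier_block *unused,
                                 unsigned long event, void *ptr)
{
        struct net_device *dev = netdev_notifier_info_to_dev(ptr);

        if (event == NETDEV_OFFLOAD_PUSH_VXLAN)
                vxlan_push_rx_ports(dev);       /* illustrative helper */

        return NOTIFY_DONE;
}

static struct notifier_block vxlan_notifier_block = {
        .notifier_call = vxlan_netdevice_event,
};
/* registered once with register_netdevice_notifier(&vxlan_notifier_block) */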
2016-04-21  qlcnic: protect qlcnic_attach_func with rtnl_lock  (Hannes Frederic Sowa)
qlcnic_attach_func requires rtnl_lock to be held. Cc: Dept-GELinuxNICDev@qlogic.com Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
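The fix pattern shared by this and the following rtnl_lock patches is simply to take the RTNL around the offending call; a sketch with placeholder names:

#include <linux/rtnetlink.h>
#include <linux/netdevice.h>

/* nic_reattach_device() stands in for e.g. qlcnic_attach_func(), or any
 * other path that expects the caller to hold the rtnl lock. */
int nic_reattach_device(struct net_device *netdev);     /* placeholder */

static int nic_reattach_locked(struct net_device *netdev)
{
        int err;

        rtnl_lock();
        err = nic_reattach_device(netdev);
        rtnl_unlock();

        return err;
}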
2016-04-21  ixgbe: protect vxlan_get_rx_port in ixgbe_service_task with rtnl_lock  (Hannes Frederic Sowa)
vxlan_get_rx_port requires rtnl_lock to be held. Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Cc: Shannon Nelson <shannon.nelson@intel.com> Cc: Carolyn Wyborny <carolyn.wyborny@intel.com> Cc: Don Skidmore <donald.c.skidmore@intel.com> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: John Ronciak <john.ronciak@intel.com> Cc: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  mlx4: protect mlx4_en_start_port in mlx4_en_restart with rtnl_lock  (Hannes Frederic Sowa)
mlx4_en_start_port requires rtnl_lock to be held. Cc: Eugenia Emantayev <eugenia@mellanox.com> Cc: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  fm10k: protect fm10k_open in fm10k_io_resume with rtnl_lock  (Hannes Frederic Sowa)
fm10k_open requires rtnl_lock to be held. Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Cc: Shannon Nelson <shannon.nelson@intel.com> Cc: Carolyn Wyborny <carolyn.wyborny@intel.com> Cc: Don Skidmore <donald.c.skidmore@intel.com> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: John Ronciak <john.ronciak@intel.com> Cc: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  benet: be_resume needs to protect be_open with rtnl_lock  (Hannes Frederic Sowa)
be_open calls down to functions which expect the rtnl lock to be held. Cc: Sathya Perla <sathya.perla@broadcom.com> Cc: Ajit Khaparde <ajit.khaparde@broadcom.com> Cc: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com> Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Cc: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net: Add support for IP ID mangling TSO in cases that require encapsulation  (Alexander Duyck)
This patch adds support for NETIF_F_TSO_MANGLEID if a given tunnel supports NETIF_F_TSO. This way, if needed, a device can later enable TSO with IP ID mangling, and the tunnels on top of that device can then also make use of IP ID mangling. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  Merge branch 'mlx5-next'  (David S. Miller)
Saeed Mahameed says: ==================== Mellanox 100G mlx5 driver receive path optimizations

Changes from V2:
- Rebased to 46e7b8d8d53b ("net: dsa: kill circular reference with slave priv")
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
  * Per Eric Dumazet's comment, we changed the driver memory handling scheme to work with order-0 pages rather than order-5 pages, via split_page().
  * This means that an mlx5e RX skb can now hold one skb frag (or more, in case of HW LRO), each pointing to a 4K order-0 page, rather than one frag pointing to an order-5 page.
- Updated: ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
  * Code refactoring and code reuse thanks to the split_page() mechanism; the MPWQE and fragmented MPWQE handling now look almost the same and share most of the code.
- In some cases we see 2%-3% packet rate degradation in comparison to the order-5 pages approach, due to split_page() CPU consumption, but we still see a 3%-10% improvement in comparison to the current linear SKB approach.
- We believe the driver memory scheme is now significantly less vulnerable to the memory DoS attack Eric pointed at.

Changes from V1:
- Rebased to efde611b0afa ("Merge branch 'nfp-next'")
- Dropped: ("net/mlx5: Refactor mlx5_core_mr to mkey") - already merged into 4.6 from the rdma tree.
- Dropped: ("net/mlx5_core: Add ConnectX-5 to list of supported devices") - will be pushed to net, as we want it in the 4.6 release.
- Dropped: ("net/mlx5e: Change RX moderation period to be based on CQE") - will be pushed in a later series with full software-based adaptive moderation.
- Added: ("net/mlx5e: Delay skb->data access") - small trivial optimization.
- Updated: ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)") - changed Striding RQ defaults to:
  > NUM WQEs = 16
  > Strides Per WQE = 1024
  > Stride Size = 128
- Updated: ("net/mlx5e: Use napi_alloc_skb for RX SKB allocations") - consider the IP packet alignment already done in napi_alloc_skb.

Changes from V0:
- Fixed a typo in a commit message reported by Sergei
- Align SKB fragments truesize to stride size
- Use skb_add_rx_frag and remove the use of SKB_TRUESIZE
- Fix: # MTTs alignment on PowerPC
- Fix: free the original (unaligned) pointer of the MTT array
- Use dev_alloc_pages and dev_alloc_page
- Extend the stats.buff_alloc_err counter
- Reform the copying of the packet header into the skb linear data
- Add compiler hints for conditional statements
- Prefetch skb->data prior to copying the packet header into it
- Rework mlx5e_complete_rx_fragmented_mpwqe
- Handle SKB fragments before linear data
- Dropped ("net/mlx5e: Prefetch next RX CQE") for now
- Added a small patch that adds ConnectX-5 devices to the list of supported devices
- Rebased to 1cdba5505555 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next")

This series includes some RX modifications and optimizations for the mlx5 Ethernet driver. From Rana, we have one patch that adds support for ConnectX-4 queue counters. From Tariq, several patches centered around improving RX path message rate and CPU and memory utilization; each patch's commit message includes the performance improvement numbers for that specific patch.

In the 2nd patch we use a queue counter to report the "out of buffer" dropped packet count ("dropped packets due to lack of software resources"). The 3rd patch modifies the driver's default RSS spread to cover only the cores of the close NUMA node, for a better out-of-the-box experience.

In the 4th and 5th patches we utilize RX multi-packet WQE (Striding RQ) for better memory utilization, especially when hardware LRO is enabled, and for a better message rate for small packets. In the 6th and 7th patches we add a fallback mechanism to use fragmented memory when allocating large WQE strides fails, using UMR (User Memory Registration) and ICO (Internal Control Operations) SQs. In the 8th to 11th patches we make some small modifications which show some small extra improvements. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Add ethtool counter for RX buffer allocation failures  (Tariq Toukan)
Counts the number of RX buffer allocation failures and shows it in ethtool statistics. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Delay skb->data access  (Saeed Mahameed)
Move mlx5e_handle_csum and eth_type_trans to the end of mlx5e_build_rx_skb to gain some more time before accessing skb->data, to reduce cache misses. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Remove redundant barrier  (Tariq Toukan)
The bit-op operation one line before is an explicit barrier by itself. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Use napi_alloc_skb for RX SKB allocations  (Tariq Toukan)
Instead of netdev_alloc_skb, use the napi_alloc_skb function, which is designed to allocate skbuffs for RX in a channel-specific NAPI instance and implies the IP packet alignment. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
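A sketch of the swap described above; the wrapper and field names are illustrative, not the mlx5e code:

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* napi_alloc_skb() allocates from the per-channel NAPI page cache and
 * already applies the IP packet alignment (NET_IP_ALIGN), so no extra
 * skb_reserve() is needed by the caller. */
static struct sk_buff *rx_alloc_skb(struct napi_struct *napi,
                                    unsigned int headlen)
{
        struct sk_buff *skb = napi_alloc_skb(napi, headlen);

        if (unlikely(!skb))
                return NULL;    /* caller bumps its buff_alloc_err counter */

        return skb;
}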
2016-04-21  net/mlx5e: Add fragmented memory support for RX multi packet WQE  (Tariq Toukan)
If the allocation of a linear (physically contiguous) MPWQE fails, we allocate a fragmented MPWQE. This is implemented via the device's UMR (User Memory Registration), which allows registering multiple memory fragments into ConnectX hardware as a contiguous buffer. UMR registration is an asynchronous operation and is done via ICO SQs. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Added ICO SQs  (Tariq Toukan)
Added ICO (Internal Control Operations) SQ per channel to be used for driver internal operations such as memory registration for fragmented memory and nop requests upon ifconfig up. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Support RX multi-packet WQE (Striding RQ)  (Tariq Toukan)
Introduce the feature of multi-packet WQE (RX Work Queue Element), referred to as MPWQE or Striding RQ, in which WQEs are larger and each serves multiple packets. Every WQE consists of many strides of the same size; every received packet is aligned to the beginning of a stride and is written to consecutive strides within a WQE.

In the regular approach, each WQE is big enough to serve one received packet of any size up to MTU, or up to 64K when device LRO is enabled, which is very wasteful when dealing with small packets or when device LRO is enabled.

Thanks to its flexibility, MPWQE allows better memory utilization (implying improvements in CPU utilization and packet rate), as packets consume strides according to their size, preserving the rest of the WQE for other packets.

MPWQE default configuration:
  Num of WQEs = 16
  Strides Per WQE = 2048
  Stride Size = 64 bytes

The default WQE memory footprint went from 1024*MTU (~1.5MB) to 16 * 2048 * 64 = 2MB per ring. However, HW LRO can now be supported at no additional cost in memory footprint, hence we turn it on by default and get even better performance.

Performance tested on ConnectX4-Lx 50G. To isolate the feature under test, the numbers below were measured with HW LRO turned off. We verified that the performance only improves when LRO is turned back on.

* Netperf single TCP stream:
  - BW raised by 10-15% for representative packet sizes: default, 64B, 1024B, 1478B, 65536B.
* Netperf multi TCP stream:
  - No degradation, line rate reached.
* Pktgen: packet rate raised by 2-10% for traffic of different message sizes: 64B, 128B, 256B, 1024B, and 1500B.
* Pktgen: packet loss in bursts of small messages (64 byte), single stream:

  | num packets | packet loss before | packet loss after |
  |     2K      |       ~ 1K         |        0          |
  |     8K      |       ~ 6K         |        0          |
  |    16K      |       ~13K         |        0          |
  |    32K      |       ~28K         |        0          |
  |    64K      |       ~57K         |       ~24K        |

As expected, since the driver can now receive as many small packets (<=64B) as the total number of strides in the ring (default = 2048 * 16), vs. 1024 (the default ring size, regardless of packet size) before this feature.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
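The footprint and capacity numbers above follow directly from the defaults; a sketch of the arithmetic with illustrative constant names:

/* Illustrative constants matching the defaults quoted above. */
#define MPWQE_NUM_WQES          16
#define MPWQE_STRIDES_PER_WQE   2048
#define MPWQE_STRIDE_SIZE       64      /* bytes */

/* Footprint: 16 * 2048 * 64 B = 2 MiB per ring, vs. the old linear
 * scheme of 1024 WQEs * MTU (~1.5 MB at 1500 B MTU). Capacity for
 * <=64B packets: 16 * 2048 = 32768 strides vs. 1024 WQEs before,
 * which is what the burst-loss table above reflects. */
static const unsigned long mpwqe_ring_bytes =
        MPWQE_NUM_WQES * MPWQE_STRIDES_PER_WQE * MPWQE_STRIDE_SIZE;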
2016-04-21  net/mlx5e: Use function pointers for RX data path handling  (Tariq Toukan)
In preparation for Striding RQ feature, which will need its own RX handlers. This patch does not change any functionality. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5e: Use only close NUMA node for default RSS  (Tariq Toukan)
Distribute default RSS table uniformly over the rings of the close NUMA node, instead of all available channels. This way we enforce the preference of close rings over far ones. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
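A sketch of the idea: fill the indirection table round-robin over only the channels local to the device's NUMA node; the function and parameter names are illustrative, not the mlx5e implementation:

#include <linux/types.h>

static void fill_default_rss_indir(u32 *indir, int indir_sz,
                                   int num_local_channels)
{
        int i;

        /* Only rings on the close NUMA node are referenced, so far
         * rings never receive traffic under the default table. */
        for (i = 0; i < indir_sz; i++)
                indir[i] = i % num_local_channels;
}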
2016-04-21  net/mlx5e: Allocate set of queue counters per netdev  (Rana Shahout)
Connect all netdev RQs to this set of queue counters. Also, add an "rx_out_of_buffer" counter to ethtool, which indicates RX packet drops due to lack of receive buffers. Signed-off-by: Rana Shahout <ranas@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net/mlx5: Introduce device queue counters  (Tariq Toukan)
A queue counter can collect several statistics for one or more hardware queues (QPs, RQs, etc ..) that the counter is attached to. For Ethernet it will provide an "out of buffer" counter which collects the number of all packets that are dropped due to lack of software buffers. Here we add device commands to alloc/query/dealloc queue counters. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Rana Shahout <ranas@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  Merge branch 'bcmsysport-napi-updates'  (David S. Miller)
Florian Fainelli says: ==================== net: bcmsysport: utilize newer NAPI APIs These two patches are very analogous to what was already submitted for BCMGENET and switch the SYSTEMPORT driver to using __napi_schedule_irqoff() and napi_complete_done() for the RX NAPI context. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net: bcmsysport: use napi_complete_done()  (Florian Fainelli)
By using napi_complete_done(), we allow fine tuning of /sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation efficiency for a Gbit NIC. Check commit 24d2e4a50737 ("tg3: use napi_complete_done()") for details. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
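A sketch of the poll-routine pattern this enables; all nic_* names are illustrative, not the SYSTEMPORT code itself:

#include <linux/netdevice.h>

struct nic_rx_ring {
        struct napi_struct napi;
        /* ... ring state ... */
};

int nic_clean_rx_irq(struct nic_rx_ring *ring, int budget);     /* placeholder */
void nic_enable_rx_irq(struct nic_rx_ring *ring);               /* placeholder */

static int nic_rx_poll(struct napi_struct *napi, int budget)
{
        struct nic_rx_ring *ring = container_of(napi, struct nic_rx_ring, napi);
        int work_done = nic_clean_rx_irq(ring, budget);

        if (work_done < budget) {
                /* Reporting the real work_done (rather than calling
                 * napi_complete()) lets the stack honour gro_flush_timeout
                 * for better GRO aggregation, as the commit message notes. */
                napi_complete_done(napi, work_done);
                nic_enable_rx_irq(ring);
        }

        return work_done;
}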
2016-04-21  net: bcmsysport: use __napi_schedule_irqoff()  (Florian Fainelli)
Both bcm_sysport_tx_isr() and bcm_sysport_rx_isr() run in hard irq context, we do not need to block irq again. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
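A sketch of why the _irqoff variant is safe here: in a hard-irq handler interrupts are already disabled, so the redundant local_irq_save()/restore() pair can be skipped. The nic_* names are illustrative:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct nic_rx_ring {
        struct napi_struct napi;
};

void nic_mask_rx_irq(struct nic_rx_ring *ring);  /* placeholder */

static irqreturn_t nic_rx_isr(int irq, void *dev_id)
{
        struct nic_rx_ring *ring = dev_id;

        nic_mask_rx_irq(ring);
        if (likely(napi_schedule_prep(&ring->napi)))
                __napi_schedule_irqoff(&ring->napi);

        return IRQ_HANDLED;
}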
2016-04-21  Merge branch 'nlattr_align'  (David S. Miller)
Nicolas Dichtel says: ==================== libnl: enhance API to ease 64-bit alignment for attributes Here is a proposal to add more helpers to libnetlink to manage 64-bit alignment issues. Note that this series was only tested on x86, by tweaking CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS and adding some traces. The first patch adds helpers for 64-bit alignment and the other patches use them. We could also add helpers for nla_put_u64() and its variants if needed.
v1 -> v2:
- remove patch #1
- split patch #2 (now #1 and #2)
- add nla_need_padding_for_64bit()
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
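A sketch of the intended usage, modelled on the ipmr/ip6mr patches in this series; the exact helper signature and pad attribute name (nla_put_64bit, RTA_PAD) are stated from memory and should be treated as an assumption:

#include <net/netlink.h>
#include <linux/rtnetlink.h>

/* The pad attribute is only emitted on architectures without efficient
 * unaligned access, when the 64-bit payload would otherwise land on a
 * non-8-byte-aligned offset in the message. */
static int dump_mfc_stats(struct sk_buff *skb,
                          const struct rta_mfc_stats *mfcs)
{
        if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(*mfcs), mfcs, RTA_PAD))
                return -EMSGSIZE;

        return 0;
}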
2016-04-21  ip6mr: align RTA_MFC_STATS on 64-bit  (Nicolas Dichtel)
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  ipmr: align RTA_MFC_STATS on 64-bit  (Nicolas Dichtel)
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  rtnl: use the new API to align IFLA_STATS*  (Nicolas Dichtel)
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  libnl: add more helpers to align attributes on 64-bit  (Nicolas Dichtel)
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  veth: Update features to include all tunnel GSO types  (Alexander Duyck)
This patch adds support for the checksum enabled versions of UDP and GRE tunnels. With this change we should be able to send and receive GSO frames of these types over the veth pair without needing to segment the packets. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  netdev_features: Fold NETIF_F_ALL_TSO into NETIF_F_GSO_SOFTWARE  (Alexander Duyck)
This patch folds NETIF_F_ALL_TSO into the bitmask for NETIF_F_GSO_SOFTWARE. The idea is to avoid duplication of defines since the only difference between the two was the GSO_UDP bit. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
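A sketch of what the folded define might look like, per the commit message ("the only difference between the two was the GSO_UDP bit"); the exact member bits of NETIF_F_ALL_TSO are an assumption of this sketch, not the upstream netdev_features.h text:

/* netdev_features.h-style sketch, not the verbatim upstream defines. */
#define NETIF_F_ALL_TSO         (NETIF_F_TSO | NETIF_F_TSO6 | \
                                 NETIF_F_TSO_ECN | NETIF_F_TSO_MANGLEID)
#define NETIF_F_GSO_SOFTWARE    (NETIF_F_ALL_TSO | NETIF_F_UFO)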
2016-04-21  geneve: testing the wrong variable in geneve6_build_skb()  (Dan Carpenter)
We intended to test "err" and not "skb". Fixes: aed069df099c ('ip_tunnel_core: iptunnel_handle_offloads returns int and doesn't free skb') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
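The shape of the bug class being fixed: after aed069df099c the offload helper returns an int and no longer returns a (possibly new) skb, so the error path must test the return value, not the skb. A sketch following the pattern only, not the exact geneve6_build_skb() code:

#include <linux/skbuff.h>
#include <net/udp_tunnel.h>

static int build_tunnel_headers(struct sk_buff *skb, bool udp_sum)
{
        int err;

        err = udp_tunnel_handle_offloads(skb, udp_sum);
        if (err)                /* previously (wrongly): if (IS_ERR(skb)) */
                return err;

        /* ... push tunnel/UDP headers onto skb ... */
        return 0;
}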
2016-04-21  NLA_BINARY misuse bug in HSR  (Peter Heise)
Removed the .type field from the NLA policy to do proper length checking. Reported by Daniel Borkmann and Julia Lawall. Signed-off-by: Peter Heise <peter.heise@airbus.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net: use jiffies_to_msecs to replace EXPIRES_IN_MS in inet/sctp_diag  (Xin Long)
The EXPIRES_IN_MS macro comes from net/ipv4/inet_diag.c and dates back to before jiffies_to_msecs() was introduced. Now we can remove it and use jiffies_to_msecs(). Suggested-by: Jakub Sitnicki <jkbs@redhat.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Jakub Sitnicki <jkbs@redhat.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
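Illustrative of the replacement (the wrapper name is made up; the actual diff simply open-codes the call): the hand-rolled jiffies-to-milliseconds conversion becomes a plain jiffies_to_msecs() call on the remaining time.

#include <linux/jiffies.h>
#include <linux/types.h>

static inline u32 expires_in_ms(unsigned long timeout)
{
        /* Replaces the old EXPIRES_IN_MS(tmo)-style arithmetic. */
        return jiffies_to_msecs(timeout - jiffies);
}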
2016-04-21  perf, bpf: minimize the size of perf_trace_() tracepoint handler  (Alexei Starovoitov)
Move trace_call_bpf() into a helper function to minimize the size of the perf_trace_*() tracepoint handlers.

    text        data        bss        dec        hex     filename
10541679     5526646    2945024   19013349    1221ee5     vmlinux_before
10509422     5526646    2945024   18981092    121a0e4     vmlinux_after

It may seem that perf_fetch_caller_regs() can also be moved, but that is incorrect, since ip/sp would be wrong. bpf+tracepoint performance is not affected, since perf_swevent_put_recursion_context() is now inlined. The EXPORT_SYMBOL_GPL can also be dropped. No measurable change in normal perf tracepoints. Suggested-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  net: dsa: remove tag_protocol from dsa_switch  (Vivien Didelot)
Having the tag protocol in dsa_switch_driver for setup time and in dsa_switch_tree for runtime is enough. Remove dsa_switch's one. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue  (David S. Miller)
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2016-04-20 This series contains updates to fm10k only.

Jacob provides the majority of the changes in this series, starting with the addition of helper functions to reduce code duplication and the amount of code indentation. Fixed the use (or should we say abuse) of the ethtool stats API, which could result in corrupt memory or misleading statistic output. Added the appropriate rtnl_lock() and rtnl_unlock() to avoid RCU warnings during AER events. Come to find out, the PTP/1588 support does not work with the current version of the switch management software and possibly never worked, so just remove support for PTP/1588 for now. Fixed how error responses from the switch manager after an LPORT_MAP request are handled, which were originally being silently ignored. Fixed up code documentation to hopefully ease code and comment comprehension. Fixed a possible NULL pointer dereference after kcalloc(): when writing a new default redirection table, we populate a new RSS table using ethtool_rxfh_indir_default() into a region of memory allocated using kcalloc(), but never check it for NULL.

Alex adds support for bulk transmit cleanup for fm10k, like he did for all of our other drivers.

Ngai-Mint fixes a number of issues with the unicast and multicast address syncs, where an issue would occur when the netdev is pre-configured to either multicast mode and is enabled for the first time. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2016-04-21  fm10k: fix incorrect IPv6 extended header checksum  (Jacob Keller)
Check for and handle IPv6 extended headers so that Tx checksum offload can be done. Also use skb_checksum_help for unexpected cases. This was originally discovered in ixgbe. Reported-by: Mark Rustad <mark.d.rustad@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: consistently use Intel(R) for driver names  (Jacob Keller)
Update every header file and other locations to consistently use Intel(R) instead of just Intel. Also update copyright year of files which we modified. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: fix possible null pointer deref after kcalloc  (Jacob Keller)
When writing a new default redirection table, we needed to populate a new RSS table using ethtool_rxfh_indir_default. We populated this table into a region of memory allocated using kcalloc, but never checked it for NULL. Fix this by moving the default table generation into fm10k_write_reta. If this function is passed a table, use it. Otherwise, generate the default table using ethtool_rxfh_indir_default, 4 entries at a time. Fixes: 0ea7fae44094 ("fm10k: use ethtool_rxfh_indir_default for default redirection table") Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
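A sketch of the fixed flow: generate the default table inside the write helper and check the allocation. The names are illustrative, and while the real code packs four 8-bit entries per 32-bit register, one entry per slot is used here for clarity:

#include <linux/ethtool.h>
#include <linux/slab.h>

struct nic_interface;                                           /* placeholder */
void nic_write_reta_registers(struct nic_interface *iface,
                              const u32 *indir, u32 size);      /* placeholder */

static int nic_write_default_reta(struct nic_interface *iface,
                                  u32 reta_size, u32 num_rx_rings)
{
        u32 *indir;
        u32 i;

        indir = kcalloc(reta_size, sizeof(u32), GFP_KERNEL);
        if (!indir)
                return -ENOMEM;         /* the previously missing NULL check */

        for (i = 0; i < reta_size; i++)
                indir[i] = ethtool_rxfh_indir_default(i, num_rx_rings);

        nic_write_reta_registers(iface, indir, reta_size);
        kfree(indir);

        return 0;
}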
2016-04-21  fm10k: Reset multicast mode when deleting lport  (Ngai-Mint Kwan)
Deleting lport when multicast mode is configured to FM10K_XCAST_MODE_ALLMULTI or FM10K_XCAST_MODE_PROMISC will result in generating orphaned multicast-group entries in the switch manager. Before deleting the lport, reset multicast mode to FM10K_XCAST_MODE_NONE to flush out these multicast-group entries. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: update comment regarding reserved bits check  (Jacob Keller)
The original comment may be read incorrectly as referring to checking that the *entire* length is zero. However, it checks only the reserved bits of both the length and reserved fields, in a small amount of code. Update the comment to indicate that this is a clever trick and clearly spell out that it only checks the reserved bits. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: use different name than FM10K_VLAN_CLEAR for override bit  (Jacob Keller)
Use a new #define FM10K_VLAN_OVERRIDE even though we're using the exact same bit. The reason for this is clarity in the code, otherwise you can read FM10K_VLAN_CLEAR and think it should be removed. Also add a comment explaining why the FM10K_VLAN_OVERRIDE bit is set. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: use 8bit notation instead of 10bit notation for diagram  (Jacob Keller)
The diagram represents the bit layout of the multi-bit VLAN update message format. Typically these diagrams are drawn using some power of 2 as the base, to more easily grasp where fields split. Although the current numbers can make it somewhat easy to understand which bit you're looking at, they make the break points not line up. Re-draw the numbers using base 8, and mark the bit values every 8 bits at the top. This should make the table easier to grasp quickly. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: fix documentation of fm10k_tlv_parse_attr  (Jacob Keller)
fm10k_tlv_parse_attr is supposed to return FM10K_NOT_IMPLEMENTED for any TLV whose attribute id lies outside the range of results. It does not do this today. In addition, the documentation does not indicate that other attributes which are not implemented for a given TLV will be silently ignored. Fix this. Clean up the logic so that we don't rely on the fact that FM10K_NOT_IMPLEMENTED is greater than zero, as this can easily cause confusion. A future extension could look into some way of reporting unknown TLVs in order to make issues more easily discoverable. We can't just return FM10K_NOT_IMPLEMENTED here, because we don't want to drop the entire message if it has an unknown TLV. While here, update the copyright year. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: do not disable PCI device in fm10k_io_error_detected  (Jacob Keller)
fm10k_io_error_detected() does not need to call pci_disable_device(). In the cases where the reset needs to occur, the stack flow will result in calling fm10k_remove(), which already disables the PCI device. If we leave the pci_disable_device() in place, we get a warning about disabling an already disabled device. Many PCI drivers do call pci_disable_device() in their .error_detected() routines, but it does not appear to be required. In addition, these drivers have an "is_pci_enabled()" check in their remove routines, which is how they chose to handle the duplicate device disable. This seems incorrect, since the PCI device structure is reference counted. It is very possible that the reference count for the PCI device could be greater than 1. In this case, you would disable the PCI device within the error_detected routine, reducing the count to 1, then disable it again in the remove function, reducing it to zero. This would result in yet another disable somewhere else failing. Thus, we shouldn't be using is_pci_enabled() to check for this issue. Instead, just remove the extraneous pci_disable_device() found within the error_detected routine. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
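A sketch of the pattern after the fix: report the channel state and let the eventual remove path perform the single pci_disable_device() call. The function name mirrors, but is not, the fm10k handler:

#include <linux/pci.h>

static pci_ers_result_t nic_io_error_detected(struct pci_dev *pdev,
                                              pci_channel_state_t state)
{
        if (state == pci_channel_io_perm_failure)
                return PCI_ERS_RESULT_DISCONNECT;

        /* Note: no pci_disable_device(pdev) here anymore; the remove
         * path triggered by the reset already disables the device. */
        return PCI_ERS_RESULT_NEED_RESET;
}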
2016-04-21  fm10k: correctly handle LPORT_MAP error  (Jacob Keller)
Currently, any error responses from the switch manager after an LPORT_MAP request are silently ignored. At most the mailbox message will be reported as an error. This can result in unexpected behavior when the switch manager has configured a port with zero bandwidth. Add support for reading the fm10k_swapi_error structure from LPORT_MAP responses. If the message contains the necessary TLV and has a non-zero error code, report link down, clear the dglort_map, and delay the next get_host_state call by a reasonable delay. Also log an error message indicating that the LPORT_MAP request failed. The delay prevents an interrupt storm on the switch manager and drastically reduces the number of mailbox messages we send in this scenario. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: Fix multicast mode sync issues  (Ngai-Mint Kwan)
Multicast mode checking is no longer a requirement to perform unicast and multicast address syncs. Specifically, a device operating in promiscuous and/or all-multicast mode is not excluded. The issue occurs when the netdev is pre-configured to either of these multicast modes and is enabled for the first time: the multicast-group table in the Switch Manager will be missing obvious multicast entries associated with this netdev. Changes were also made to disallow unicast and multicast syncs with VLAN 0. The Switch Manager considers VLAN 0 an invalid entry. Requests with VLAN 0 by the netdev are only generated when the driver is freshly installed and the default VID is not set. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: drop 1588 support  (Jacob Keller)
The 1588 support within fm10k does not work correctly with the current version of the switch management software, and likely never worked correctly to begin with. Remove support for PTP/1588. Update copyright year for all these files while we're touching them. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-04-21  fm10k: prevent RCU issues during AER events  (Jacob Keller)
During an AER action response, we were calling fm10k_close without holding the rtnl_lock(), which could lead to RCU warnings due to 64-bit stat updates, among other causes. Similarly, we need rtnl_lock() around fm10k_open during fm10k_io_resume. Follow the same pattern used elsewhere in the driver and protect the entire open/close sequence. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>