Merge 4.14.34 into android-4.14
author Greg Kroah-Hartman <gregkh@google.com>
Thu, 12 Apr 2018 12:51:09 +0000 (14:51 +0200)
committer Greg Kroah-Hartman <gregkh@google.com>
Thu, 12 Apr 2018 12:51:09 +0000 (14:51 +0200)
Changes in 4.14.34
i40iw: Fix sequence number for the first partial FPDU
i40iw: Correct Q1/XF object count equation
i40iw: Validate correct IRD/ORD connection parameters
clk: meson: mpll: use 64-bit maths in params_from_rate
ARM: dts: ls1021a: add "fsl,ls1021a-esdhc" compatible string to esdhc node
Bluetooth: Add a new 04ca:3015 QCA_ROME device
ipv6: Reinject IPv6 packets if IPsec policy matches after SNAT
thermal: power_allocator: fix one race condition issue for thermal_instances list
perf probe: Find versioned symbols from map
perf probe: Add warning message if there is unexpected event name
perf evsel: Enable ignore_missing_thread for pid option
net: hns3: free the ring_data structure when changing tqps
net: hns3: fix for getting auto-negotiation state in hclge_get_autoneg
l2tp: fix missing print session offset info
rds: Reset rs->rs_bound_addr in rds_add_bound() failure path
ACPI / video: Default lcd_only to true on Win8-ready and newer machines
net/mlx4_en: Change default QoS settings
VFS: close race between getcwd() and d_move()
watchdog: dw_wdt: add stop watchdog operation
clk: divider: fix incorrect usage of container_of
PM / devfreq: Fix potential NULL pointer dereference in governor_store
selftests/net: fix bugs in address and port initialization
RDMA/cma: Mark end of CMA ID messages
hwmon: (ina2xx) Make calibration register value fixed
clk: sunxi-ng: a83t: Add M divider to TCON1 clock
media: videobuf2-core: don't go out of the buffer range
ASoC: Intel: Skylake: Disable clock gating during firmware and library download
ASoC: Intel: cht_bsw_rt5645: Analog Mic support
spi: sh-msiof: Fix timeout failures for TX-only DMA transfers
scsi: libiscsi: Allow sd_shutdown on bad transport
scsi: mpt3sas: Proper handling of set/clear of "ATA command pending" flag.
irqchip/gic-v3: Fix the driver probe() fail due to disabled GICC entry
ACPI: EC: Fix debugfs_create_*() usage
mac80211: Fix setting TX power on monitor interfaces
vfb: fix video mode and line_length being set when loaded
gpio: label descriptors using the device name
powernv-cpufreq: Add helper to extract pstate from PMSR
IB/rdmavt: Allocate CQ memory on the correct node
blk-mq: avoid to map CPU into stale hw queue
blk-mq: fix race between updating nr_hw_queues and switching io sched
backlight: tdo24m: Fix the SPI CS between transfers
pinctrl: baytrail: Enable glitch filter for GPIOs used as interrupts
nvme_fcloop: disassociate local port structs
nvme_fcloop: fix abort race condition
tpm: return a TPM_RC_COMMAND_CODE response if command is not implemented
perf report: Fix a no annotate browser displayed issue
staging: lustre: disable preempt while sampling processor id.
ASoC: Intel: sst: Fix the return value of 'sst_send_byte_stream_mrfld()'
power: supply: axp288_charger: Properly stop work on probe-error / remove
rt2x00: do not pause queue unconditionally on error path
wl1251: check return from call to wl1251_acx_arp_ip_filter
net/mlx5: Fix race for multiple RoCE enable
net: hns3: Fix an error of total drop packet statistics
net: hns3: Fix a loop index error of tqp statistics query
net: hns3: Fix an error macro definition of HNS3_TQP_STAT
net: hns3: fix for changing MTU
bcache: ret IOERR when read meets metadata error
bcache: stop writeback thread after detaching
bcache: segregate flash only volume write streams
scsi: libsas: fix memory leak in sas_smp_get_phy_events()
scsi: libsas: fix error when getting phy events
scsi: libsas: initialize sas_phy status according to response of DISCOVER
blk-mq: fix kernel oops in blk_mq_tag_idle()
tty: n_gsm: Allow ADM response in addition to UA for control dlci
block, bfq: put async queues for root bfq groups too
EDAC, mv64x60: Fix an error handling path
uio_hv_generic: check that host supports monitor page
i40evf: don't rely on netif_running() outside rtnl_lock()
cxgb4vf: Fix SGE FL buffer initialization logic for 64K pages
scsi: megaraid_sas: Error handling for invalid ldcount provided by firmware in RAID map
scsi: megaraid_sas: unload flag should be set after scsi_remove_host is called
RDMA/cma: Fix rdma_cm path querying for RoCE
gpio: thunderx: fix error return code in thunderx_gpio_probe()
x86/gart: Exclude GART aperture from vmcore
sdhci: Advertise 2.0v supply on SDIO host controller
ibmvnic: Don't handle RX interrupts when not up.
Input: goodix - disable IRQs while suspended
mtd: mtd_oobtest: Handle bitflips during reads
crypto: aes-generic - build with -Os on gcc-7+
perf tools: Fix copyfile_offset update of output offset
tcmu: release blocks for partially setup cmds
thermal: int3400_thermal: fix error handling in int3400_thermal_probe()
objtool: Add Clang support
crypto: arm64/aes-ce-cipher - move assembler code to .S file
x86/microcode: Propagate return value from updating functions
x86/CPU: Add a microcode loader callback
x86/CPU: Check CPU feature bits after microcode upgrade
x86/microcode: Get rid of struct apply_microcode_ctx
x86/microcode/intel: Check microcode revision before updating sibling threads
x86/microcode/intel: Writeback and invalidate caches before updating microcode
x86/microcode: Do not upload microcode if CPUs are offline
x86/microcode/intel: Look into the patch cache first
x86/microcode: Request microcode on the BSP
x86/microcode: Synchronize late microcode loading
x86/microcode: Attempt late loading only when new microcode is present
x86/microcode: Fix CPU synchronization routine
arp: fix arp_filter on l3slave devices
ipv6: the entire IPv6 header chain must fit the first fragment
lan78xx: Crash in lan78xx_writ_reg (Workqueue: events lan78xx_deferred_multicast_write)
net: fix possible out-of-bound read in skb_network_protocol()
net/ipv6: Fix route leaking between VRFs
net/ipv6: Increment OUTxxx counters after netfilter hook
netlink: make sure nladdr has correct size in netlink_connect()
net sched actions: fix dumping which requires several messages to user space
net/sched: fix NULL dereference in the error path of tcf_bpf_init()
pptp: remove a buggy dst release in pptp_connect()
r8169: fix setting driver_data after register_netdev
sctp: do not leak kernel memory to user space
sctp: sctp_sockaddr_af must check minimal addr length for AF_INET6
sky2: Increase D3 delay, as sky2 stops working after suspend
vhost: correctly remove wait queue during poll failure
vlan: also check phy_driver ts_info for vlan's real device
vrf: Fix use after free and double free in vrf_finish_output
bonding: fix the err path for dev hwaddr sync in bond_enslave
bonding: move dev_mc_sync after master_upper_dev_link in bond_enslave
bonding: process the err returned by dev_set_allmulti properly in bond_enslave
net: fool proof dev_valid_name()
ip_tunnel: better validate user provided tunnel names
ipv6: sit: better validate user provided tunnel names
ip6_gre: better validate user provided tunnel names
ip6_tunnel: better validate user provided tunnel names
vti6: better validate user provided tunnel names
net/mlx5e: Avoid using the ipv6 stub in the TC offload neigh update path
net/mlx5e: Fix memory usage issues in offloading TC flows
nfp: use full 40 bits of the NSP buffer address
ipv6: sr: fix seg6 encap performances with TSO enabled
net/mlx5e: Don't override vport admin link state in switchdev mode
net/mlx5e: Sync netdev vxlan ports at open
net/sched: fix NULL dereference in the error path of tunnel_key_init()
net/sched: fix NULL dereference on the error path of tcf_skbmod_init()
strparser: Fix sign of err codes
net/mlx4_en: Fix mixed PFC and Global pause user control requests
net/mlx5e: Fix traffic being dropped on VF representor
vhost: validate log when IOTLB is enabled
route: check sysctl_fib_multipath_use_neigh earlier than hash
team: move dev_mc_sync after master_upper_dev_link in team_port_add
vhost_net: add missing lock nesting notation
net/mlx4_core: Fix memory leak while delete slave's resources
Linux 4.14.34

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
144 files changed:
Makefile
arch/arm/boot/dts/ls1021a.dtsi
arch/arm64/crypto/Makefile
arch/arm64/crypto/aes-ce-cipher.c [deleted file]
arch/arm64/crypto/aes-ce-core.S [new file with mode: 0644]
arch/arm64/crypto/aes-ce-glue.c [new file with mode: 0644]
arch/x86/include/asm/microcode.h
arch/x86/include/asm/processor.h
arch/x86/kernel/aperture_64.c
arch/x86/kernel/cpu/common.c
arch/x86/kernel/cpu/microcode/amd.c
arch/x86/kernel/cpu/microcode/core.c
arch/x86/kernel/cpu/microcode/intel.c
arch/x86/xen/mmu_hvm.c
block/bfq-cgroup.c
block/blk-mq.c
crypto/Makefile
drivers/acpi/acpi_video.c
drivers/acpi/ec.c
drivers/acpi/ec_sys.c
drivers/acpi/internal.h
drivers/bluetooth/btusb.c
drivers/char/tpm/tpm-interface.c
drivers/char/tpm/tpm.h
drivers/clk/clk-divider.c
drivers/clk/hisilicon/clkdivider-hi6220.c
drivers/clk/meson/clk-mpll.c
drivers/clk/nxp/clk-lpc32xx.c
drivers/clk/qcom/clk-regmap-divider.c
drivers/clk/sunxi-ng/ccu-sun8i-a83t.c
drivers/clk/sunxi-ng/ccu_div.c
drivers/cpufreq/powernv-cpufreq.c
drivers/devfreq/devfreq.c
drivers/edac/mv64x60_edac.c
drivers/gpio/gpio-thunderx.c
drivers/gpio/gpiolib.c
drivers/gpu/drm/msm/dsi/pll/dsi_pll_14nm.c
drivers/hwmon/ina2xx.c
drivers/infiniband/core/cma.c
drivers/infiniband/core/ucma.c
drivers/infiniband/hw/i40iw/i40iw_cm.c
drivers/infiniband/hw/i40iw/i40iw_ctrl.c
drivers/infiniband/hw/i40iw/i40iw_d.h
drivers/infiniband/hw/i40iw/i40iw_puda.c
drivers/infiniband/sw/rdmavt/cq.c
drivers/input/touchscreen/goodix.c
drivers/irqchip/irq-gic-v3.c
drivers/md/bcache/alloc.c
drivers/md/bcache/request.c
drivers/md/bcache/super.c
drivers/media/v4l2-core/videobuf2-core.c
drivers/mmc/host/sdhci-pci-core.c
drivers/mmc/host/sdhci.c
drivers/mtd/tests/oobtest.c
drivers/net/bonding/bond_main.c
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
drivers/net/ethernet/ibm/ibmvnic.c
drivers/net/ethernet/intel/i40evf/i40evf_main.c
drivers/net/ethernet/marvell/sky2.c
drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
drivers/net/ethernet/mellanox/mlx4/en_main.c
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
drivers/net/ethernet/mellanox/mlx5/core/vport.c
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
drivers/net/ethernet/realtek/r8169.c
drivers/net/ppp/pptp.c
drivers/net/team/team.c
drivers/net/usb/lan78xx.c
drivers/net/vrf.c
drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
drivers/net/wireless/ti/wl1251/main.c
drivers/nvme/target/fcloop.c
drivers/pinctrl/intel/pinctrl-baytrail.c
drivers/power/supply/axp288_charger.c
drivers/rtc/rtc-ac100.c
drivers/scsi/libiscsi.c
drivers/scsi/libsas/sas_expander.c
drivers/scsi/megaraid/megaraid_sas_base.c
drivers/scsi/megaraid/megaraid_sas_fp.c
drivers/scsi/mpt3sas/mpt3sas_scsih.c
drivers/spi/spi-sh-msiof.c
drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c
drivers/target/target_core_user.c
drivers/thermal/int340x_thermal/int3400_thermal.c
drivers/thermal/power_allocator.c
drivers/tty/n_gsm.c
drivers/uio/uio_hv_generic.c
drivers/vhost/net.c
drivers/vhost/vhost.c
drivers/video/backlight/corgi_lcd.c
drivers/video/backlight/tdo24m.c
drivers/video/backlight/tosa_lcd.c
drivers/video/fbdev/vfb.c
drivers/watchdog/dw_wdt.c
fs/dcache.c
include/linux/clk-provider.h
include/linux/mlx5/driver.h
net/8021q/vlan_dev.c
net/core/dev.c
net/ipv4/arp.c
net/ipv4/fib_semantics.c
net/ipv4/ip_tunnel.c
net/ipv6/ip6_gre.c
net/ipv6/ip6_output.c
net/ipv6/ip6_tunnel.c
net/ipv6/ip6_vti.c
net/ipv6/route.c
net/ipv6/seg6_iptunnel.c
net/ipv6/sit.c
net/l2tp/l2tp_netlink.c
net/mac80211/cfg.c
net/mac80211/driver-ops.h
net/netlink/af_netlink.c
net/rds/bind.c
net/sched/act_api.c
net/sched/act_bpf.c
net/sched/act_skbmod.c
net/sched/act_tunnel_key.c
net/sctp/ipv6.c
net/sctp/socket.c
net/strparser/strparser.c
sound/soc/intel/atom/sst/sst_stream.c
sound/soc/intel/boards/cht_bsw_rt5645.c
sound/soc/intel/skylake/skl-messages.c
sound/soc/intel/skylake/skl-pcm.c
tools/objtool/check.c
tools/perf/arch/powerpc/util/sym-handling.c
tools/perf/builtin-record.c
tools/perf/builtin-report.c
tools/perf/util/evsel.c
tools/perf/util/probe-event.c
tools/perf/util/symbol.c
tools/perf/util/symbol.h
tools/perf/util/util.c
tools/testing/selftests/net/msg_zerocopy.c

index 577062dc37040dea7d2bf65aaecaa35957a672d5..0c0236524acc3ebb0e25398d4870934eae69959b 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 14
-SUBLEVEL = 33
+SUBLEVEL = 34
 EXTRAVERSION =
 NAME = Petit Gorille
 
index 9319e1f0f1d8f8360ad11513b178bca7cc71d9fe..379b4a03cfe2f7b92b2c3bd630d204be12a337d9 100644 (file)
                };
 
                esdhc: esdhc@1560000 {
-                       compatible = "fsl,esdhc";
+                       compatible = "fsl,ls1021a-esdhc", "fsl,esdhc";
                        reg = <0x0 0x1560000 0x0 0x10000>;
                        interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
                        clock-frequency = <0>;
index fc69150ec0a389e1135c9f5d257b89dd44eda2fe..e761c0a7a181ceac4095b8764716470dc4f5cd65 100644 (file)
@@ -24,7 +24,7 @@ obj-$(CONFIG_CRYPTO_CRC32_ARM64_CE) += crc32-ce.o
 crc32-ce-y:= crc32-ce-core.o crc32-ce-glue.o
 
 obj-$(CONFIG_CRYPTO_AES_ARM64_CE) += aes-ce-cipher.o
-CFLAGS_aes-ce-cipher.o += -march=armv8-a+crypto
+aes-ce-cipher-y := aes-ce-core.o aes-ce-glue.o
 
 obj-$(CONFIG_CRYPTO_AES_ARM64_CE_CCM) += aes-ce-ccm.o
 aes-ce-ccm-y := aes-ce-ccm-glue.o aes-ce-ccm-core.o
diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c
deleted file mode 100644 (file)
index 6a75cd7..0000000
+++ /dev/null
@@ -1,281 +0,0 @@
-/*
- * aes-ce-cipher.c - core AES cipher using ARMv8 Crypto Extensions
- *
- * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#include <asm/neon.h>
-#include <asm/simd.h>
-#include <asm/unaligned.h>
-#include <crypto/aes.h>
-#include <linux/cpufeature.h>
-#include <linux/crypto.h>
-#include <linux/module.h>
-
-#include "aes-ce-setkey.h"
-
-MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions");
-MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
-MODULE_LICENSE("GPL v2");
-
-asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-
-struct aes_block {
-       u8 b[AES_BLOCK_SIZE];
-};
-
-static int num_rounds(struct crypto_aes_ctx *ctx)
-{
-       /*
-        * # of rounds specified by AES:
-        * 128 bit key          10 rounds
-        * 192 bit key          12 rounds
-        * 256 bit key          14 rounds
-        * => n byte key        => 6 + (n/4) rounds
-        */
-       return 6 + ctx->key_length / 4;
-}
-
-static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
-{
-       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-       struct aes_block *out = (struct aes_block *)dst;
-       struct aes_block const *in = (struct aes_block *)src;
-       void *dummy0;
-       int dummy1;
-
-       if (!may_use_simd()) {
-               __aes_arm64_encrypt(ctx->key_enc, dst, src, num_rounds(ctx));
-               return;
-       }
-
-       kernel_neon_begin();
-
-       __asm__("       ld1     {v0.16b}, %[in]                 ;"
-               "       ld1     {v1.4s}, [%[key]], #16          ;"
-               "       cmp     %w[rounds], #10                 ;"
-               "       bmi     0f                              ;"
-               "       bne     3f                              ;"
-               "       mov     v3.16b, v1.16b                  ;"
-               "       b       2f                              ;"
-               "0:     mov     v2.16b, v1.16b                  ;"
-               "       ld1     {v3.4s}, [%[key]], #16          ;"
-               "1:     aese    v0.16b, v2.16b                  ;"
-               "       aesmc   v0.16b, v0.16b                  ;"
-               "2:     ld1     {v1.4s}, [%[key]], #16          ;"
-               "       aese    v0.16b, v3.16b                  ;"
-               "       aesmc   v0.16b, v0.16b                  ;"
-               "3:     ld1     {v2.4s}, [%[key]], #16          ;"
-               "       subs    %w[rounds], %w[rounds], #3      ;"
-               "       aese    v0.16b, v1.16b                  ;"
-               "       aesmc   v0.16b, v0.16b                  ;"
-               "       ld1     {v3.4s}, [%[key]], #16          ;"
-               "       bpl     1b                              ;"
-               "       aese    v0.16b, v2.16b                  ;"
-               "       eor     v0.16b, v0.16b, v3.16b          ;"
-               "       st1     {v0.16b}, %[out]                ;"
-
-       :       [out]           "=Q"(*out),
-               [key]           "=r"(dummy0),
-               [rounds]        "=r"(dummy1)
-       :       [in]            "Q"(*in),
-                               "1"(ctx->key_enc),
-                               "2"(num_rounds(ctx) - 2)
-       :       "cc");
-
-       kernel_neon_end();
-}
-
-static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
-{
-       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-       struct aes_block *out = (struct aes_block *)dst;
-       struct aes_block const *in = (struct aes_block *)src;
-       void *dummy0;
-       int dummy1;
-
-       if (!may_use_simd()) {
-               __aes_arm64_decrypt(ctx->key_dec, dst, src, num_rounds(ctx));
-               return;
-       }
-
-       kernel_neon_begin();
-
-       __asm__("       ld1     {v0.16b}, %[in]                 ;"
-               "       ld1     {v1.4s}, [%[key]], #16          ;"
-               "       cmp     %w[rounds], #10                 ;"
-               "       bmi     0f                              ;"
-               "       bne     3f                              ;"
-               "       mov     v3.16b, v1.16b                  ;"
-               "       b       2f                              ;"
-               "0:     mov     v2.16b, v1.16b                  ;"
-               "       ld1     {v3.4s}, [%[key]], #16          ;"
-               "1:     aesd    v0.16b, v2.16b                  ;"
-               "       aesimc  v0.16b, v0.16b                  ;"
-               "2:     ld1     {v1.4s}, [%[key]], #16          ;"
-               "       aesd    v0.16b, v3.16b                  ;"
-               "       aesimc  v0.16b, v0.16b                  ;"
-               "3:     ld1     {v2.4s}, [%[key]], #16          ;"
-               "       subs    %w[rounds], %w[rounds], #3      ;"
-               "       aesd    v0.16b, v1.16b                  ;"
-               "       aesimc  v0.16b, v0.16b                  ;"
-               "       ld1     {v3.4s}, [%[key]], #16          ;"
-               "       bpl     1b                              ;"
-               "       aesd    v0.16b, v2.16b                  ;"
-               "       eor     v0.16b, v0.16b, v3.16b          ;"
-               "       st1     {v0.16b}, %[out]                ;"
-
-       :       [out]           "=Q"(*out),
-               [key]           "=r"(dummy0),
-               [rounds]        "=r"(dummy1)
-       :       [in]            "Q"(*in),
-                               "1"(ctx->key_dec),
-                               "2"(num_rounds(ctx) - 2)
-       :       "cc");
-
-       kernel_neon_end();
-}
-
-/*
- * aes_sub() - use the aese instruction to perform the AES sbox substitution
- *             on each byte in 'input'
- */
-static u32 aes_sub(u32 input)
-{
-       u32 ret;
-
-       __asm__("dup    v1.4s, %w[in]           ;"
-               "movi   v0.16b, #0              ;"
-               "aese   v0.16b, v1.16b          ;"
-               "umov   %w[out], v0.4s[0]       ;"
-
-       :       [out]   "=r"(ret)
-       :       [in]    "r"(input)
-       :               "v0","v1");
-
-       return ret;
-}
-
-int ce_aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key,
-                    unsigned int key_len)
-{
-       /*
-        * The AES key schedule round constants
-        */
-       static u8 const rcon[] = {
-               0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36,
-       };
-
-       u32 kwords = key_len / sizeof(u32);
-       struct aes_block *key_enc, *key_dec;
-       int i, j;
-
-       if (key_len != AES_KEYSIZE_128 &&
-           key_len != AES_KEYSIZE_192 &&
-           key_len != AES_KEYSIZE_256)
-               return -EINVAL;
-
-       ctx->key_length = key_len;
-       for (i = 0; i < kwords; i++)
-               ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32));
-
-       kernel_neon_begin();
-       for (i = 0; i < sizeof(rcon); i++) {
-               u32 *rki = ctx->key_enc + (i * kwords);
-               u32 *rko = rki + kwords;
-
-               rko[0] = ror32(aes_sub(rki[kwords - 1]), 8) ^ rcon[i] ^ rki[0];
-               rko[1] = rko[0] ^ rki[1];
-               rko[2] = rko[1] ^ rki[2];
-               rko[3] = rko[2] ^ rki[3];
-
-               if (key_len == AES_KEYSIZE_192) {
-                       if (i >= 7)
-                               break;
-                       rko[4] = rko[3] ^ rki[4];
-                       rko[5] = rko[4] ^ rki[5];
-               } else if (key_len == AES_KEYSIZE_256) {
-                       if (i >= 6)
-                               break;
-                       rko[4] = aes_sub(rko[3]) ^ rki[4];
-                       rko[5] = rko[4] ^ rki[5];
-                       rko[6] = rko[5] ^ rki[6];
-                       rko[7] = rko[6] ^ rki[7];
-               }
-       }
-
-       /*
-        * Generate the decryption keys for the Equivalent Inverse Cipher.
-        * This involves reversing the order of the round keys, and applying
-        * the Inverse Mix Columns transformation on all but the first and
-        * the last one.
-        */
-       key_enc = (struct aes_block *)ctx->key_enc;
-       key_dec = (struct aes_block *)ctx->key_dec;
-       j = num_rounds(ctx);
-
-       key_dec[0] = key_enc[j];
-       for (i = 1, j--; j > 0; i++, j--)
-               __asm__("ld1    {v0.4s}, %[in]          ;"
-                       "aesimc v1.16b, v0.16b          ;"
-                       "st1    {v1.4s}, %[out] ;"
-
-               :       [out]   "=Q"(key_dec[i])
-               :       [in]    "Q"(key_enc[j])
-               :               "v0","v1");
-       key_dec[i] = key_enc[0];
-
-       kernel_neon_end();
-       return 0;
-}
-EXPORT_SYMBOL(ce_aes_expandkey);
-
-int ce_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
-                 unsigned int key_len)
-{
-       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-       int ret;
-
-       ret = ce_aes_expandkey(ctx, in_key, key_len);
-       if (!ret)
-               return 0;
-
-       tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
-       return -EINVAL;
-}
-EXPORT_SYMBOL(ce_aes_setkey);
-
-static struct crypto_alg aes_alg = {
-       .cra_name               = "aes",
-       .cra_driver_name        = "aes-ce",
-       .cra_priority           = 250,
-       .cra_flags              = CRYPTO_ALG_TYPE_CIPHER,
-       .cra_blocksize          = AES_BLOCK_SIZE,
-       .cra_ctxsize            = sizeof(struct crypto_aes_ctx),
-       .cra_module             = THIS_MODULE,
-       .cra_cipher = {
-               .cia_min_keysize        = AES_MIN_KEY_SIZE,
-               .cia_max_keysize        = AES_MAX_KEY_SIZE,
-               .cia_setkey             = ce_aes_setkey,
-               .cia_encrypt            = aes_cipher_encrypt,
-               .cia_decrypt            = aes_cipher_decrypt
-       }
-};
-
-static int __init aes_mod_init(void)
-{
-       return crypto_register_alg(&aes_alg);
-}
-
-static void __exit aes_mod_exit(void)
-{
-       crypto_unregister_alg(&aes_alg);
-}
-
-module_cpu_feature_match(AES, aes_mod_init);
-module_exit(aes_mod_exit);
diff --git a/arch/arm64/crypto/aes-ce-core.S b/arch/arm64/crypto/aes-ce-core.S
new file mode 100644 (file)
index 0000000..8efdfda
--- /dev/null
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+       .arch           armv8-a+crypto
+
+ENTRY(__aes_ce_encrypt)
+       sub             w3, w3, #2
+       ld1             {v0.16b}, [x2]
+       ld1             {v1.4s}, [x0], #16
+       cmp             w3, #10
+       bmi             0f
+       bne             3f
+       mov             v3.16b, v1.16b
+       b               2f
+0:     mov             v2.16b, v1.16b
+       ld1             {v3.4s}, [x0], #16
+1:     aese            v0.16b, v2.16b
+       aesmc           v0.16b, v0.16b
+2:     ld1             {v1.4s}, [x0], #16
+       aese            v0.16b, v3.16b
+       aesmc           v0.16b, v0.16b
+3:     ld1             {v2.4s}, [x0], #16
+       subs            w3, w3, #3
+       aese            v0.16b, v1.16b
+       aesmc           v0.16b, v0.16b
+       ld1             {v3.4s}, [x0], #16
+       bpl             1b
+       aese            v0.16b, v2.16b
+       eor             v0.16b, v0.16b, v3.16b
+       st1             {v0.16b}, [x1]
+       ret
+ENDPROC(__aes_ce_encrypt)
+
+ENTRY(__aes_ce_decrypt)
+       sub             w3, w3, #2
+       ld1             {v0.16b}, [x2]
+       ld1             {v1.4s}, [x0], #16
+       cmp             w3, #10
+       bmi             0f
+       bne             3f
+       mov             v3.16b, v1.16b
+       b               2f
+0:     mov             v2.16b, v1.16b
+       ld1             {v3.4s}, [x0], #16
+1:     aesd            v0.16b, v2.16b
+       aesimc          v0.16b, v0.16b
+2:     ld1             {v1.4s}, [x0], #16
+       aesd            v0.16b, v3.16b
+       aesimc          v0.16b, v0.16b
+3:     ld1             {v2.4s}, [x0], #16
+       subs            w3, w3, #3
+       aesd            v0.16b, v1.16b
+       aesimc          v0.16b, v0.16b
+       ld1             {v3.4s}, [x0], #16
+       bpl             1b
+       aesd            v0.16b, v2.16b
+       eor             v0.16b, v0.16b, v3.16b
+       st1             {v0.16b}, [x1]
+       ret
+ENDPROC(__aes_ce_decrypt)
+
+/*
+ * __aes_ce_sub() - use the aese instruction to perform the AES sbox
+ *                  substitution on each byte in 'input'
+ */
+ENTRY(__aes_ce_sub)
+       dup             v1.4s, w0
+       movi            v0.16b, #0
+       aese            v0.16b, v1.16b
+       umov            w0, v0.s[0]
+       ret
+ENDPROC(__aes_ce_sub)
+
+ENTRY(__aes_ce_invert)
+       ld1             {v0.4s}, [x1]
+       aesimc          v1.16b, v0.16b
+       st1             {v1.4s}, [x0]
+       ret
+ENDPROC(__aes_ce_invert)
diff --git a/arch/arm64/crypto/aes-ce-glue.c b/arch/arm64/crypto/aes-ce-glue.c
new file mode 100644 (file)
index 0000000..e6b3227
--- /dev/null
@@ -0,0 +1,190 @@
+/*
+ * aes-ce-cipher.c - core AES cipher using ARMv8 Crypto Extensions
+ *
+ * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <asm/neon.h>
+#include <asm/simd.h>
+#include <asm/unaligned.h>
+#include <crypto/aes.h>
+#include <linux/cpufeature.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+#include "aes-ce-setkey.h"
+
+MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL v2");
+
+asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+
+struct aes_block {
+       u8 b[AES_BLOCK_SIZE];
+};
+
+asmlinkage void __aes_ce_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+asmlinkage void __aes_ce_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+
+asmlinkage u32 __aes_ce_sub(u32 l);
+asmlinkage void __aes_ce_invert(struct aes_block *out,
+                               const struct aes_block *in);
+
+static int num_rounds(struct crypto_aes_ctx *ctx)
+{
+       /*
+        * # of rounds specified by AES:
+        * 128 bit key          10 rounds
+        * 192 bit key          12 rounds
+        * 256 bit key          14 rounds
+        * => n byte key        => 6 + (n/4) rounds
+        */
+       return 6 + ctx->key_length / 4;
+}
+
+static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+       if (!may_use_simd()) {
+               __aes_arm64_encrypt(ctx->key_enc, dst, src, num_rounds(ctx));
+               return;
+       }
+
+       kernel_neon_begin();
+       __aes_ce_encrypt(ctx->key_enc, dst, src, num_rounds(ctx));
+       kernel_neon_end();
+}
+
+static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+       if (!may_use_simd()) {
+               __aes_arm64_decrypt(ctx->key_dec, dst, src, num_rounds(ctx));
+               return;
+       }
+
+       kernel_neon_begin();
+       __aes_ce_decrypt(ctx->key_dec, dst, src, num_rounds(ctx));
+       kernel_neon_end();
+}
+
+int ce_aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key,
+                    unsigned int key_len)
+{
+       /*
+        * The AES key schedule round constants
+        */
+       static u8 const rcon[] = {
+               0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36,
+       };
+
+       u32 kwords = key_len / sizeof(u32);
+       struct aes_block *key_enc, *key_dec;
+       int i, j;
+
+       if (key_len != AES_KEYSIZE_128 &&
+           key_len != AES_KEYSIZE_192 &&
+           key_len != AES_KEYSIZE_256)
+               return -EINVAL;
+
+       ctx->key_length = key_len;
+       for (i = 0; i < kwords; i++)
+               ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32));
+
+       kernel_neon_begin();
+       for (i = 0; i < sizeof(rcon); i++) {
+               u32 *rki = ctx->key_enc + (i * kwords);
+               u32 *rko = rki + kwords;
+
+               rko[0] = ror32(__aes_ce_sub(rki[kwords - 1]), 8) ^ rcon[i] ^ rki[0];
+               rko[1] = rko[0] ^ rki[1];
+               rko[2] = rko[1] ^ rki[2];
+               rko[3] = rko[2] ^ rki[3];
+
+               if (key_len == AES_KEYSIZE_192) {
+                       if (i >= 7)
+                               break;
+                       rko[4] = rko[3] ^ rki[4];
+                       rko[5] = rko[4] ^ rki[5];
+               } else if (key_len == AES_KEYSIZE_256) {
+                       if (i >= 6)
+                               break;
+                       rko[4] = __aes_ce_sub(rko[3]) ^ rki[4];
+                       rko[5] = rko[4] ^ rki[5];
+                       rko[6] = rko[5] ^ rki[6];
+                       rko[7] = rko[6] ^ rki[7];
+               }
+       }
+
+       /*
+        * Generate the decryption keys for the Equivalent Inverse Cipher.
+        * This involves reversing the order of the round keys, and applying
+        * the Inverse Mix Columns transformation on all but the first and
+        * the last one.
+        */
+       key_enc = (struct aes_block *)ctx->key_enc;
+       key_dec = (struct aes_block *)ctx->key_dec;
+       j = num_rounds(ctx);
+
+       key_dec[0] = key_enc[j];
+       for (i = 1, j--; j > 0; i++, j--)
+               __aes_ce_invert(key_dec + i, key_enc + j);
+       key_dec[i] = key_enc[0];
+
+       kernel_neon_end();
+       return 0;
+}
+EXPORT_SYMBOL(ce_aes_expandkey);
+
+int ce_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+                 unsigned int key_len)
+{
+       struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+       int ret;
+
+       ret = ce_aes_expandkey(ctx, in_key, key_len);
+       if (!ret)
+               return 0;
+
+       tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+       return -EINVAL;
+}
+EXPORT_SYMBOL(ce_aes_setkey);
+
+static struct crypto_alg aes_alg = {
+       .cra_name               = "aes",
+       .cra_driver_name        = "aes-ce",
+       .cra_priority           = 250,
+       .cra_flags              = CRYPTO_ALG_TYPE_CIPHER,
+       .cra_blocksize          = AES_BLOCK_SIZE,
+       .cra_ctxsize            = sizeof(struct crypto_aes_ctx),
+       .cra_module             = THIS_MODULE,
+       .cra_cipher = {
+               .cia_min_keysize        = AES_MIN_KEY_SIZE,
+               .cia_max_keysize        = AES_MAX_KEY_SIZE,
+               .cia_setkey             = ce_aes_setkey,
+               .cia_encrypt            = aes_cipher_encrypt,
+               .cia_decrypt            = aes_cipher_decrypt
+       }
+};
+
+static int __init aes_mod_init(void)
+{
+       return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_mod_exit(void)
+{
+       crypto_unregister_alg(&aes_alg);
+}
+
+module_cpu_feature_match(AES, aes_mod_init);
+module_exit(aes_mod_exit);
index 55520cec8b27d69727092e6fd81d3a0c0f4db252..6cf0e4cb7b9763a7d4d10438017a73aac737720b 100644 (file)
@@ -37,7 +37,13 @@ struct cpu_signature {
 
 struct device;
 
-enum ucode_state { UCODE_ERROR, UCODE_OK, UCODE_NFOUND };
+enum ucode_state {
+       UCODE_OK        = 0,
+       UCODE_NEW,
+       UCODE_UPDATED,
+       UCODE_NFOUND,
+       UCODE_ERROR,
+};
 
 struct microcode_ops {
        enum ucode_state (*request_microcode_user) (int cpu,
@@ -54,7 +60,7 @@ struct microcode_ops {
         * are being called.
         * See also the "Synchronization" section in microcode_core.c.
         */
-       int (*apply_microcode) (int cpu);
+       enum ucode_state (*apply_microcode) (int cpu);
        int (*collect_cpu_info) (int cpu, struct cpu_signature *csig);
 };
 
index 15fc074bd6281378d1905af1afcaaab02a29d569..3222c7746cb1f857b686dbbdf5d06c475cf6aaad 100644 (file)
@@ -968,4 +968,5 @@ bool xen_set_default_idle(void);
 
 void stop_this_cpu(void *dummy);
 void df_debug(struct pt_regs *regs, long error_code);
+void microcode_check(void);
 #endif /* _ASM_X86_PROCESSOR_H */
index f5d92bc3b8844422628b8e6d741be34d727aa45b..2c4d5ece74565f10330b4121af72f33622f820bc 100644 (file)
@@ -30,6 +30,7 @@
 #include <asm/dma.h>
 #include <asm/amd_nb.h>
 #include <asm/x86_init.h>
+#include <linux/crash_dump.h>
 
 /*
  * Using 512M as goal, in case kexec will load kernel_big
@@ -56,6 +57,33 @@ int fallback_aper_force __initdata;
 
 int fix_aperture __initdata = 1;
 
+#ifdef CONFIG_PROC_VMCORE
+/*
+ * If the first kernel maps the aperture over e820 RAM, the kdump kernel will
+ * use the same range because it will remain configured in the northbridge.
+ * Trying to dump this area via /proc/vmcore may crash the machine, so exclude
+ * it from vmcore.
+ */
+static unsigned long aperture_pfn_start, aperture_page_count;
+
+static int gart_oldmem_pfn_is_ram(unsigned long pfn)
+{
+       return likely((pfn < aperture_pfn_start) ||
+                     (pfn >= aperture_pfn_start + aperture_page_count));
+}
+
+static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
+{
+       aperture_pfn_start = aper_base >> PAGE_SHIFT;
+       aperture_page_count = (32 * 1024 * 1024) << aper_order >> PAGE_SHIFT;
+       WARN_ON(register_oldmem_pfn_is_ram(&gart_oldmem_pfn_is_ram));
+}
+#else
+static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
+{
+}
+#endif
+
 /* This code runs before the PCI subsystem is initialized, so just
    access the northbridge directly. */
 
@@ -435,8 +463,16 @@ int __init gart_iommu_hole_init(void)
 
 out:
        if (!fix && !fallback_aper_force) {
-               if (last_aper_base)
+               if (last_aper_base) {
+                       /*
+                        * If this is the kdump kernel, the first kernel
+                        * may have allocated the range over its e820 RAM
+                        * and fixed up the northbridge
+                        */
+                       exclude_from_vmcore(last_aper_base, last_aper_order);
+
                        return 1;
+               }
                return 0;
        }
 
@@ -473,6 +509,14 @@ out:
                return 0;
        }
 
+       /*
+        * If this is the kdump kernel _and_ the first kernel did not
+        * configure the aperture in the northbridge, this range may
+        * overlap with the first kernel's memory. We can't access the
+        * range through vmcore even though it should be part of the dump.
+        */
+       exclude_from_vmcore(aper_alloc, aper_order);
+
        /* Fix up the north bridges */
        for (i = 0; i < amd_nb_bus_dev_ranges[i].dev_limit; i++) {
                int bus, dev_base, dev_limit;
index 651b7afed4dada9255fdf587fd312acc95d62358..cf6380200dc29f897d1e6317d3445fda909ee95c 100644 (file)
@@ -1724,3 +1724,33 @@ static int __init init_cpu_syscore(void)
        return 0;
 }
 core_initcall(init_cpu_syscore);
+
+/*
+ * The microcode loader calls this upon late microcode load to recheck features,
+ * only when microcode has been updated. Caller holds microcode_mutex and CPU
+ * hotplug lock.
+ */
+void microcode_check(void)
+{
+       struct cpuinfo_x86 info;
+
+       perf_check_microcode();
+
+       /* Reload CPUID max function as it might've changed. */
+       info.cpuid_level = cpuid_eax(0);
+
+       /*
+        * Copy all capability leafs to pick up the synthetic ones so that
+        * memcmp() below doesn't fail on that. The ones coming from CPUID will
+        * get overwritten in get_cpu_cap().
+        */
+       memcpy(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability));
+
+       get_cpu_cap(&info);
+
+       if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability)))
+               return;
+
+       pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
+       pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
+}
index 330b8462d426faad0dccdc480f34eec34cd8b92f..48179928ff38cf12476ce27cd096383aee1feb93 100644 (file)
@@ -339,7 +339,7 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
                return -EINVAL;
 
        ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
-       if (ret != UCODE_OK)
+       if (ret > UCODE_UPDATED)
                return -EINVAL;
 
        return 0;
@@ -498,7 +498,7 @@ static unsigned int verify_patch_size(u8 family, u32 patch_size,
        return patch_size;
 }
 
-static int apply_microcode_amd(int cpu)
+static enum ucode_state apply_microcode_amd(int cpu)
 {
        struct cpuinfo_x86 *c = &cpu_data(cpu);
        struct microcode_amd *mc_amd;
@@ -512,7 +512,7 @@ static int apply_microcode_amd(int cpu)
 
        p = find_patch(cpu);
        if (!p)
-               return 0;
+               return UCODE_NFOUND;
 
        mc_amd  = p->data;
        uci->mc = p->data;
@@ -523,13 +523,13 @@ static int apply_microcode_amd(int cpu)
        if (rev >= mc_amd->hdr.patch_id) {
                c->microcode = rev;
                uci->cpu_sig.rev = rev;
-               return 0;
+               return UCODE_OK;
        }
 
        if (__apply_microcode_amd(mc_amd)) {
                pr_err("CPU%d: update failed for patch_level=0x%08x\n",
                        cpu, mc_amd->hdr.patch_id);
-               return -1;
+               return UCODE_ERROR;
        }
        pr_info("CPU%d: new patch_level=0x%08x\n", cpu,
                mc_amd->hdr.patch_id);
@@ -537,7 +537,7 @@ static int apply_microcode_amd(int cpu)
        uci->cpu_sig.rev = mc_amd->hdr.patch_id;
        c->microcode = mc_amd->hdr.patch_id;
 
-       return 0;
+       return UCODE_UPDATED;
 }
 
 static int install_equiv_cpu_table(const u8 *buf)
@@ -683,27 +683,35 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
 static enum ucode_state
 load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
 {
+       struct ucode_patch *p;
        enum ucode_state ret;
 
        /* free old equiv table */
        free_equiv_cpu_table();
 
        ret = __load_microcode_amd(family, data, size);
-
-       if (ret != UCODE_OK)
+       if (ret != UCODE_OK) {
                cleanup();
+               return ret;
+       }
 
-#ifdef CONFIG_X86_32
-       /* save BSP's matching patch for early load */
-       if (save) {
-               struct ucode_patch *p = find_patch(0);
-               if (p) {
-                       memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
-                       memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data),
-                                                              PATCH_MAX_SIZE));
-               }
+       p = find_patch(0);
+       if (!p) {
+               return ret;
+       } else {
+               if (boot_cpu_data.microcode == p->patch_id)
+                       return ret;
+
+               ret = UCODE_NEW;
        }
-#endif
+
+       /* save BSP's matching patch for early load */
+       if (!save)
+               return ret;
+
+       memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+       memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
+
        return ret;
 }
 
index e4fc595cd6eaa6ee9b51a380e62fa05270c184e1..021c90464cc23eaba8997e93559f863f2afebc62 100644 (file)
 #define pr_fmt(fmt) "microcode: " fmt
 
 #include <linux/platform_device.h>
+#include <linux/stop_machine.h>
 #include <linux/syscore_ops.h>
 #include <linux/miscdevice.h>
 #include <linux/capability.h>
 #include <linux/firmware.h>
 #include <linux/kernel.h>
+#include <linux/delay.h>
 #include <linux/mutex.h>
 #include <linux/cpu.h>
+#include <linux/nmi.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
 
@@ -64,6 +67,11 @@ LIST_HEAD(microcode_cache);
  */
 static DEFINE_MUTEX(microcode_mutex);
 
+/*
+ * Serialize late loading so that CPUs get updated one-by-one.
+ */
+static DEFINE_SPINLOCK(update_lock);
+
 struct ucode_cpu_info          ucode_cpu_info[NR_CPUS];
 
 struct cpu_info_ctx {
@@ -373,26 +381,23 @@ static int collect_cpu_info(int cpu)
        return ret;
 }
 
-struct apply_microcode_ctx {
-       int err;
-};
-
 static void apply_microcode_local(void *arg)
 {
-       struct apply_microcode_ctx *ctx = arg;
+       enum ucode_state *err = arg;
 
-       ctx->err = microcode_ops->apply_microcode(smp_processor_id());
+       *err = microcode_ops->apply_microcode(smp_processor_id());
 }
 
 static int apply_microcode_on_target(int cpu)
 {
-       struct apply_microcode_ctx ctx = { .err = 0 };
+       enum ucode_state err;
        int ret;
 
-       ret = smp_call_function_single(cpu, apply_microcode_local, &ctx, 1);
-       if (!ret)
-               ret = ctx.err;
-
+       ret = smp_call_function_single(cpu, apply_microcode_local, &err, 1);
+       if (!ret) {
+               if (err == UCODE_ERROR)
+                       ret = 1;
+       }
        return ret;
 }
 
@@ -489,31 +494,124 @@ static void __exit microcode_dev_exit(void)
 /* fake device for request_firmware */
 static struct platform_device  *microcode_pdev;
 
-static int reload_for_cpu(int cpu)
+/*
+ * Late loading dance. Why the heavy-handed stomp_machine effort?
+ *
+ * - HT siblings must be idle and not execute other code while the other sibling
+ *   is loading microcode in order to avoid any negative interactions caused by
+ *   the loading.
+ *
+ * - In addition, microcode update on the cores must be serialized until this
+ *   requirement can be relaxed in the future. Right now, this is conservative
+ *   and good.
+ */
+#define SPINUNIT 100 /* 100 nsec */
+
+static int check_online_cpus(void)
 {
-       struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-       enum ucode_state ustate;
-       int err = 0;
+       if (num_online_cpus() == num_present_cpus())
+               return 0;
 
-       if (!uci->valid)
-               return err;
+       pr_err("Not all CPUs online, aborting microcode update.\n");
 
-       ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, true);
-       if (ustate == UCODE_OK)
-               apply_microcode_on_target(cpu);
-       else
-               if (ustate == UCODE_ERROR)
-                       err = -EINVAL;
-       return err;
+       return -EINVAL;
+}
+
+static atomic_t late_cpus_in;
+static atomic_t late_cpus_out;
+
+static int __wait_for_cpus(atomic_t *t, long long timeout)
+{
+       int all_cpus = num_online_cpus();
+
+       atomic_inc(t);
+
+       while (atomic_read(t) < all_cpus) {
+               if (timeout < SPINUNIT) {
+                       pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
+                               all_cpus - atomic_read(t));
+                       return 1;
+               }
+
+               ndelay(SPINUNIT);
+               timeout -= SPINUNIT;
+
+               touch_nmi_watchdog();
+       }
+       return 0;
+}
+
+/*
+ * Returns:
+ * < 0 - on error
+ *   0 - no update done
+ *   1 - microcode was updated
+ */
+static int __reload_late(void *info)
+{
+       int cpu = smp_processor_id();
+       enum ucode_state err;
+       int ret = 0;
+
+       /*
+        * Wait for all CPUs to arrive. A load will not be attempted unless all
+        * CPUs show up.
+        */
+       if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+               return -1;
+
+       spin_lock(&update_lock);
+       apply_microcode_local(&err);
+       spin_unlock(&update_lock);
+
+       if (err > UCODE_NFOUND) {
+               pr_warn("Error reloading microcode on CPU %d\n", cpu);
+               return -1;
+       /* siblings return UCODE_OK because their engine got updated already */
+       } else if (err == UCODE_UPDATED || err == UCODE_OK) {
+               ret = 1;
+       } else {
+               return ret;
+       }
+
+       /*
+        * Increase the wait timeout to a safe value here since we're
+        * serializing the microcode update and that could take a while on a
+        * large number of CPUs. And that is fine as the *actual* timeout will
+        * be determined by the last CPU finished updating and thus cut short.
+        */
+       if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC * num_online_cpus()))
+               panic("Timeout during microcode update!\n");
+
+       return ret;
+}
+
+/*
+ * Reload microcode late on all CPUs. Wait for a sec until they
+ * all gather together.
+ */
+static int microcode_reload_late(void)
+{
+       int ret;
+
+       atomic_set(&late_cpus_in,  0);
+       atomic_set(&late_cpus_out, 0);
+
+       ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+       if (ret > 0)
+               microcode_check();
+
+       return ret;
 }
 
 static ssize_t reload_store(struct device *dev,
                            struct device_attribute *attr,
                            const char *buf, size_t size)
 {
+       enum ucode_state tmp_ret = UCODE_OK;
+       int bsp = boot_cpu_data.cpu_index;
        unsigned long val;
-       int cpu;
-       ssize_t ret = 0, tmp_ret;
+       ssize_t ret = 0;
 
        ret = kstrtoul(buf, 0, &val);
        if (ret)
@@ -522,23 +620,24 @@ static ssize_t reload_store(struct device *dev,
        if (val != 1)
                return size;
 
+       tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
+       if (tmp_ret != UCODE_NEW)
+               return size;
+
        get_online_cpus();
-       mutex_lock(&microcode_mutex);
-       for_each_online_cpu(cpu) {
-               tmp_ret = reload_for_cpu(cpu);
-               if (tmp_ret != 0)
-                       pr_warn("Error reloading microcode on CPU %d\n", cpu);
 
-               /* save retval of the first encountered reload error */
-               if (!ret)
-                       ret = tmp_ret;
-       }
-       if (!ret)
-               perf_check_microcode();
+       ret = check_online_cpus();
+       if (ret)
+               goto put;
+
+       mutex_lock(&microcode_mutex);
+       ret = microcode_reload_late();
        mutex_unlock(&microcode_mutex);
+
+put:
        put_online_cpus();
 
-       if (!ret)
+       if (ret >= 0)
                ret = size;
 
        return ret;
@@ -606,10 +705,8 @@ static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw)
        if (system_state != SYSTEM_RUNNING)
                return UCODE_NFOUND;
 
-       ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev,
-                                                    refresh_fw);
-
-       if (ustate == UCODE_OK) {
+       ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, refresh_fw);
+       if (ustate == UCODE_NEW) {
                pr_debug("CPU%d updated upon init\n", cpu);
                apply_microcode_on_target(cpu);
        }
index a15db2b4e0d66a8b5c4d2468359eeafb85401151..32b8e5724f966abbc67153065dd17b5ddcfd6d70 100644 (file)
@@ -589,6 +589,23 @@ static int apply_microcode_early(struct ucode_cpu_info *uci, bool early)
        if (!mc)
                return 0;
 
+       /*
+        * Save us the MSR write below - which is a particularly expensive
+        * operation - when the other hyperthread has updated the microcode
+        * already.
+        */
+       rev = intel_get_microcode_revision();
+       if (rev >= mc->hdr.rev) {
+               uci->cpu_sig.rev = rev;
+               return UCODE_OK;
+       }
+
+       /*
+        * Writeback and invalidate caches before updating microcode to avoid
+        * internal issues depending on what the microcode is updating.
+        */
+       native_wbinvd();
+
        /* write microcode via MSR 0x79 */
        native_wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
 
@@ -772,27 +789,44 @@ static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
        return 0;
 }
 
-static int apply_microcode_intel(int cpu)
+static enum ucode_state apply_microcode_intel(int cpu)
 {
+       struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+       struct cpuinfo_x86 *c = &cpu_data(cpu);
        struct microcode_intel *mc;
-       struct ucode_cpu_info *uci;
-       struct cpuinfo_x86 *c;
        static int prev_rev;
        u32 rev;
 
        /* We should bind the task to the CPU */
        if (WARN_ON(raw_smp_processor_id() != cpu))
-               return -1;
+               return UCODE_ERROR;
 
-       uci = ucode_cpu_info + cpu;
-       mc = uci->mc;
+       /* Look for a newer patch in our cache: */
+       mc = find_patch(uci);
        if (!mc) {
-               /* Look for a newer patch in our cache: */
-               mc = find_patch(uci);
+               mc = uci->mc;
                if (!mc)
-                       return 0;
+                       return UCODE_NFOUND;
        }
 
+       /*
+        * Save us the MSR write below - which is a particularly expensive
+        * operation - when the other hyperthread has updated the microcode
+        * already.
+        */
+       rev = intel_get_microcode_revision();
+       if (rev >= mc->hdr.rev) {
+               uci->cpu_sig.rev = rev;
+               c->microcode = rev;
+               return UCODE_OK;
+       }
+
+       /*
+        * Writeback and invalidate caches before updating microcode to avoid
+        * internal issues depending on what the microcode is updating.
+        */
+       native_wbinvd();
+
        /* write microcode via MSR 0x79 */
        wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
 
@@ -801,7 +835,7 @@ static int apply_microcode_intel(int cpu)
        if (rev != mc->hdr.rev) {
                pr_err("CPU%d update to revision 0x%x failed\n",
                       cpu, mc->hdr.rev);
-               return -1;
+               return UCODE_ERROR;
        }
 
        if (rev != prev_rev) {
@@ -813,12 +847,10 @@ static int apply_microcode_intel(int cpu)
                prev_rev = rev;
        }
 
-       c = &cpu_data(cpu);
-
        uci->cpu_sig.rev = rev;
        c->microcode = rev;
 
-       return 0;
+       return UCODE_UPDATED;
 }
 
 static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
@@ -830,6 +862,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
        unsigned int leftover = size;
        unsigned int curr_mc_size = 0, new_mc_size = 0;
        unsigned int csig, cpf;
+       enum ucode_state ret = UCODE_OK;
 
        while (leftover) {
                struct microcode_header_intel mc_header;
@@ -871,6 +904,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
                        new_mc  = mc;
                        new_mc_size = mc_size;
                        mc = NULL;      /* trigger new vmalloc */
+                       ret = UCODE_NEW;
                }
 
                ucode_ptr += mc_size;
@@ -900,7 +934,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
        pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
                 cpu, new_rev, uci->cpu_sig.rev);
 
-       return UCODE_OK;
+       return ret;
 }
 
 static int get_ucode_fw(void *to, const void *from, size_t n)
index 2cfcfe4f6b2a054e52868f6fa49e35caf3c2fe54..dd2ad82eee80dfea0ef21c07ae4182694bd71b69 100644 (file)
@@ -75,6 +75,6 @@ void __init xen_hvm_init_mmu_ops(void)
        if (is_pagetable_dying_supported())
                pv_mmu_ops.exit_mmap = xen_hvm_exit_mmap;
 #ifdef CONFIG_PROC_VMCORE
-       register_oldmem_pfn_is_ram(&xen_oldmem_pfn_is_ram);
+       WARN_ON(register_oldmem_pfn_is_ram(&xen_oldmem_pfn_is_ram));
 #endif
 }
index ceefb9a706d64c3a520c80fb39bc8265b1c144c0..5d53e504acae58d02fa3d8c0b5cef4b625e758aa 100644 (file)
@@ -749,10 +749,11 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
        unsigned long flags;
        int i;
 
+       spin_lock_irqsave(&bfqd->lock, flags);
+
        if (!entity) /* root group */
-               return;
+               goto put_async_queues;
 
-       spin_lock_irqsave(&bfqd->lock, flags);
        /*
         * Empty all service_trees belonging to this group before
         * deactivating the group itself.
@@ -783,6 +784,8 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
        }
 
        __bfq_deactivate_entity(entity, false);
+
+put_async_queues:
        bfq_put_async_queues(bfqd, bfqg);
 
        spin_unlock_irqrestore(&bfqd->lock, flags);
index f1fb126a3be5830bcbe479fa3f9bd811722e0d0a..6f899669cbdd05b6af61ffad6becbea7fcfae094 100644 (file)
@@ -1928,7 +1928,8 @@ static void blk_mq_exit_hctx(struct request_queue *q,
 {
        blk_mq_debugfs_unregister_hctx(hctx);
 
-       blk_mq_tag_idle(hctx);
+       if (blk_mq_hw_queue_mapped(hctx))
+               blk_mq_tag_idle(hctx);
 
        if (set->ops->exit_request)
                set->ops->exit_request(set, hctx->fq->flush_rq, hctx_idx);
@@ -2314,6 +2315,9 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
        struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx;
 
        blk_mq_sysfs_unregister(q);
+
+       /* protect against switching io scheduler  */
+       mutex_lock(&q->sysfs_lock);
        for (i = 0; i < set->nr_hw_queues; i++) {
                int node;
 
@@ -2358,6 +2362,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
                }
        }
        q->nr_hw_queues = i;
+       mutex_unlock(&q->sysfs_lock);
        blk_mq_sysfs_register(q);
 }
 
@@ -2528,9 +2533,27 @@ static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
 
 static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
 {
-       if (set->ops->map_queues)
+       if (set->ops->map_queues) {
+               int cpu;
+               /*
+                * transport .map_queues is usually done in the following
+                * way:
+                *
+                * for (queue = 0; queue < set->nr_hw_queues; queue++) {
+                *      mask = get_cpu_mask(queue)
+                *      for_each_cpu(cpu, mask)
+                *              set->mq_map[cpu] = queue;
+                * }
+                *
+                * When we need to remap, the table has to be cleared for
+                * killing stale mapping since one CPU may not be mapped
+                * to any hw queue.
+                */
+               for_each_possible_cpu(cpu)
+                       set->mq_map[cpu] = 0;
+
                return set->ops->map_queues(set);
-       else
+       } else
                return blk_mq_map_queues(set);
 }
 
index d953cc1d57c4d50e7ab1ef48e15eb66c41a7b3e7..2ae78fb2a39016dc11124c3bc2ea175ac6442e1f 100644 (file)
@@ -98,6 +98,7 @@ obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
 obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o
 CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure)  # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
 obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
+CFLAGS_aes_generic.o := $(call cc-ifversion, -ge, 0701, -Os) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
 obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o
 obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o
 obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o
index 0972ec0e2eb8cb867868401dcf1af2446a7f9696..f53ccc68023819ad0c33e6f6728ffc5ffe4ad232 100644 (file)
@@ -80,8 +80,8 @@ MODULE_PARM_DESC(report_key_events,
 static bool device_id_scheme = false;
 module_param(device_id_scheme, bool, 0444);
 
-static bool only_lcd = false;
-module_param(only_lcd, bool, 0444);
+static int only_lcd = -1;
+module_param(only_lcd, int, 0444);
 
 static int register_count;
 static DEFINE_MUTEX(register_count_mutex);
@@ -2136,6 +2136,16 @@ int acpi_video_register(void)
                goto leave;
        }
 
+       /*
+        * We're seeing a lot of bogus backlight interfaces on newer machines
+        * without an LCD such as desktops, servers and HDMI sticks. Checking
+        * the lcd flag fixes this, so enable this on any machines which are
+        * win8 ready (where we also prefer the native backlight driver, so
+        * normally the acpi_video code should not register there anyways).
+        */
+       if (only_lcd == -1)
+               only_lcd = acpi_osi_is_win8();
+
        dmi_check_system(video_dmi_table);
 
        ret = acpi_bus_register_driver(&acpi_video_bus);
index df842465634a9de096a7a21f2bf74f85528bfc65..6adcda057b361fd0070ffdbddb2b327be4ca82a6 100644 (file)
@@ -1516,7 +1516,7 @@ static int acpi_ec_setup(struct acpi_ec *ec, bool handle_events)
        }
 
        acpi_handle_info(ec->handle,
-                        "GPE=0x%lx, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
+                        "GPE=0x%x, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
                         ec->gpe, ec->command_addr, ec->data_addr);
        return ret;
 }
index 6c7dd7af789e453ce3d16ae80b849ef38bc4a3c0..dd70d6c2bca03eb227fe834fa14e33636f842d48 100644 (file)
@@ -128,7 +128,7 @@ static int acpi_ec_add_debugfs(struct acpi_ec *ec, unsigned int ec_device_count)
                return -ENOMEM;
        }
 
-       if (!debugfs_create_x32("gpe", 0444, dev_dir, (u32 *)&first_ec->gpe))
+       if (!debugfs_create_x32("gpe", 0444, dev_dir, &first_ec->gpe))
                goto error;
        if (!debugfs_create_bool("use_global_lock", 0444, dev_dir,
                                 &first_ec->global_lock))
index ede83d38beed50b7889e86c2d046641f158ae641..2cd2ae152ab73350fdea678091c22834c11ad722 100644 (file)
@@ -159,7 +159,7 @@ static inline void acpi_early_processor_osc(void) {}
    -------------------------------------------------------------------------- */
 struct acpi_ec {
        acpi_handle handle;
-       unsigned long gpe;
+       u32 gpe;
        unsigned long command_addr;
        unsigned long data_addr;
        bool global_lock;
index b2c0306f97eda77325f13431678ccc45037e248b..e9dff868c028022a423f9bb16fc76b1e36990247 100644 (file)
@@ -277,6 +277,7 @@ static const struct usb_device_id blacklist_table[] = {
        { USB_DEVICE(0x0489, 0xe09f), .driver_info = BTUSB_QCA_ROME },
        { USB_DEVICE(0x0489, 0xe0a2), .driver_info = BTUSB_QCA_ROME },
        { USB_DEVICE(0x04ca, 0x3011), .driver_info = BTUSB_QCA_ROME },
+       { USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
        { USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
 
        /* Broadcom BCM2035 */
index 5294442505cb5c7c2dcb34303596fd9a8236618c..0f1dc35e7078c2d4a3f7acffb207563a34f7f443 100644 (file)
@@ -328,7 +328,7 @@ unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
 }
 EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
 
-static bool tpm_validate_command(struct tpm_chip *chip,
+static int tpm_validate_command(struct tpm_chip *chip,
                                 struct tpm_space *space,
                                 const u8 *cmd,
                                 size_t len)
@@ -340,10 +340,10 @@ static bool tpm_validate_command(struct tpm_chip *chip,
        unsigned int nr_handles;
 
        if (len < TPM_HEADER_SIZE)
-               return false;
+               return -EINVAL;
 
        if (!space)
-               return true;
+               return 0;
 
        if (chip->flags & TPM_CHIP_FLAG_TPM2 && chip->nr_commands) {
                cc = be32_to_cpu(header->ordinal);
@@ -352,7 +352,7 @@ static bool tpm_validate_command(struct tpm_chip *chip,
                if (i < 0) {
                        dev_dbg(&chip->dev, "0x%04X is an invalid command\n",
                                cc);
-                       return false;
+                       return -EOPNOTSUPP;
                }
 
                attrs = chip->cc_attrs_tbl[i];
@@ -362,11 +362,11 @@ static bool tpm_validate_command(struct tpm_chip *chip,
                        goto err_len;
        }
 
-       return true;
+       return 0;
 err_len:
        dev_dbg(&chip->dev,
                "%s: insufficient command length %zu", __func__, len);
-       return false;
+       return -EINVAL;
 }
 
 /**
@@ -391,8 +391,20 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
        unsigned long stop;
        bool need_locality;
 
-       if (!tpm_validate_command(chip, space, buf, bufsiz))
-               return -EINVAL;
+       rc = tpm_validate_command(chip, space, buf, bufsiz);
+       if (rc == -EINVAL)
+               return rc;
+       /*
+        * If the command is not implemented by the TPM, synthesize a
+        * response with a TPM2_RC_COMMAND_CODE return for user-space.
+        */
+       if (rc == -EOPNOTSUPP) {
+               header->length = cpu_to_be32(sizeof(*header));
+               header->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
+               header->return_code = cpu_to_be32(TPM2_RC_COMMAND_CODE |
+                                                 TSS2_RESMGR_TPM_RC_LAYER);
+               return bufsiz;
+       }
 
        if (bufsiz > TPM_BUFSIZE)
                bufsiz = TPM_BUFSIZE;
index 2d5466a72e40f82b3272b857b74a1f822f82b966..0b5b499f726ad2551c359211010043e8e3a46667 100644 (file)
@@ -93,12 +93,17 @@ enum tpm2_structures {
        TPM2_ST_SESSIONS        = 0x8002,
 };
 
+/* Indicates which layer of the software stack the error comes from */
+#define TSS2_RC_LAYER_SHIFT     16
+#define TSS2_RESMGR_TPM_RC_LAYER (11 << TSS2_RC_LAYER_SHIFT)
+
 enum tpm2_return_codes {
        TPM2_RC_SUCCESS         = 0x0000,
        TPM2_RC_HASH            = 0x0083, /* RC_FMT1 */
        TPM2_RC_HANDLE          = 0x008B,
        TPM2_RC_INITIALIZE      = 0x0100, /* RC_VER1 */
        TPM2_RC_DISABLED        = 0x0120,
+       TPM2_RC_COMMAND_CODE    = 0x0143,
        TPM2_RC_TESTING         = 0x090A, /* RC_WARN */
        TPM2_RC_REFERENCE_H0    = 0x0910,
 };
index 4ed516cb72764a18a29f8cd77efcc81aa7087c47..b49942b9fe50f69cb4ed4f549e4d0f0dbd87f6ce 100644 (file)
@@ -118,12 +118,11 @@ static unsigned int _get_val(const struct clk_div_table *table,
 unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate,
                                  unsigned int val,
                                  const struct clk_div_table *table,
-                                 unsigned long flags)
+                                 unsigned long flags, unsigned long width)
 {
-       struct clk_divider *divider = to_clk_divider(hw);
        unsigned int div;
 
-       div = _get_div(table, val, flags, divider->width);
+       div = _get_div(table, val, flags, width);
        if (!div) {
                WARN(!(flags & CLK_DIVIDER_ALLOW_ZERO),
                        "%s: Zero divisor and CLK_DIVIDER_ALLOW_ZERO not set\n",
@@ -145,7 +144,7 @@ static unsigned long clk_divider_recalc_rate(struct clk_hw *hw,
        val &= div_mask(divider->width);
 
        return divider_recalc_rate(hw, parent_rate, val, divider->table,
-                                  divider->flags);
+                                  divider->flags, divider->width);
 }
 
 static bool _is_valid_table_div(const struct clk_div_table *table,
index a1c1f684ad585bbc5d64f3b7954473d442197398..9f46cf9dcc6529ea05e1d685825c17c7dac13339 100644 (file)
@@ -56,7 +56,7 @@ static unsigned long hi6220_clkdiv_recalc_rate(struct clk_hw *hw,
        val &= div_mask(dclk->width);
 
        return divider_recalc_rate(hw, parent_rate, val, dclk->table,
-                                  CLK_DIVIDER_ROUND_CLOSEST);
+                                  CLK_DIVIDER_ROUND_CLOSEST, dclk->width);
 }
 
 static long hi6220_clkdiv_round_rate(struct clk_hw *hw, unsigned long rate,
index 44a5a535ca6334ebe17acf74ca38fea45ab86708..5144360e2c804760720783098288dd3f104e5164 100644 (file)
@@ -98,7 +98,7 @@ static void params_from_rate(unsigned long requested_rate,
                *sdm = SDM_DEN - 1;
        } else {
                *n2 = div;
-               *sdm = DIV_ROUND_UP(rem * SDM_DEN, requested_rate);
+               *sdm = DIV_ROUND_UP_ULL((u64)rem * SDM_DEN, requested_rate);
        }
 }
 
index 7b359afd620ec0ad23bb6bf2e41da28d5fd4e05a..a6438f50e6db94845e53cfc0c4dce70880d749a5 100644 (file)
@@ -956,7 +956,7 @@ static unsigned long clk_divider_recalc_rate(struct clk_hw *hw,
        val &= div_mask(divider->width);
 
        return divider_recalc_rate(hw, parent_rate, val, divider->table,
-                                  divider->flags);
+                                  divider->flags, divider->width);
 }
 
 static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate,
index 53484912301eeed5d0d43012af1e2474d1c40d65..928fcc16ee278d1d3278ef873b1eb5d81619cdbb 100644 (file)
@@ -59,7 +59,7 @@ static unsigned long div_recalc_rate(struct clk_hw *hw,
        div &= BIT(divider->width) - 1;
 
        return divider_recalc_rate(hw, parent_rate, div, NULL,
-                                  CLK_DIVIDER_ROUND_CLOSEST);
+                                  CLK_DIVIDER_ROUND_CLOSEST, divider->width);
 }
 
 const struct clk_ops clk_regmap_div_ops = {
index f8203115a6bcea4dff571a5ed664e2cc12a35b31..c10160d7a556b034223a6923615f857666596165 100644 (file)
@@ -493,8 +493,8 @@ static SUNXI_CCU_MUX_WITH_GATE(tcon0_clk, "tcon0", tcon0_parents,
                                 0x118, 24, 3, BIT(31), CLK_SET_RATE_PARENT);
 
 static const char * const tcon1_parents[] = { "pll-video1" };
-static SUNXI_CCU_MUX_WITH_GATE(tcon1_clk, "tcon1", tcon1_parents,
-                                0x11c, 24, 3, BIT(31), CLK_SET_RATE_PARENT);
+static SUNXI_CCU_M_WITH_MUX_GATE(tcon1_clk, "tcon1", tcon1_parents,
+                                0x11c, 0, 4, 24, 2, BIT(31), CLK_SET_RATE_PARENT);
 
 static SUNXI_CCU_GATE(csi_misc_clk, "csi-misc", "osc24M", 0x130, BIT(16), 0);
 
index baa3cf96507b5285f94768cb12e734076e97cf93..302a18efd39fa568b8a7ef362ff194e77823e617 100644 (file)
@@ -71,7 +71,7 @@ static unsigned long ccu_div_recalc_rate(struct clk_hw *hw,
                                                  parent_rate);
 
        val = divider_recalc_rate(hw, parent_rate, val, cd->div.table,
-                                 cd->div.flags);
+                                 cd->div.flags, cd->div.width);
 
        if (cd->common.features & CCU_FEATURE_FIXED_POSTDIV)
                val /= cd->fixed_post_div;
index 7e1e5bbcf43042252194cf19b19a2fa77ce02e58..6b3a63545619403bf6f270a9a6a9f4a30da52b48 100644 (file)
 #define POWERNV_MAX_PSTATES    256
 #define PMSR_PSAFE_ENABLE      (1UL << 30)
 #define PMSR_SPR_EM_DISABLE    (1UL << 31)
-#define PMSR_MAX(x)            ((x >> 32) & 0xFF)
+#define MAX_PSTATE_SHIFT       32
 #define LPSTATE_SHIFT          48
 #define GPSTATE_SHIFT          56
-#define GET_LPSTATE(x)         (((x) >> LPSTATE_SHIFT) & 0xFF)
-#define GET_GPSTATE(x)         (((x) >> GPSTATE_SHIFT) & 0xFF)
 
 #define MAX_RAMP_DOWN_TIME                             5120
 /*
@@ -93,6 +91,7 @@ struct global_pstate_info {
 };
 
 static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
+u32 pstate_sign_prefix;
 static bool rebooting, throttled, occ_reset;
 
 static const char * const throttle_reason[] = {
@@ -147,6 +146,20 @@ static struct powernv_pstate_info {
        bool wof_enabled;
 } powernv_pstate_info;
 
+static inline int extract_pstate(u64 pmsr_val, unsigned int shift)
+{
+       int ret = ((pmsr_val >> shift) & 0xFF);
+
+       if (!ret)
+               return ret;
+
+       return (pstate_sign_prefix | ret);
+}
+
+#define extract_local_pstate(x) extract_pstate(x, LPSTATE_SHIFT)
+#define extract_global_pstate(x) extract_pstate(x, GPSTATE_SHIFT)
+#define extract_max_pstate(x)  extract_pstate(x, MAX_PSTATE_SHIFT)
+
 /* Use following macros for conversions between pstate_id and index */
 static inline int idx_to_pstate(unsigned int i)
 {
@@ -277,6 +290,9 @@ next:
 
        powernv_pstate_info.nr_pstates = nr_pstates;
        pr_debug("NR PStates %d\n", nr_pstates);
+
+       pstate_sign_prefix = pstate_min & ~0xFF;
+
        for (i = 0; i < nr_pstates; i++) {
                u32 id = be32_to_cpu(pstate_ids[i]);
                u32 freq = be32_to_cpu(pstate_freqs[i]);
@@ -437,17 +453,10 @@ struct powernv_smp_call_data {
 static void powernv_read_cpu_freq(void *arg)
 {
        unsigned long pmspr_val;
-       s8 local_pstate_id;
        struct powernv_smp_call_data *freq_data = arg;
 
        pmspr_val = get_pmspr(SPRN_PMSR);
-
-       /*
-        * The local pstate id corresponds bits 48..55 in the PMSR.
-        * Note: Watch out for the sign!
-        */
-       local_pstate_id = (pmspr_val >> 48) & 0xFF;
-       freq_data->pstate_id = local_pstate_id;
+       freq_data->pstate_id = extract_local_pstate(pmspr_val);
        freq_data->freq = pstate_id_to_freq(freq_data->pstate_id);
 
        pr_debug("cpu %d pmsr %016lX pstate_id %d frequency %d kHz\n",
@@ -521,7 +530,7 @@ static void powernv_cpufreq_throttle_check(void *data)
        chip = this_cpu_read(chip_info);
 
        /* Check for Pmax Capping */
-       pmsr_pmax = (s8)PMSR_MAX(pmsr);
+       pmsr_pmax = extract_max_pstate(pmsr);
        pmsr_pmax_idx = pstate_to_idx(pmsr_pmax);
        if (pmsr_pmax_idx != powernv_pstate_info.max) {
                if (chip->throttled)
@@ -644,8 +653,8 @@ void gpstate_timer_handler(unsigned long data)
         * value. Hence, read from PMCR to get correct data.
         */
        val = get_pmspr(SPRN_PMCR);
-       freq_data.gpstate_id = (s8)GET_GPSTATE(val);
-       freq_data.pstate_id = (s8)GET_LPSTATE(val);
+       freq_data.gpstate_id = extract_global_pstate(val);
+       freq_data.pstate_id = extract_local_pstate(val);
        if (freq_data.gpstate_id  == freq_data.pstate_id) {
                reset_gpstates(policy);
                spin_unlock(&gpstates->gpstate_lock);
index 202476fbbc4c0256bf7793a7f560f8f686081350..8a411514a7c5dd4818494ce893d5823b1b57125f 100644 (file)
@@ -935,7 +935,8 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
        if (df->governor == governor) {
                ret = 0;
                goto out;
-       } else if (df->governor->immutable || governor->immutable) {
+       } else if ((df->governor && df->governor->immutable) ||
+                                       governor->immutable) {
                ret = -EINVAL;
                goto out;
        }
index ec5d695bbb7264d0fffd8d3d8eaf74a0d2a66905..3c68bb525d5da3aa9c41c512e14fde1acf7cf0cd 100644 (file)
@@ -758,7 +758,7 @@ static int mv64x60_mc_err_probe(struct platform_device *pdev)
                /* Non-ECC RAM? */
                printk(KERN_WARNING "%s: No ECC DIMMs discovered\n", __func__);
                res = -ENODEV;
-               goto err2;
+               goto err;
        }
 
        edac_dbg(3, "init mci\n");
index 57efb251f9c462ceb6a4b5015f3658707fa119d0..10523ce00c387e2ffaf5c311452dedd4cdd4120c 100644 (file)
@@ -566,8 +566,10 @@ static int thunderx_gpio_probe(struct pci_dev *pdev,
        txgpio->irqd = irq_domain_create_hierarchy(irq_get_irq_data(txgpio->msix_entries[0].vector)->domain,
                                                   0, 0, of_node_to_fwnode(dev->of_node),
                                                   &thunderx_gpio_irqd_ops, txgpio);
-       if (!txgpio->irqd)
+       if (!txgpio->irqd) {
+               err = -ENOMEM;
                goto out;
+       }
 
        /* Push on irq_data and the domain for each line. */
        for (i = 0; i < ngpio; i++) {
index bdd68ff197dc3fa6614560ab86f63b55841629ba..b4c8b25453a6800047954bcaf5bd58b5acdbd6db 100644 (file)
@@ -3340,7 +3340,8 @@ struct gpio_desc *__must_check gpiod_get_index(struct device *dev,
                return desc;
        }
 
-       status = gpiod_request(desc, con_id);
+       /* If a connection label was passed, use it; else use the device name as label */
+       status = gpiod_request(desc, con_id ? con_id : dev_name(dev));
        if (status < 0)
                return ERR_PTR(status);
 
index fe15aa64086f213a642d244f78a08c1bb657098c..71fe60e5f01f1e05e99b45d35db3e47e3dba0bf6 100644 (file)
@@ -698,7 +698,7 @@ static unsigned long dsi_pll_14nm_postdiv_recalc_rate(struct clk_hw *hw,
        val &= div_mask(width);
 
        return divider_recalc_rate(hw, parent_rate, val, NULL,
-                                  postdiv->flags);
+                                  postdiv->flags, width);
 }
 
 static long dsi_pll_14nm_postdiv_round_rate(struct clk_hw *hw,
index 62e38fa8cda23c790635664364ba90e6d8292414..e362a932fe8c03f4d5029c401afd57a9d28f64b6 100644 (file)
@@ -95,18 +95,20 @@ enum ina2xx_ids { ina219, ina226 };
 
 struct ina2xx_config {
        u16 config_default;
-       int calibration_factor;
+       int calibration_value;
        int registers;
        int shunt_div;
        int bus_voltage_shift;
        int bus_voltage_lsb;    /* uV */
-       int power_lsb;          /* uW */
+       int power_lsb_factor;
 };
 
 struct ina2xx_data {
        const struct ina2xx_config *config;
 
        long rshunt;
+       long current_lsb_uA;
+       long power_lsb_uW;
        struct mutex config_lock;
        struct regmap *regmap;
 
@@ -116,21 +118,21 @@ struct ina2xx_data {
 static const struct ina2xx_config ina2xx_config[] = {
        [ina219] = {
                .config_default = INA219_CONFIG_DEFAULT,
-               .calibration_factor = 40960000,
+               .calibration_value = 4096,
                .registers = INA219_REGISTERS,
                .shunt_div = 100,
                .bus_voltage_shift = 3,
                .bus_voltage_lsb = 4000,
-               .power_lsb = 20000,
+               .power_lsb_factor = 20,
        },
        [ina226] = {
                .config_default = INA226_CONFIG_DEFAULT,
-               .calibration_factor = 5120000,
+               .calibration_value = 2048,
                .registers = INA226_REGISTERS,
                .shunt_div = 400,
                .bus_voltage_shift = 0,
                .bus_voltage_lsb = 1250,
-               .power_lsb = 25000,
+               .power_lsb_factor = 25,
        },
 };
 
@@ -169,12 +171,16 @@ static u16 ina226_interval_to_reg(int interval)
        return INA226_SHIFT_AVG(avg_bits);
 }
 
+/*
+ * The calibration register is set to the best value, which eliminates
+ * truncation errors when the hardware calculates the current register.
+ * According to the datasheet (eq. 3) the best values are 2048 for the
+ * ina226 and 4096 for the ina219. They are hardcoded as calibration_value.
+ */
 static int ina2xx_calibrate(struct ina2xx_data *data)
 {
-       u16 val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
-                                   data->rshunt);
-
-       return regmap_write(data->regmap, INA2XX_CALIBRATION, val);
+       return regmap_write(data->regmap, INA2XX_CALIBRATION,
+                           data->config->calibration_value);
 }
 
 /*
@@ -187,10 +193,6 @@ static int ina2xx_init(struct ina2xx_data *data)
        if (ret < 0)
                return ret;
 
-       /*
-        * Set current LSB to 1mA, shunt is in uOhms
-        * (equation 13 in datasheet).
-        */
        return ina2xx_calibrate(data);
 }
 
@@ -268,15 +270,15 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg,
                val = DIV_ROUND_CLOSEST(val, 1000);
                break;
        case INA2XX_POWER:
-               val = regval * data->config->power_lsb;
+               val = regval * data->power_lsb_uW;
                break;
        case INA2XX_CURRENT:
-               /* signed register, LSB=1mA (selected), in mA */
-               val = (s16)regval;
+               /* signed register, result in mA */
+               val = regval * data->current_lsb_uA;
+               val = DIV_ROUND_CLOSEST(val, 1000);
                break;
        case INA2XX_CALIBRATION:
-               val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
-                                       regval);
+               val = regval;
                break;
        default:
                /* programmer goofed */
@@ -304,9 +306,32 @@ static ssize_t ina2xx_show_value(struct device *dev,
                        ina2xx_get_value(data, attr->index, regval));
 }
 
-static ssize_t ina2xx_set_shunt(struct device *dev,
-                               struct device_attribute *da,
-                               const char *buf, size_t count)
+/*
+ * To keep the calibration register value fixed, the product of
+ * current_lsb and shunt_resistor must also be fixed, equal to
+ * shunt_voltage_lsb = 1 / shunt_div multiplied by 10^9 in order
+ * to preserve the scale.
+ */
+static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+{
+       unsigned int dividend = DIV_ROUND_CLOSEST(1000000000,
+                                                 data->config->shunt_div);
+       if (val <= 0 || val > dividend)
+               return -EINVAL;
+
+       mutex_lock(&data->config_lock);
+       data->rshunt = val;
+       data->current_lsb_uA = DIV_ROUND_CLOSEST(dividend, val);
+       data->power_lsb_uW = data->config->power_lsb_factor *
+                            data->current_lsb_uA;
+       mutex_unlock(&data->config_lock);
+
+       return 0;
+}
+
+static ssize_t ina2xx_store_shunt(struct device *dev,
+                                 struct device_attribute *da,
+                                 const char *buf, size_t count)
 {
        unsigned long val;
        int status;
@@ -316,18 +341,9 @@ static ssize_t ina2xx_set_shunt(struct device *dev,
        if (status < 0)
                return status;
 
-       if (val == 0 ||
-           /* Values greater than the calibration factor make no sense. */
-           val > data->config->calibration_factor)
-               return -EINVAL;
-
-       mutex_lock(&data->config_lock);
-       data->rshunt = val;
-       status = ina2xx_calibrate(data);
-       mutex_unlock(&data->config_lock);
+       status = ina2xx_set_shunt(data, val);
        if (status < 0)
                return status;
-
        return count;
 }
 
@@ -387,7 +403,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
 
 /* shunt resistance */
 static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
-                         ina2xx_show_value, ina2xx_set_shunt,
+                         ina2xx_show_value, ina2xx_store_shunt,
                          INA2XX_CALIBRATION);
 
 /* update interval (ina226 only) */
@@ -448,10 +464,7 @@ static int ina2xx_probe(struct i2c_client *client,
                        val = INA2XX_RSHUNT_DEFAULT;
        }
 
-       if (val <= 0 || val > data->config->calibration_factor)
-               return -ENODEV;
-
-       data->rshunt = val;
+       ina2xx_set_shunt(data, val);
 
        ina2xx_regmap_config.max_register = data->config->registers;
 
index 6cae00ecc90586f3abf4c5d5ecab4aa725261e24..25de7cc9f49f4f42d9368a79984bcf6bf3a693a6 100644 (file)
@@ -4453,6 +4453,7 @@ static int cma_get_id_stats(struct sk_buff *skb, struct netlink_callback *cb)
                        id_stats->qp_type       = id->qp_type;
 
                        i_id++;
+                       nlmsg_end(skb, nlh);
                }
 
                cb->args[1] = 0;
index 722235bed0759843c6704ab81169e8d2d38a922b..d6fa38f8604f27a74c4ed25086f4e800eaa90358 100644 (file)
@@ -914,13 +914,14 @@ static ssize_t ucma_query_path(struct ucma_context *ctx,
 
                resp->path_data[i].flags = IB_PATH_GMP | IB_PATH_PRIMARY |
                                           IB_PATH_BIDIRECTIONAL;
-               if (rec->rec_type == SA_PATH_REC_TYPE_IB) {
-                       ib_sa_pack_path(rec, &resp->path_data[i].path_rec);
-               } else {
+               if (rec->rec_type == SA_PATH_REC_TYPE_OPA) {
                        struct sa_path_rec ib;
 
                        sa_convert_path_opa_to_ib(&ib, rec);
                        ib_sa_pack_path(&ib, &resp->path_data[i].path_rec);
+
+               } else {
+                       ib_sa_pack_path(rec, &resp->path_data[i].path_rec);
                }
        }
 
index d6a1a308c6a096411fba8c9369e1551575c8b8a3..b7f1ce5333cb828d2da4ce4fa2732587d3133a8c 100644 (file)
@@ -125,7 +125,8 @@ static u8 i40iw_derive_hw_ird_setting(u16 cm_ird)
  * @conn_ird: connection IRD
  * @conn_ord: connection ORD
  */
-static void i40iw_record_ird_ord(struct i40iw_cm_node *cm_node, u16 conn_ird, u16 conn_ord)
+static void i40iw_record_ird_ord(struct i40iw_cm_node *cm_node, u32 conn_ird,
+                                u32 conn_ord)
 {
        if (conn_ird > I40IW_MAX_IRD_SIZE)
                conn_ird = I40IW_MAX_IRD_SIZE;
@@ -3841,7 +3842,7 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
        }
 
        cm_node->apbvt_set = true;
-       i40iw_record_ird_ord(cm_node, (u16)conn_param->ird, (u16)conn_param->ord);
+       i40iw_record_ird_ord(cm_node, conn_param->ird, conn_param->ord);
        if (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO &&
            !cm_node->ord_size)
                cm_node->ord_size = 1;
index d86f3e6708040599fb61fc12190bb3d8c98c15d4..472ef4d6e858a7cd97bcaa5828f92ad0ff20bfcc 100644 (file)
@@ -3875,8 +3875,10 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
                hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].cnt = 1;
                hmc_info->hmc_obj[I40IW_HMC_IW_MR].cnt = mrwanted;
 
-               hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt = I40IW_MAX_WQ_ENTRIES * qpwanted;
-               hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt = 4 * I40IW_MAX_IRD_SIZE * qpwanted;
+               hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt =
+                       roundup_pow_of_two(I40IW_MAX_WQ_ENTRIES * qpwanted);
+               hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt =
+                       roundup_pow_of_two(2 * I40IW_MAX_IRD_SIZE * qpwanted);
                hmc_info->hmc_obj[I40IW_HMC_IW_XFFL].cnt =
                        hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt / hmc_fpm_misc->xf_block_size;
                hmc_info->hmc_obj[I40IW_HMC_IW_Q1FL].cnt =
index 24eabcad5e405c830cd6a1e34d7d830672f5126b..019ad3b939f97cda799a03023b12599757653127 100644 (file)
@@ -93,6 +93,7 @@
 #define RDMA_OPCODE_MASK        0x0f
 #define RDMA_READ_REQ_OPCODE    1
 #define Q2_BAD_FRAME_OFFSET     72
+#define Q2_FPSN_OFFSET          64
 #define CQE_MAJOR_DRV           0x8000
 
 #define I40IW_TERM_SENT 0x01
index 59f70676f0e0305ad03567192a340ecb0ef0c7a5..14d38d733cb4e9e0a44697a93d613446d472adce 100644 (file)
@@ -1376,7 +1376,7 @@ static void i40iw_ieq_handle_exception(struct i40iw_puda_rsrc *ieq,
        u32 *hw_host_ctx = (u32 *)qp->hw_host_ctx;
        u32 rcv_wnd = hw_host_ctx[23];
        /* first partial seq # in q2 */
-       u32 fps = qp->q2_buf[16];
+       u32 fps = *(u32 *)(qp->q2_buf + Q2_FPSN_OFFSET);
        struct list_head *rxlist = &pfpdu->rxlist;
        struct list_head *plist;
 
index 97d71e49c092f4e820efc4ebe23ec2bbdc411cbd..88fa4d44ab5fbe8e504301b6990625bf4990edb7 100644 (file)
@@ -198,7 +198,7 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
                return ERR_PTR(-EINVAL);
 
        /* Allocate the completion queue structure. */
-       cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+       cq = kzalloc_node(sizeof(*cq), GFP_KERNEL, rdi->dparms.node);
        if (!cq)
                return ERR_PTR(-ENOMEM);
 
@@ -214,7 +214,9 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
                sz += sizeof(struct ib_uverbs_wc) * (entries + 1);
        else
                sz += sizeof(struct ib_wc) * (entries + 1);
-       wc = vmalloc_user(sz);
+       wc = udata ?
+               vmalloc_user(sz) :
+               vzalloc_node(sz, rdi->dparms.node);
        if (!wc) {
                ret = ERR_PTR(-ENOMEM);
                goto bail_cq;
@@ -369,7 +371,9 @@ int rvt_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
                sz += sizeof(struct ib_uverbs_wc) * (cqe + 1);
        else
                sz += sizeof(struct ib_wc) * (cqe + 1);
-       wc = vmalloc_user(sz);
+       wc = udata ?
+               vmalloc_user(sz) :
+               vzalloc_node(sz, rdi->dparms.node);
        if (!wc)
                return -ENOMEM;
 
index b3bbad7d228296118f35a2d4bff7c295b5e9839c..5dafafad6351a09362484ba011991dea4eb9b174 100644 (file)
@@ -808,8 +808,10 @@ static int __maybe_unused goodix_suspend(struct device *dev)
        int error;
 
        /* We need gpio pins to suspend/resume */
-       if (!ts->gpiod_int || !ts->gpiod_rst)
+       if (!ts->gpiod_int || !ts->gpiod_rst) {
+               disable_irq(client->irq);
                return 0;
+       }
 
        wait_for_completion(&ts->firmware_loading_complete);
 
@@ -849,8 +851,10 @@ static int __maybe_unused goodix_resume(struct device *dev)
        struct goodix_ts_data *ts = i2c_get_clientdata(client);
        int error;
 
-       if (!ts->gpiod_int || !ts->gpiod_rst)
+       if (!ts->gpiod_int || !ts->gpiod_rst) {
+               enable_irq(client->irq);
                return 0;
+       }
 
        /*
         * Exit sleep mode by outputting HIGH level to INT pin
index ae9ff72e83ee4794447df6ff01668ba588b316e3..848fcdf6a11242346f9f40ac18c5a5ce7bed344d 100644 (file)
@@ -1297,6 +1297,10 @@ gic_acpi_parse_madt_gicc(struct acpi_subtable_header *header,
        u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
        void __iomem *redist_base;
 
+       /* Skip GICC entries without the ACPI_MADT_ENABLED flag; they are unusable */
+       if (!(gicc->flags & ACPI_MADT_ENABLED))
+               return 0;
+
        redist_base = ioremap(gicc->gicr_base_address, size);
        if (!redist_base)
                return -ENOMEM;
@@ -1346,6 +1350,13 @@ static int __init gic_acpi_match_gicc(struct acpi_subtable_header *header,
        if ((gicc->flags & ACPI_MADT_ENABLED) && gicc->gicr_base_address)
                return 0;
 
+       /*
+        * It is perfectly valid for firmware to pass a disabled GICC entry;
+        * the driver should not treat that as an error. Skip the entry
+        * instead of failing the probe.
+        */
+       if (!(gicc->flags & ACPI_MADT_ENABLED))
+               return 0;
+
        return -ENODEV;
 }
 
index 934b1fce4ce1f0660d8c761eb0ec9822147fb754..f0dc8e2aee65a010ff27fbcbcea08bc3dc9126a6 100644 (file)
@@ -515,15 +515,21 @@ struct open_bucket {
 
 /*
  * We keep multiple buckets open for writes, and try to segregate different
- * write streams for better cache utilization: first we look for a bucket where
- * the last write to it was sequential with the current write, and failing that
- * we look for a bucket that was last used by the same task.
+ * write streams for better cache utilization: first we try to segregate
+ * flash-only volume write streams from cached device streams, then we look
+ * for a bucket where the last write to it was sequential with the current
+ * write, and failing that we look for a bucket last used by the same task.
  *
  * The idea is that if you've got multiple tasks pulling data into the cache at the
  * same time, you'll get better cache utilization if you try to segregate their
  * data and preserve locality.
  *
- * For example, say you've starting Firefox at the same time you're copying a
+ * For example, dirty sectors of a flash-only volume are not reclaimable; if
+ * they are mixed into buckets with dirty sectors of a cached device, those
+ * buckets stay marked dirty and are never reclaimed, even after the cached
+ * device's dirty data has been written back to the backing device.
+ *
+ * And say you're starting Firefox at the same time you're copying a
  * bunch of files. Firefox will likely end up being fairly hot and stay in the
  * cache awhile, but the data you copied might not be; if you wrote all that
  * data to the same buckets it'd get invalidated at the same time.
@@ -540,7 +546,10 @@ static struct open_bucket *pick_data_bucket(struct cache_set *c,
        struct open_bucket *ret, *ret_task = NULL;
 
        list_for_each_entry_reverse(ret, &c->data_buckets, list)
-               if (!bkey_cmp(&ret->key, search))
+               if (UUID_FLASH_ONLY(&c->uuids[KEY_INODE(&ret->key)]) !=
+                   UUID_FLASH_ONLY(&c->uuids[KEY_INODE(search)]))
+                       continue;
+               else if (!bkey_cmp(&ret->key, search))
                        goto found;
                else if (ret->last_write_point == write_point)
                        ret_task = ret;
index e9fbf2bcd122b8d47b9cbd012cc408c2a0db534c..f34ad8720756043f0078dd0721874d477eddfe71 100644 (file)
@@ -568,6 +568,7 @@ static void cache_lookup(struct closure *cl)
 {
        struct search *s = container_of(cl, struct search, iop.cl);
        struct bio *bio = &s->bio.bio;
+       struct cached_dev *dc;
        int ret;
 
        bch_btree_op_init(&s->op, -1);
@@ -580,6 +581,27 @@ static void cache_lookup(struct closure *cl)
                return;
        }
 
+       /*
+        * We might hit an error while searching the btree; if that happens,
+        * ret is negative. In that case we must not recover data from the
+        * backing device (when the cache device is dirty) because we don't
+        * know whether all the bkeys the read request covers are clean.
+        *
+        * Also note that when this happens, s->iop.status still holds its
+        * initial value from before s->bio.bio was submitted.
+        */
+       if (ret < 0) {
+               BUG_ON(ret == -EINTR);
+               if (s->d && s->d->c &&
+                               !UUID_FLASH_ONLY(&s->d->c->uuids[s->d->id])) {
+                       dc = container_of(s->d, struct cached_dev, disk);
+                       if (dc && atomic_read(&dc->has_dirty))
+                               s->recoverable = false;
+               }
+               if (!s->iop.status)
+                       s->iop.status = BLK_STS_IOERR;
+       }
+
        closure_return(cl);
 }
 
index 9417170f180a9ff09f3e48d0a99057bd2754e316..5d0430777ddabaeee4fe280c96374b2e696f035f 100644 (file)
@@ -893,6 +893,12 @@ static void cached_dev_detach_finish(struct work_struct *w)
 
        mutex_lock(&bch_register_lock);
 
+       cancel_delayed_work_sync(&dc->writeback_rate_update);
+       if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
+               kthread_stop(dc->writeback_thread);
+               dc->writeback_thread = NULL;
+       }
+
        memset(&dc->sb.set_uuid, 0, 16);
        SET_BDEV_STATE(&dc->sb, BDEV_STATE_NONE);
 
index cb115ba6a1d280e339247f14759cfe0c0a6fab6a..6d9adcaa26bad0614037969075af6ff4bde3410f 100644 (file)
@@ -332,6 +332,10 @@ static int __vb2_queue_alloc(struct vb2_queue *q, enum vb2_memory memory,
        struct vb2_buffer *vb;
        int ret;
 
+       /* Ensure that q->num_buffers + num_buffers does not exceed VB2_MAX_FRAME */
+       num_buffers = min_t(unsigned int, num_buffers,
+                           VB2_MAX_FRAME - q->num_buffers);
+
        for (buffer = 0; buffer < num_buffers; ++buffer) {
                /* Allocate videobuf buffer structures */
                vb = kzalloc(q->buf_struct_size, GFP_KERNEL);
index 070f5da06fd20dcfee3decd947f1cee83bc5d6af..5bedf4b7f0f789dd2f7539225ca7dbc25e8bfe5e 100644 (file)
@@ -806,6 +806,8 @@ static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
                slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
                break;
        case INTEL_MRFLD_SDIO:
+               /* Advertise 2.0v for compatibility with the SDIO card's OCR */
+               slot->host->ocr_mask = MMC_VDD_20_21 | MMC_VDD_165_195;
                slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE |
                                         MMC_CAP_POWER_OFF_CARD;
                break;
index 90cc1977b79231b413604bc42c6c9717f80ee7e4..d35deb79965d2203f010c09223c723e4074b4b5b 100644 (file)
@@ -1470,6 +1470,13 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
        if (mode != MMC_POWER_OFF) {
                switch (1 << vdd) {
                case MMC_VDD_165_195:
+               /*
+                * Without a regulator, SDHCI does not support 2.0v
+                * so we only get here if the driver deliberately
+                * added the 2.0v range to ocr_avail. Map it to 1.8v
+                * for the purpose of turning on the power.
+                */
+               case MMC_VDD_20_21:
                        pwr = SDHCI_POWER_180;
                        break;
                case MMC_VDD_29_30:
index 1cb3f7758fb60d8b084c27cc74d2098d5b7a691f..766b2c38568240ceb8505c52ea049f4543a581fc 100644 (file)
@@ -193,6 +193,9 @@ static int verify_eraseblock(int ebnum)
                ops.datbuf    = NULL;
                ops.oobbuf    = readbuf;
                err = mtd_read_oob(mtd, addr, &ops);
+               if (mtd_is_bitflip(err))
+                       err = 0;
+
                if (err || ops.oobretlen != use_len) {
                        pr_err("error: readoob failed at %#llx\n",
                               (long long)addr);
@@ -227,6 +230,9 @@ static int verify_eraseblock(int ebnum)
                        ops.datbuf    = NULL;
                        ops.oobbuf    = readbuf;
                        err = mtd_read_oob(mtd, addr, &ops);
+                       if (mtd_is_bitflip(err))
+                               err = 0;
+
                        if (err || ops.oobretlen != mtd->oobavail) {
                                pr_err("error: readoob failed at %#llx\n",
                                                (long long)addr);
@@ -286,6 +292,9 @@ static int verify_eraseblock_in_one_go(int ebnum)
 
        /* read entire block's OOB at one go */
        err = mtd_read_oob(mtd, addr, &ops);
+       if (mtd_is_bitflip(err))
+               err = 0;
+
        if (err || ops.oobretlen != len) {
                pr_err("error: readoob failed at %#llx\n",
                       (long long)addr);
@@ -527,6 +536,9 @@ static int __init mtd_oobtest_init(void)
        pr_info("attempting to start read past end of OOB\n");
        pr_info("an error is expected...\n");
        err = mtd_read_oob(mtd, addr0, &ops);
+       if (mtd_is_bitflip(err))
+               err = 0;
+
        if (err) {
                pr_info("error occurred as expected\n");
                err = 0;
@@ -571,6 +583,9 @@ static int __init mtd_oobtest_init(void)
                pr_info("attempting to read past end of device\n");
                pr_info("an error is expected...\n");
                err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
+               if (mtd_is_bitflip(err))
+                       err = 0;
+
                if (err) {
                        pr_info("error occurred as expected\n");
                        err = 0;
@@ -615,6 +630,9 @@ static int __init mtd_oobtest_init(void)
                pr_info("attempting to read past end of device\n");
                pr_info("an error is expected...\n");
                err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
+               if (mtd_is_bitflip(err))
+                       err = 0;
+
                if (err) {
                        pr_info("error occurred as expected\n");
                        err = 0;
@@ -684,6 +702,9 @@ static int __init mtd_oobtest_init(void)
                ops.datbuf    = NULL;
                ops.oobbuf    = readbuf;
                err = mtd_read_oob(mtd, addr, &ops);
+               if (mtd_is_bitflip(err))
+                       err = 0;
+
                if (err)
                        goto out;
                if (memcmpshow(addr, readbuf, writebuf,
index b2db581131b2d49a843c1f548e7341a12e395127..82f28ffccddffdb01fbd6635775d07fe4e75ccd4 100644 (file)
@@ -1524,39 +1524,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
                        goto err_close;
        }
 
-       /* If the mode uses primary, then the following is handled by
-        * bond_change_active_slave().
-        */
-       if (!bond_uses_primary(bond)) {
-               /* set promiscuity level to new slave */
-               if (bond_dev->flags & IFF_PROMISC) {
-                       res = dev_set_promiscuity(slave_dev, 1);
-                       if (res)
-                               goto err_close;
-               }
-
-               /* set allmulti level to new slave */
-               if (bond_dev->flags & IFF_ALLMULTI) {
-                       res = dev_set_allmulti(slave_dev, 1);
-                       if (res)
-                               goto err_close;
-               }
-
-               netif_addr_lock_bh(bond_dev);
-
-               dev_mc_sync_multiple(slave_dev, bond_dev);
-               dev_uc_sync_multiple(slave_dev, bond_dev);
-
-               netif_addr_unlock_bh(bond_dev);
-       }
-
-       if (BOND_MODE(bond) == BOND_MODE_8023AD) {
-               /* add lacpdu mc addr to mc list */
-               u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
-
-               dev_mc_add(slave_dev, lacpdu_multicast);
-       }
-
        res = vlan_vids_add_by_dev(slave_dev, bond_dev);
        if (res) {
                netdev_err(bond_dev, "Couldn't add bond vlan ids to %s\n",
@@ -1721,6 +1688,40 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
                goto err_upper_unlink;
        }
 
+       /* If the mode uses primary, then the following is handled by
+        * bond_change_active_slave().
+        */
+       if (!bond_uses_primary(bond)) {
+               /* set promiscuity level to new slave */
+               if (bond_dev->flags & IFF_PROMISC) {
+                       res = dev_set_promiscuity(slave_dev, 1);
+                       if (res)
+                               goto err_sysfs_del;
+               }
+
+               /* set allmulti level to new slave */
+               if (bond_dev->flags & IFF_ALLMULTI) {
+                       res = dev_set_allmulti(slave_dev, 1);
+                       if (res) {
+                               if (bond_dev->flags & IFF_PROMISC)
+                                       dev_set_promiscuity(slave_dev, -1);
+                               goto err_sysfs_del;
+                       }
+               }
+
+               netif_addr_lock_bh(bond_dev);
+               dev_mc_sync_multiple(slave_dev, bond_dev);
+               dev_uc_sync_multiple(slave_dev, bond_dev);
+               netif_addr_unlock_bh(bond_dev);
+
+               if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+                       /* add lacpdu mc addr to mc list */
+                       u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+
+                       dev_mc_add(slave_dev, lacpdu_multicast);
+               }
+       }
+
        bond->slave_cnt++;
        bond_compute_features(bond);
        bond_set_carrier(bond);
@@ -1744,6 +1745,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
        return 0;
 
 /* Undo stages on error */
+err_sysfs_del:
+       bond_sysfs_slave_del(new_slave);
+
 err_upper_unlink:
        bond_upper_dev_unlink(bond, new_slave);
 
@@ -1751,9 +1755,6 @@ err_unregister:
        netdev_rx_handler_unregister(slave_dev);
 
 err_detach:
-       if (!bond_uses_primary(bond))
-               bond_hw_addr_flush(bond_dev, slave_dev);
-
        vlan_vids_del_by_dev(slave_dev, bond_dev);
        if (rcu_access_pointer(bond->primary_slave) == new_slave)
                RCU_INIT_POINTER(bond->primary_slave, NULL);
index 05498e7f284034d86a8f8531de95cf7f59ec4890..6246003f9922f1a6fce43fca209afc525183ef7a 100644 (file)
@@ -2619,8 +2619,8 @@ void t4vf_sge_stop(struct adapter *adapter)
 int t4vf_sge_init(struct adapter *adapter)
 {
        struct sge_params *sge_params = &adapter->params.sge;
-       u32 fl0 = sge_params->sge_fl_buffer_size[0];
-       u32 fl1 = sge_params->sge_fl_buffer_size[1];
+       u32 fl_small_pg = sge_params->sge_fl_buffer_size[0];
+       u32 fl_large_pg = sge_params->sge_fl_buffer_size[1];
        struct sge *s = &adapter->sge;
 
        /*
@@ -2628,9 +2628,20 @@ int t4vf_sge_init(struct adapter *adapter)
         * the Physical Function Driver.  Ideally we should be able to deal
         * with _any_ configuration.  Practice is different ...
         */
-       if (fl0 != PAGE_SIZE || (fl1 != 0 && fl1 <= fl0)) {
+
+       /* We only bother using the Large Page logic if the Large Page Buffer
+        * is larger than our Page Size Buffer.
+        */
+       if (fl_large_pg <= fl_small_pg)
+               fl_large_pg = 0;
+
+       /* The Page Size Buffer must be exactly equal to our Page Size and the
+        * Large Page Size Buffer should be 0 (per above) or a power of 2.
+        */
+       if (fl_small_pg != PAGE_SIZE ||
+           (fl_large_pg & (fl_large_pg - 1)) != 0) {
                dev_err(adapter->pdev_dev, "bad SGE FL buffer sizes [%d, %d]\n",
-                       fl0, fl1);
+                       fl_small_pg, fl_large_pg);
                return -EINVAL;
        }
        if ((sge_params->sge_control & RXPKTCPLMODE_F) !=
@@ -2642,8 +2653,8 @@ int t4vf_sge_init(struct adapter *adapter)
        /*
         * Now translate the adapter parameters into our internal forms.
         */
-       if (fl1)
-               s->fl_pg_order = ilog2(fl1) - PAGE_SHIFT;
+       if (fl_large_pg)
+               s->fl_pg_order = ilog2(fl_large_pg) - PAGE_SHIFT;
        s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_F)
                        ? 128 : 64);
        s->pktshift = PKTSHIFT_G(sge_params->sge_control);
index a0ef97e7f3c93597ef2d90e7665f8bd271452a0d..ff7a70ffafc65f22fd3ef51416b6487d74f37824 100644 (file)
@@ -2092,6 +2092,10 @@ static int hclge_get_autoneg(struct hnae3_handle *handle)
 {
        struct hclge_vport *vport = hclge_get_vport(handle);
        struct hclge_dev *hdev = vport->back;
+       struct phy_device *phydev = hdev->hw.mac.phydev;
+
+       if (phydev)
+               return phydev->autoneg;
 
        hclge_query_autoneg_result(hdev);
 
index 186772493711e2e7a567079cf9d8c31ff0e605da..d1e4dcec5db27a68617796e4aa5ab2c39a6d5f91 100644 (file)
@@ -1060,6 +1060,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
        u64 rx_bytes = 0;
        u64 tx_pkts = 0;
        u64 rx_pkts = 0;
+       u64 tx_drop = 0;
+       u64 rx_drop = 0;
 
        for (idx = 0; idx < queue_num; idx++) {
                /* fetch the tx stats */
@@ -1068,6 +1070,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
                        start = u64_stats_fetch_begin_irq(&ring->syncp);
                        tx_bytes += ring->stats.tx_bytes;
                        tx_pkts += ring->stats.tx_pkts;
+                       tx_drop += ring->stats.tx_busy;
+                       tx_drop += ring->stats.sw_err_cnt;
                } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
 
                /* fetch the rx stats */
@@ -1076,6 +1080,9 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
                        start = u64_stats_fetch_begin_irq(&ring->syncp);
                        rx_bytes += ring->stats.rx_bytes;
                        rx_pkts += ring->stats.rx_pkts;
+                       rx_drop += ring->stats.non_vld_descs;
+                       rx_drop += ring->stats.err_pkt_len;
+                       rx_drop += ring->stats.l2_err;
                } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
        }
 
@@ -1091,8 +1098,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
        stats->rx_missed_errors = netdev->stats.rx_missed_errors;
 
        stats->tx_errors = netdev->stats.tx_errors;
-       stats->rx_dropped = netdev->stats.rx_dropped;
-       stats->tx_dropped = netdev->stats.tx_dropped;
+       stats->rx_dropped = rx_drop + netdev->stats.rx_dropped;
+       stats->tx_dropped = tx_drop + netdev->stats.tx_dropped;
        stats->collisions = netdev->stats.collisions;
        stats->rx_over_errors = netdev->stats.rx_over_errors;
        stats->rx_frame_errors = netdev->stats.rx_frame_errors;
@@ -1306,6 +1313,8 @@ static int hns3_nic_change_mtu(struct net_device *netdev, int new_mtu)
                return ret;
        }
 
+       netdev->mtu = new_mtu;
+
        /* if the netdev was running earlier, bring it up again */
        if (if_running && hns3_nic_net_open(netdev))
                ret = -EINVAL;
@@ -2687,8 +2696,12 @@ static int hns3_uninit_all_ring(struct hns3_nic_priv *priv)
                        h->ae_algo->ops->reset_queue(h, i);
 
                hns3_fini_ring(priv->ring_data[i].ring);
+               devm_kfree(priv->dev, priv->ring_data[i].ring);
                hns3_fini_ring(priv->ring_data[i + h->kinfo.num_tqps].ring);
+               devm_kfree(priv->dev,
+                          priv->ring_data[i + h->kinfo.num_tqps].ring);
        }
+       devm_kfree(priv->dev, priv->ring_data);
 
        return 0;
 }
index e590d96e434a197761f63135823cb350f59268e3..a64a5a413d4d025f4c4a638236f46645cb84cea4 100644 (file)
@@ -22,7 +22,8 @@ struct hns3_stats {
 #define HNS3_TQP_STAT(_string, _member)        {                       \
        .stats_string = _string,                                \
        .stats_size = FIELD_SIZEOF(struct ring_stats, _member), \
-       .stats_offset = offsetof(struct hns3_enet_ring, stats), \
+       .stats_offset = offsetof(struct hns3_enet_ring, stats) +\
+                       offsetof(struct ring_stats, _member),   \
 }                                                              \
 
 static const struct hns3_stats hns3_txq_stats[] = {
@@ -189,13 +190,13 @@ static u64 *hns3_get_stats_tqps(struct hnae3_handle *handle, u64 *data)
        struct hnae3_knic_private_info *kinfo = &handle->kinfo;
        struct hns3_enet_ring *ring;
        u8 *stat;
-       u32 i;
+       int i, j;
 
        /* get stats for Tx */
        for (i = 0; i < kinfo->num_tqps; i++) {
                ring = nic_priv->ring_data[i].ring;
-               for (i = 0; i < HNS3_TXQ_STATS_COUNT; i++) {
-                       stat = (u8 *)ring + hns3_txq_stats[i].stats_offset;
+               for (j = 0; j < HNS3_TXQ_STATS_COUNT; j++) {
+                       stat = (u8 *)ring + hns3_txq_stats[j].stats_offset;
                        *data++ = *(u64 *)stat;
                }
        }
@@ -203,8 +204,8 @@ static u64 *hns3_get_stats_tqps(struct hnae3_handle *handle, u64 *data)
        /* get stats for Rx */
        for (i = 0; i < kinfo->num_tqps; i++) {
                ring = nic_priv->ring_data[i + kinfo->num_tqps].ring;
-               for (i = 0; i < HNS3_RXQ_STATS_COUNT; i++) {
-                       stat = (u8 *)ring + hns3_rxq_stats[i].stats_offset;
+               for (j = 0; j < HNS3_RXQ_STATS_COUNT; j++) {
+                       stat = (u8 *)ring + hns3_rxq_stats[j].stats_offset;
                        *data++ = *(u64 *)stat;
                }
        }
index 3b0db01ead1f428a0bb16f6461bebed369be9289..3ae02b0620bc9656131fd2ba17ba2322a2fe7e6a 100644 (file)
@@ -2209,6 +2209,12 @@ static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance)
        struct ibmvnic_sub_crq_queue *scrq = instance;
        struct ibmvnic_adapter *adapter = scrq->adapter;
 
+       /* When booting a kdump kernel we can hit pending interrupts
+        * prior to completing driver initialization.
+        */
+       if (unlikely(adapter->state != VNIC_OPEN))
+               return IRQ_NONE;
+
        adapter->rx_stats_buffers[scrq->scrq_num].interrupts++;
 
        if (napi_schedule_prep(&adapter->napi[scrq->scrq_num])) {
index 1ccad6f30ebf485666338a849a262468620acb0c..4eb6ff60e8fca54a4883fdcabeb0ff6f8e5a87a9 100644 (file)
@@ -1775,7 +1775,11 @@ static void i40evf_disable_vf(struct i40evf_adapter *adapter)
 
        adapter->flags |= I40EVF_FLAG_PF_COMMS_FAILED;
 
-       if (netif_running(adapter->netdev)) {
+       /* We don't use netif_running() because it may be true prior to
+        * ndo_open() returning, so we can't assume it means all our open
+        * tasks have finished, since we're not holding the rtnl_lock here.
+        */
+       if (adapter->state == __I40EVF_RUNNING) {
                set_bit(__I40E_VSI_DOWN, adapter->vsi.state);
                netif_carrier_off(adapter->netdev);
                netif_tx_disable(adapter->netdev);
@@ -1833,6 +1837,7 @@ static void i40evf_reset_task(struct work_struct *work)
        struct i40evf_mac_filter *f;
        u32 reg_val;
        int i = 0, err;
+       bool running;
 
        while (test_and_set_bit(__I40EVF_IN_CLIENT_TASK,
                                &adapter->crit_section))
@@ -1892,7 +1897,13 @@ static void i40evf_reset_task(struct work_struct *work)
        }
 
 continue_reset:
-       if (netif_running(netdev)) {
+       /* We don't use netif_running() because it may be true prior to
+        * ndo_open() returning, so we can't assume it means all our open
+        * tasks have finished, since we're not holding the rtnl_lock here.
+        */
+       running = (adapter->state == __I40EVF_RUNNING);
+
+       if (running) {
                netif_carrier_off(netdev);
                netif_tx_stop_all_queues(netdev);
                adapter->link_up = false;
@@ -1936,7 +1947,10 @@ continue_reset:
 
        mod_timer(&adapter->watchdog_timer, jiffies + 2);
 
-       if (netif_running(adapter->netdev)) {
+       /* We were running when the reset started, so we need to restore some
+        * state here.
+        */
+       if (running) {
                /* allocate transmit descriptors */
                err = i40evf_setup_all_tx_resources(adapter);
                if (err)
index 1145cde2274a4cb778ba816716c3c55a5850d71e..b12e3a4f94397cc28c92cf85dffdb16966abdd12 100644 (file)
@@ -5087,7 +5087,7 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
        INIT_WORK(&hw->restart_work, sky2_restart);
 
        pci_set_drvdata(pdev, hw);
-       pdev->d3_delay = 150;
+       pdev->d3_delay = 200;
 
        return 0;
 
index 5f41dc92aa6848aafd3e3a4b7a735ad33da984dd..752a72499b4f16ebb22c126c258ca35a0f19619d 100644 (file)
@@ -156,57 +156,63 @@ static int mlx4_en_dcbnl_getnumtcs(struct net_device *netdev, int tcid, u8 *num)
 static u8 mlx4_en_dcbnl_set_all(struct net_device *netdev)
 {
        struct mlx4_en_priv *priv = netdev_priv(netdev);
+       struct mlx4_en_port_profile *prof = priv->prof;
        struct mlx4_en_dev *mdev = priv->mdev;
+       u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
 
        if (!(priv->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
                return 1;
 
        if (priv->cee_config.pfc_state) {
                int tc;
+               rx_ppp = prof->rx_ppp;
+               tx_ppp = prof->tx_ppp;
 
-               priv->prof->rx_pause = 0;
-               priv->prof->tx_pause = 0;
                for (tc = 0; tc < CEE_DCBX_MAX_PRIO; tc++) {
                        u8 tc_mask = 1 << tc;
 
                        switch (priv->cee_config.dcb_pfc[tc]) {
                        case pfc_disabled:
-                               priv->prof->tx_ppp &= ~tc_mask;
-                               priv->prof->rx_ppp &= ~tc_mask;
+                               tx_ppp &= ~tc_mask;
+                               rx_ppp &= ~tc_mask;
                                break;
                        case pfc_enabled_full:
-                               priv->prof->tx_ppp |= tc_mask;
-                               priv->prof->rx_ppp |= tc_mask;
+                               tx_ppp |= tc_mask;
+                               rx_ppp |= tc_mask;
                                break;
                        case pfc_enabled_tx:
-                               priv->prof->tx_ppp |= tc_mask;
-                               priv->prof->rx_ppp &= ~tc_mask;
+                               tx_ppp |= tc_mask;
+                               rx_ppp &= ~tc_mask;
                                break;
                        case pfc_enabled_rx:
-                               priv->prof->tx_ppp &= ~tc_mask;
-                               priv->prof->rx_ppp |= tc_mask;
+                               tx_ppp &= ~tc_mask;
+                               rx_ppp |= tc_mask;
                                break;
                        default:
                                break;
                        }
                }
-               en_dbg(DRV, priv, "Set pfc on\n");
+               rx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->rx_pause;
+               tx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->tx_pause;
        } else {
-               priv->prof->rx_pause = 1;
-               priv->prof->tx_pause = 1;
-               en_dbg(DRV, priv, "Set pfc off\n");
+               rx_ppp = 0;
+               tx_ppp = 0;
+               rx_pause = prof->rx_pause;
+               tx_pause = prof->tx_pause;
        }
 
        if (mlx4_SET_PORT_general(mdev->dev, priv->port,
                                  priv->rx_skb_size + ETH_FCS_LEN,
-                                 priv->prof->tx_pause,
-                                 priv->prof->tx_ppp,
-                                 priv->prof->rx_pause,
-                                 priv->prof->rx_ppp)) {
+                                 tx_pause, tx_ppp, rx_pause, rx_ppp)) {
                en_err(priv, "Failed setting pause params\n");
                return 1;
        }
 
+       prof->tx_ppp = tx_ppp;
+       prof->rx_ppp = rx_ppp;
+       prof->tx_pause = tx_pause;
+       prof->rx_pause = rx_pause;
+
        return 0;
 }
 
@@ -310,6 +316,7 @@ static int mlx4_en_ets_validate(struct mlx4_en_priv *priv, struct ieee_ets *ets)
                }
 
                switch (ets->tc_tsa[i]) {
+               case IEEE_8021QAZ_TSA_VENDOR:
                case IEEE_8021QAZ_TSA_STRICT:
                        break;
                case IEEE_8021QAZ_TSA_ETS:
@@ -347,6 +354,10 @@ static int mlx4_en_config_port_scheduler(struct mlx4_en_priv *priv,
        /* higher TC means higher priority => lower pg */
        for (i = IEEE_8021QAZ_MAX_TCS - 1; i >= 0; i--) {
                switch (ets->tc_tsa[i]) {
+               case IEEE_8021QAZ_TSA_VENDOR:
+                       pg[i] = MLX4_EN_TC_VENDOR;
+                       tc_tx_bw[i] = MLX4_EN_BW_MAX;
+                       break;
                case IEEE_8021QAZ_TSA_STRICT:
                        pg[i] = num_strict++;
                        tc_tx_bw[i] = MLX4_EN_BW_MAX;
@@ -403,6 +414,7 @@ static int mlx4_en_dcbnl_ieee_setpfc(struct net_device *dev,
        struct mlx4_en_priv *priv = netdev_priv(dev);
        struct mlx4_en_port_profile *prof = priv->prof;
        struct mlx4_en_dev *mdev = priv->mdev;
+       u32 tx_pause, tx_ppp, rx_pause, rx_ppp;
        int err;
 
        en_dbg(DRV, priv, "cap: 0x%x en: 0x%x mbc: 0x%x delay: %d\n",
@@ -411,23 +423,26 @@ static int mlx4_en_dcbnl_ieee_setpfc(struct net_device *dev,
                        pfc->mbc,
                        pfc->delay);
 
-       prof->rx_pause = !pfc->pfc_en;
-       prof->tx_pause = !pfc->pfc_en;
-       prof->rx_ppp = pfc->pfc_en;
-       prof->tx_ppp = pfc->pfc_en;
+       rx_pause = prof->rx_pause && !pfc->pfc_en;
+       tx_pause = prof->tx_pause && !pfc->pfc_en;
+       rx_ppp = pfc->pfc_en;
+       tx_ppp = pfc->pfc_en;
 
        err = mlx4_SET_PORT_general(mdev->dev, priv->port,
                                    priv->rx_skb_size + ETH_FCS_LEN,
-                                   prof->tx_pause,
-                                   prof->tx_ppp,
-                                   prof->rx_pause,
-                                   prof->rx_ppp);
-       if (err)
+                                   tx_pause, tx_ppp, rx_pause, rx_ppp);
+       if (err) {
                en_err(priv, "Failed setting pause params\n");
-       else
-               mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
-                                               prof->rx_ppp, prof->rx_pause,
-                                               prof->tx_ppp, prof->tx_pause);
+               return err;
+       }
+
+       mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+                                       rx_ppp, rx_pause, tx_ppp, tx_pause);
+
+       prof->tx_ppp = tx_ppp;
+       prof->rx_ppp = rx_ppp;
+       prof->rx_pause = rx_pause;
+       prof->tx_pause = tx_pause;
 
        return err;
 }
index 3d4e4a5d00d1c5f81267c4a4a9675bc667709211..67f74fcb265e77ea800d2e24aa27b0d3f8aa5322 100644 (file)
@@ -1046,27 +1046,32 @@ static int mlx4_en_set_pauseparam(struct net_device *dev,
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
        struct mlx4_en_dev *mdev = priv->mdev;
+       u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
        int err;
 
        if (pause->autoneg)
                return -EINVAL;
 
-       priv->prof->tx_pause = pause->tx_pause != 0;
-       priv->prof->rx_pause = pause->rx_pause != 0;
+       tx_pause = !!(pause->tx_pause);
+       rx_pause = !!(pause->rx_pause);
+       rx_ppp = priv->prof->rx_ppp && !(tx_pause || rx_pause);
+       tx_ppp = priv->prof->tx_ppp && !(tx_pause || rx_pause);
+
        err = mlx4_SET_PORT_general(mdev->dev, priv->port,
                                    priv->rx_skb_size + ETH_FCS_LEN,
-                                   priv->prof->tx_pause,
-                                   priv->prof->tx_ppp,
-                                   priv->prof->rx_pause,
-                                   priv->prof->rx_ppp);
-       if (err)
-               en_err(priv, "Failed setting pause params\n");
-       else
-               mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
-                                               priv->prof->rx_ppp,
-                                               priv->prof->rx_pause,
-                                               priv->prof->tx_ppp,
-                                               priv->prof->tx_pause);
+                                   tx_pause, tx_ppp, rx_pause, rx_ppp);
+       if (err) {
+               en_err(priv, "Failed setting pause params, err = %d\n", err);
+               return err;
+       }
+
+       mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+                                       rx_ppp, rx_pause, tx_ppp, tx_pause);
+
+       priv->prof->tx_pause = tx_pause;
+       priv->prof->rx_pause = rx_pause;
+       priv->prof->tx_ppp = tx_ppp;
+       priv->prof->rx_ppp = rx_ppp;
 
        return err;
 }
index 686e18de9a97b3e2ef96d9daa405ba1177546228..6b2f7122b3abd403fadfaec7dcfbd7fd28839b95 100644 (file)
@@ -163,9 +163,9 @@ static void mlx4_en_get_profile(struct mlx4_en_dev *mdev)
                params->udp_rss = 0;
        }
        for (i = 1; i <= MLX4_MAX_PORTS; i++) {
-               params->prof[i].rx_pause = 1;
+               params->prof[i].rx_pause = !(pfcrx || pfctx);
                params->prof[i].rx_ppp = pfcrx;
-               params->prof[i].tx_pause = 1;
+               params->prof[i].tx_pause = !(pfcrx || pfctx);
                params->prof[i].tx_ppp = pfctx;
                params->prof[i].tx_ring_size = MLX4_EN_DEF_TX_RING_SIZE;
                params->prof[i].rx_ring_size = MLX4_EN_DEF_RX_RING_SIZE;
index 9c218f1cfc6caf50aca61c853422f8c1767a75c8..c097eef41a9c82dd19a493848f2ee5f1061c2080 100644 (file)
@@ -3335,6 +3335,13 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
        priv->msg_enable = MLX4_EN_MSG_LEVEL;
 #ifdef CONFIG_MLX4_EN_DCB
        if (!mlx4_is_slave(priv->mdev->dev)) {
+               u8 prio;
+
+               for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; ++prio) {
+                       priv->ets.prio_tc[prio] = prio;
+                       priv->ets.tc_tsa[prio]  = IEEE_8021QAZ_TSA_VENDOR;
+               }
+
                priv->dcbx_cap = DCB_CAP_DCBX_VER_CEE | DCB_CAP_DCBX_HOST |
                        DCB_CAP_DCBX_VER_IEEE;
                priv->flags |= MLX4_EN_DCB_ENABLED;
index fdb3ad0cbe5427c450ef4695d605402f3d8e7148..2c1a5ff6acfaf1c0f8b6b521c0c2da2220fdecb9 100644 (file)
@@ -476,6 +476,7 @@ struct mlx4_en_frag_info {
 #define MLX4_EN_BW_MIN 1
 #define MLX4_EN_BW_MAX 100 /* Utilize 100% of the line */
 
+#define MLX4_EN_TC_VENDOR 0
 #define MLX4_EN_TC_ETS 7
 
 enum dcb_pfc_type {
index fabb533797275f75965f037bf14f5a63d5c4b066..a069fcc823c30f765d65d5b638b4449b2e41acd1 100644 (file)
@@ -5089,6 +5089,7 @@ static void rem_slave_fs_rule(struct mlx4_dev *dev, int slave)
                                                 &tracker->res_tree[RES_FS_RULE]);
                                        list_del(&fs_rule->com.list);
                                        spin_unlock_irq(mlx4_tlock(dev));
+                                       kfree(fs_rule->mirr_mbox);
                                        kfree(fs_rule);
                                        state = 0;
                                        break;
index a863572882b2b2949b530b03a76e9246ab5bdd1b..225b2ad3e15f47822b51e3c36542649d3f1f3181 100644 (file)
@@ -2718,6 +2718,9 @@ int mlx5e_open(struct net_device *netdev)
                mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
        mutex_unlock(&priv->state_lock);
 
+       if (mlx5e_vxlan_allowed(priv->mdev))
+               udp_tunnel_get_rx_info(netdev);
+
        return err;
 }
 
@@ -4276,13 +4279,6 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
        if (netdev->reg_state != NETREG_REGISTERED)
                return;
 
-       /* Device already registered: sync netdev system state */
-       if (mlx5e_vxlan_allowed(mdev)) {
-               rtnl_lock();
-               udp_tunnel_get_rx_info(netdev);
-               rtnl_unlock();
-       }
-
        queue_work(priv->wq, &priv->set_rx_mode_work);
 
        rtnl_lock();
index 45e03c427faf93970a0b35bb17e7f6a0d46fa537..5ffd1db4e797693b57aff20ca7c374f14d9a615b 100644 (file)
 #include "en_tc.h"
 #include "fs_core.h"
 
+#define MLX5E_REP_PARAMS_LOG_SQ_SIZE \
+       max(0x6, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
+#define MLX5E_REP_PARAMS_LOG_RQ_SIZE \
+       max(0x6, MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)
+
 static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
 
 static void mlx5e_rep_get_drvinfo(struct net_device *dev,
@@ -230,7 +235,7 @@ void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv)
 static void mlx5e_rep_neigh_update_init_interval(struct mlx5e_rep_priv *rpriv)
 {
 #if IS_ENABLED(CONFIG_IPV6)
-       unsigned long ipv6_interval = NEIGH_VAR(&ipv6_stub->nd_tbl->parms,
+       unsigned long ipv6_interval = NEIGH_VAR(&nd_tbl.parms,
                                                DELAY_PROBE_TIME);
 #else
        unsigned long ipv6_interval = ~0UL;
@@ -366,7 +371,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
        case NETEVENT_NEIGH_UPDATE:
                n = ptr;
 #if IS_ENABLED(CONFIG_IPV6)
-               if (n->tbl != ipv6_stub->nd_tbl && n->tbl != &arp_tbl)
+               if (n->tbl != &nd_tbl && n->tbl != &arp_tbl)
 #else
                if (n->tbl != &arp_tbl)
 #endif
@@ -414,7 +419,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
                 * done per device delay prob time parameter.
                 */
 #if IS_ENABLED(CONFIG_IPV6)
-               if (!p->dev || (p->tbl != ipv6_stub->nd_tbl && p->tbl != &arp_tbl))
+               if (!p->dev || (p->tbl != &nd_tbl && p->tbl != &arp_tbl))
 #else
                if (!p->dev || p->tbl != &arp_tbl)
 #endif
@@ -610,7 +615,6 @@ static int mlx5e_rep_open(struct net_device *dev)
        struct mlx5e_priv *priv = netdev_priv(dev);
        struct mlx5e_rep_priv *rpriv = priv->ppriv;
        struct mlx5_eswitch_rep *rep = rpriv->rep;
-       struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
        int err;
 
        mutex_lock(&priv->state_lock);
@@ -618,8 +622,9 @@ static int mlx5e_rep_open(struct net_device *dev)
        if (err)
                goto unlock;
 
-       if (!mlx5_eswitch_set_vport_state(esw, rep->vport,
-                                         MLX5_ESW_VPORT_ADMIN_STATE_UP))
+       if (!mlx5_modify_vport_admin_state(priv->mdev,
+                       MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
+                       rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_UP))
                netif_carrier_on(dev);
 
 unlock:
@@ -632,11 +637,12 @@ static int mlx5e_rep_close(struct net_device *dev)
        struct mlx5e_priv *priv = netdev_priv(dev);
        struct mlx5e_rep_priv *rpriv = priv->ppriv;
        struct mlx5_eswitch_rep *rep = rpriv->rep;
-       struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
        int ret;
 
        mutex_lock(&priv->state_lock);
-       (void)mlx5_eswitch_set_vport_state(esw, rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
+       mlx5_modify_vport_admin_state(priv->mdev,
+                       MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
+                       rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
        ret = mlx5e_close_locked(dev);
        mutex_unlock(&priv->state_lock);
        return ret;
@@ -797,9 +803,9 @@ static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev,
                                         MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
                                         MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
 
-       params->log_sq_size = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
+       params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE;
        params->rq_wq_type  = MLX5_WQ_TYPE_LINKED_LIST;
-       params->log_rq_size = MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE;
+       params->log_rq_size = MLX5E_REP_PARAMS_LOG_RQ_SIZE;
 
        params->rx_am_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
        mlx5e_set_rx_cq_mode_params(params, cq_period_mode);
index 9ba1f72060aae4c57a55c9d7a4c049e1a6f8a69b..42bab73a9f408b82b7d4bef1d6dde7c627180927 100644 (file)
@@ -484,7 +484,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
                tbl = &arp_tbl;
 #if IS_ENABLED(CONFIG_IPV6)
        else if (m_neigh->family == AF_INET6)
-               tbl = ipv6_stub->nd_tbl;
+               tbl = &nd_tbl;
 #endif
        else
                return;
@@ -2091,19 +2091,19 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
        if (err != -EAGAIN)
                flow->flags |= MLX5E_TC_FLOW_OFFLOADED;
 
+       if (!(flow->flags & MLX5E_TC_FLOW_ESWITCH) ||
+           !(flow->esw_attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP))
+               kvfree(parse_attr);
+
        err = rhashtable_insert_fast(&tc->ht, &flow->node,
                                     tc->ht_params);
-       if (err)
-               goto err_del_rule;
+       if (err) {
+               mlx5e_tc_del_flow(priv, flow);
+               kfree(flow);
+       }
 
-       if (flow->flags & MLX5E_TC_FLOW_ESWITCH &&
-           !(flow->esw_attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP))
-               kvfree(parse_attr);
        return err;
 
-err_del_rule:
-       mlx5e_tc_del_flow(priv, flow);
-
 err_free:
        kvfree(parse_attr);
        kfree(flow);
index a1296a62497dab01e43d6293d902548af8b6195e..71153c0f16054a2e987972b067c0070a54863e88 100644 (file)
@@ -36,6 +36,9 @@
 #include <linux/mlx5/vport.h>
 #include "mlx5_core.h"
 
+/* Mutex to hold while enabling or disabling RoCE */
+static DEFINE_MUTEX(mlx5_roce_en_lock);
+
 static int _mlx5_query_vport_state(struct mlx5_core_dev *mdev, u8 opmod,
                                   u16 vport, u32 *out, int outlen)
 {
@@ -998,17 +1001,35 @@ static int mlx5_nic_vport_update_roce_state(struct mlx5_core_dev *mdev,
 
 int mlx5_nic_vport_enable_roce(struct mlx5_core_dev *mdev)
 {
-       if (atomic_inc_return(&mdev->roce.roce_en) != 1)
-               return 0;
-       return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_ENABLED);
+       int err = 0;
+
+       mutex_lock(&mlx5_roce_en_lock);
+       if (!mdev->roce.roce_en)
+               err = mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_ENABLED);
+
+       if (!err)
+               mdev->roce.roce_en++;
+       mutex_unlock(&mlx5_roce_en_lock);
+
+       return err;
 }
 EXPORT_SYMBOL_GPL(mlx5_nic_vport_enable_roce);
 
 int mlx5_nic_vport_disable_roce(struct mlx5_core_dev *mdev)
 {
-       if (atomic_dec_return(&mdev->roce.roce_en) != 0)
-               return 0;
-       return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_DISABLED);
+       int err = 0;
+
+       mutex_lock(&mlx5_roce_en_lock);
+       if (mdev->roce.roce_en) {
+               mdev->roce.roce_en--;
+               if (mdev->roce.roce_en == 0)
+                       err = mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_DISABLED);
+
+               if (err)
+                       mdev->roce.roce_en++;
+       }
+       mutex_unlock(&mlx5_roce_en_lock);
+       return err;
 }
 EXPORT_SYMBOL_GPL(mlx5_nic_vport_disable_roce);
 
index 37364555c42b3fe60438713c4b7ffd1ca3d4a9a6..f88ff3f4b6612912d33b75b7570641b8e98e8f67 100644 (file)
 /* CPP address to retrieve the data from */
 #define NSP_BUFFER             0x10
 #define   NSP_BUFFER_CPP       GENMASK_ULL(63, 40)
-#define   NSP_BUFFER_PCIE      GENMASK_ULL(39, 38)
-#define   NSP_BUFFER_ADDRESS   GENMASK_ULL(37, 0)
+#define   NSP_BUFFER_ADDRESS   GENMASK_ULL(39, 0)
 
 #define NSP_DFLT_BUFFER                0x18
+#define   NSP_DFLT_BUFFER_CPP  GENMASK_ULL(63, 40)
+#define   NSP_DFLT_BUFFER_ADDRESS      GENMASK_ULL(39, 0)
 
 #define NSP_DFLT_BUFFER_CONFIG 0x20
 #define   NSP_DFLT_BUFFER_SIZE_MB      GENMASK_ULL(7, 0)
@@ -412,8 +413,8 @@ static int nfp_nsp_command_buf(struct nfp_nsp *nsp, u16 code, u32 option,
        if (err < 0)
                return err;
 
-       cpp_id = FIELD_GET(NSP_BUFFER_CPP, reg) << 8;
-       cpp_buf = FIELD_GET(NSP_BUFFER_ADDRESS, reg);
+       cpp_id = FIELD_GET(NSP_DFLT_BUFFER_CPP, reg) << 8;
+       cpp_buf = FIELD_GET(NSP_DFLT_BUFFER_ADDRESS, reg);
 
        if (in_buf && in_size) {
                err = nfp_cpp_write(cpp, cpp_id, cpp_buf, in_buf, in_size);
index 619a1b7281a0b76058e745696b4128c1ffcd18ca..db553d4e8d2298ca84b9f43e37350e665cee4b81 100644 (file)
@@ -8466,12 +8466,12 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
                goto err_out_msi_5;
        }
 
+       pci_set_drvdata(pdev, dev);
+
        rc = register_netdev(dev);
        if (rc < 0)
                goto err_out_cnt_6;
 
-       pci_set_drvdata(pdev, dev);
-
        netif_info(tp, probe, dev, "%s at 0x%p, %pM, XID %08x IRQ %d\n",
                   rtl_chip_infos[chipset].name, ioaddr, dev->dev_addr,
                   (u32)(RTL_R32(TxConfig) & 0x9cf0f8ff), pdev->irq);
index 6dde9a0cfe76ca3b09fcb355b26b75c3dad7c622..9b70a3af678e072cb29e498ea87e109290ceab20 100644 (file)
@@ -464,7 +464,6 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
        po->chan.mtu = dst_mtu(&rt->dst);
        if (!po->chan.mtu)
                po->chan.mtu = PPP_MRU;
-       ip_rt_put(rt);
        po->chan.mtu -= PPTP_HEADER_OVERHEAD;
 
        po->chan.hdrlen = 2 + sizeof(struct pptp_gre_header);
index 23cd41c82210df286ffe3a0f0efe6d4d7aaec380..2a366554c503d6530589b544a8bfa26566e3a8e8 100644 (file)
@@ -1197,11 +1197,6 @@ static int team_port_add(struct team *team, struct net_device *port_dev)
                goto err_dev_open;
        }
 
-       netif_addr_lock_bh(dev);
-       dev_uc_sync_multiple(port_dev, dev);
-       dev_mc_sync_multiple(port_dev, dev);
-       netif_addr_unlock_bh(dev);
-
        err = vlan_vids_add_by_dev(port_dev, dev);
        if (err) {
                netdev_err(dev, "Failed to add vlan ids to device %s\n",
@@ -1241,6 +1236,11 @@ static int team_port_add(struct team *team, struct net_device *port_dev)
                goto err_option_port_add;
        }
 
+       netif_addr_lock_bh(dev);
+       dev_uc_sync_multiple(port_dev, dev);
+       dev_mc_sync_multiple(port_dev, dev);
+       netif_addr_unlock_bh(dev);
+
        port->index = -1;
        list_add_tail_rcu(&port->list, &team->port_list);
        team_port_enable(team, port);
@@ -1265,8 +1265,6 @@ err_enable_netpoll:
        vlan_vids_del_by_dev(port_dev, dev);
 
 err_vids_add:
-       dev_uc_unsync(port_dev, dev);
-       dev_mc_unsync(port_dev, dev);
        dev_close(port_dev);
 
 err_dev_open:
index a8dd1c7a08cbe8aafa51168288946b1a4641c33f..89d82c4ee8df072352afcc9c1ba796b80f105b71 100644 (file)
@@ -2863,8 +2863,7 @@ static int lan78xx_bind(struct lan78xx_net *dev, struct usb_interface *intf)
        if (ret < 0) {
                netdev_warn(dev->net,
                            "lan78xx_setup_irq_domain() failed : %d", ret);
-               kfree(pdata);
-               return ret;
+               goto out1;
        }
 
        dev->net->hard_header_len += TX_OVERHEAD;
@@ -2872,14 +2871,32 @@ static int lan78xx_bind(struct lan78xx_net *dev, struct usb_interface *intf)
 
        /* Init all registers */
        ret = lan78xx_reset(dev);
+       if (ret) {
+               netdev_warn(dev->net, "Registers INIT FAILED....");
+               goto out2;
+       }
 
        ret = lan78xx_mdio_init(dev);
+       if (ret) {
+               netdev_warn(dev->net, "MDIO INIT FAILED.....");
+               goto out2;
+       }
 
        dev->net->flags |= IFF_MULTICAST;
 
        pdata->wol = WAKE_MAGIC;
 
        return ret;
+
+out2:
+       lan78xx_remove_irq_domain(dev);
+
+out1:
+       netdev_warn(dev->net, "Bind routine FAILED");
+       cancel_work_sync(&pdata->set_multicast);
+       cancel_work_sync(&pdata->set_vlan);
+       kfree(pdata);
+       return ret;
 }
 
 static void lan78xx_unbind(struct lan78xx_net *dev, struct usb_interface *intf)
@@ -2891,6 +2908,8 @@ static void lan78xx_unbind(struct lan78xx_net *dev, struct usb_interface *intf)
        lan78xx_remove_mdio(dev);
 
        if (pdata) {
+               cancel_work_sync(&pdata->set_multicast);
+               cancel_work_sync(&pdata->set_vlan);
                netif_dbg(dev, ifdown, dev->net, "free pdata");
                kfree(pdata);
                pdata = NULL;
index 67ecf2425b889ca260c84f7eb18be9f79321e29f..5c6a8ef54aec55334c05608f2dc4b7c5939806b4 100644 (file)
@@ -579,12 +579,13 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
        if (!IS_ERR(neigh)) {
                sock_confirm_neigh(skb, neigh);
                ret = neigh_output(neigh, skb);
+               rcu_read_unlock_bh();
+               return ret;
        }
 
        rcu_read_unlock_bh();
 err:
-       if (unlikely(ret < 0))
-               vrf_tx_error(skb->dev, skb);
+       vrf_tx_error(skb->dev, skb);
        return ret;
 }
 
index ecc96312a370377b1d29bc6f4dd450062a6538b4..6fe0c6abe0d6eca885b9a2be028de258c90f01d8 100644 (file)
@@ -142,15 +142,25 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
        if (!rt2x00dev->ops->hw->set_rts_threshold &&
            (tx_info->control.rates[0].flags & (IEEE80211_TX_RC_USE_RTS_CTS |
                                                IEEE80211_TX_RC_USE_CTS_PROTECT))) {
-               if (rt2x00queue_available(queue) <= 1)
-                       goto exit_fail;
+               if (rt2x00queue_available(queue) <= 1) {
+                       /*
+                        * Recheck for full queue under lock to avoid race
+                        * conditions with rt2x00lib_txdone().
+                        */
+                       spin_lock(&queue->tx_lock);
+                       if (rt2x00queue_threshold(queue))
+                               rt2x00queue_pause_queue(queue);
+                       spin_unlock(&queue->tx_lock);
+
+                       goto exit_free_skb;
+               }
 
                if (rt2x00mac_tx_rts_cts(rt2x00dev, queue, skb))
-                       goto exit_fail;
+                       goto exit_free_skb;
        }
 
        if (unlikely(rt2x00queue_write_tx_frame(queue, skb, control->sta, false)))
-               goto exit_fail;
+               goto exit_free_skb;
 
        /*
         * Pausing queue has to be serialized with rt2x00lib_txdone(). Note
@@ -164,10 +174,6 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
 
        return;
 
- exit_fail:
-       spin_lock(&queue->tx_lock);
-       rt2x00queue_pause_queue(queue);
-       spin_unlock(&queue->tx_lock);
  exit_free_skb:
        ieee80211_free_txskb(hw, skb);
 }
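The rt2x00 change above is a classic recheck-under-lock: the cheap lockless availability test may race with the completion path, so the queue is paused only after the threshold is re-verified while holding the tx lock. A hedged userspace sketch of the shape of that fix (names are illustrative, not the rt2x00 API):

```c
#include <assert.h>
#include <pthread.h>

/* Sketch: pause the queue only when the "queue full" condition still
 * holds under the lock, so a concurrent txdone that just freed entries
 * cannot leave the queue paused forever. */
static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
static int entries_avail;
static int paused;

static int below_threshold(void)
{
	return entries_avail <= 1;
}

static int try_tx(void)
{
	if (below_threshold()) {          /* lockless fast-path check */
		pthread_mutex_lock(&tx_lock);
		if (below_threshold())    /* recheck under the lock */
			paused = 1;
		pthread_mutex_unlock(&tx_lock);
		return -1;                /* caller frees the frame */
	}
	entries_avail--;                  /* frame queued */
	return 0;
}
```

Unconditionally pausing on the unlocked check (as the removed `exit_fail` label did) acts on possibly stale state; the recheck makes the pause decision and the queue state agree.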
index 6d02c660b4ab785db914889c9819691c84b9a372..037defd10b91800a5210c4f115584d30278d9e5f 100644 (file)
@@ -1200,8 +1200,7 @@ static void wl1251_op_bss_info_changed(struct ieee80211_hw *hw,
                WARN_ON(wl->bss_type != BSS_TYPE_STA_BSS);
 
                enable = bss_conf->arp_addr_cnt == 1 && bss_conf->assoc;
-               wl1251_acx_arp_ip_filter(wl, enable, addr);
-
+               ret = wl1251_acx_arp_ip_filter(wl, enable, addr);
                if (ret < 0)
                        goto out_sleep;
        }
index 7b75d9de55ab0d33939bf314a81ee01e26a6d1b1..c0080f6ab2f5b209eb8aeae003df6659f9124f11 100644 (file)
@@ -204,6 +204,10 @@ struct fcloop_lport {
        struct completion unreg_done;
 };
 
+struct fcloop_lport_priv {
+       struct fcloop_lport *lport;
+};
+
 struct fcloop_rport {
        struct nvme_fc_remote_port *remoteport;
        struct nvmet_fc_target_port *targetport;
@@ -370,6 +374,7 @@ fcloop_tgt_fcprqst_done_work(struct work_struct *work)
 
        spin_lock(&tfcp_req->reqlock);
        fcpreq = tfcp_req->fcpreq;
+       tfcp_req->fcpreq = NULL;
        spin_unlock(&tfcp_req->reqlock);
 
        if (tport->remoteport && fcpreq) {
@@ -611,11 +616,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
 
        if (!tfcp_req)
                /* abort has already been called */
-               return;
-
-       if (rport->targetport)
-               nvmet_fc_rcv_fcp_abort(rport->targetport,
-                                       &tfcp_req->tgt_fcp_req);
+               goto finish;
 
        /* break initiator/target relationship for io */
        spin_lock(&tfcp_req->reqlock);
@@ -623,6 +624,11 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
        tfcp_req->fcpreq = NULL;
        spin_unlock(&tfcp_req->reqlock);
 
+       if (rport->targetport)
+               nvmet_fc_rcv_fcp_abort(rport->targetport,
+                                       &tfcp_req->tgt_fcp_req);
+
+finish:
        /* post the aborted io completion */
        fcpreq->status = -ECANCELED;
        schedule_work(&inireq->iniwork);
@@ -657,7 +663,8 @@ fcloop_nport_get(struct fcloop_nport *nport)
 static void
 fcloop_localport_delete(struct nvme_fc_local_port *localport)
 {
-       struct fcloop_lport *lport = localport->private;
+       struct fcloop_lport_priv *lport_priv = localport->private;
+       struct fcloop_lport *lport = lport_priv->lport;
 
        /* release any threads waiting for the unreg to complete */
        complete(&lport->unreg_done);
@@ -697,7 +704,7 @@ static struct nvme_fc_port_template fctemplate = {
        .max_dif_sgl_segments   = FCLOOP_SGL_SEGS,
        .dma_boundary           = FCLOOP_DMABOUND_4G,
        /* sizes of additional private data for data structures */
-       .local_priv_sz          = sizeof(struct fcloop_lport),
+       .local_priv_sz          = sizeof(struct fcloop_lport_priv),
        .remote_priv_sz         = sizeof(struct fcloop_rport),
        .lsrqst_priv_sz         = sizeof(struct fcloop_lsreq),
        .fcprqst_priv_sz        = sizeof(struct fcloop_ini_fcpreq),
@@ -728,11 +735,17 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
        struct fcloop_ctrl_options *opts;
        struct nvme_fc_local_port *localport;
        struct fcloop_lport *lport;
-       int ret;
+       struct fcloop_lport_priv *lport_priv;
+       unsigned long flags;
+       int ret = -ENOMEM;
+
+       lport = kzalloc(sizeof(*lport), GFP_KERNEL);
+       if (!lport)
+               return -ENOMEM;
 
        opts = kzalloc(sizeof(*opts), GFP_KERNEL);
        if (!opts)
-               return -ENOMEM;
+               goto out_free_lport;
 
        ret = fcloop_parse_options(opts, buf);
        if (ret)
@@ -752,23 +765,25 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
 
        ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
        if (!ret) {
-               unsigned long flags;
-
                /* success */
-               lport = localport->private;
+               lport_priv = localport->private;
+               lport_priv->lport = lport;
+
                lport->localport = localport;
                INIT_LIST_HEAD(&lport->lport_list);
 
                spin_lock_irqsave(&fcloop_lock, flags);
                list_add_tail(&lport->lport_list, &fcloop_lports);
                spin_unlock_irqrestore(&fcloop_lock, flags);
-
-               /* mark all of the input buffer consumed */
-               ret = count;
        }
 
 out_free_opts:
        kfree(opts);
+out_free_lport:
+       /* free only if we're going to fail */
+       if (ret)
+               kfree(lport);
+
        return ret ? ret : count;
 }
 
@@ -790,6 +805,8 @@ __wait_localport_unreg(struct fcloop_lport *lport)
 
        wait_for_completion(&lport->unreg_done);
 
+       kfree(lport);
+
        return ret;
 }
 
index 0f3a02495aeb66aae84c31bab5d40278825946f7..beeb7cbb50155821375761bf297ebf69276284ce 100644 (file)
@@ -46,6 +46,9 @@
 #define BYT_TRIG_POS           BIT(25)
 #define BYT_TRIG_LVL           BIT(24)
 #define BYT_DEBOUNCE_EN                BIT(20)
+#define BYT_GLITCH_FILTER_EN   BIT(19)
+#define BYT_GLITCH_F_SLOW_CLK  BIT(17)
+#define BYT_GLITCH_F_FAST_CLK  BIT(16)
 #define BYT_PULL_STR_SHIFT     9
 #define BYT_PULL_STR_MASK      (3 << BYT_PULL_STR_SHIFT)
 #define BYT_PULL_STR_2K                (0 << BYT_PULL_STR_SHIFT)
@@ -1579,6 +1582,9 @@ static int byt_irq_type(struct irq_data *d, unsigned int type)
         */
        value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG |
                   BYT_TRIG_LVL);
+       /* Enable glitch filtering */
+       value |= BYT_GLITCH_FILTER_EN | BYT_GLITCH_F_SLOW_CLK |
+                BYT_GLITCH_F_FAST_CLK;
 
        writel(value, reg);
 
index d51ebd1da65e77ab678d414ce633db39fa04d93e..9dc7590e07cbe97795406b9810544f95faa2ae2a 100644 (file)
@@ -785,6 +785,14 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
        return 0;
 }
 
+static void axp288_charger_cancel_work(void *data)
+{
+       struct axp288_chrg_info *info = data;
+
+       cancel_work_sync(&info->otg.work);
+       cancel_work_sync(&info->cable.work);
+}
+
 static int axp288_charger_probe(struct platform_device *pdev)
 {
        int ret, i, pirq;
@@ -836,6 +844,11 @@ static int axp288_charger_probe(struct platform_device *pdev)
                return ret;
        }
 
+       /* Cancel our work on cleanup, register this before the notifiers */
+       ret = devm_add_action(dev, axp288_charger_cancel_work, info);
+       if (ret)
+               return ret;
+
        /* Register for extcon notification */
        INIT_WORK(&info->cable.work, axp288_charger_extcon_evt_worker);
        info->cable.nb[0].notifier_call = axp288_charger_handle_cable0_evt;
index 0e358d4b67384b1de1a374d7cae9d010ade8387f..8ff9dc3fe5bf06ea15a2321df4a4383d32414527 100644 (file)
@@ -137,13 +137,15 @@ static unsigned long ac100_clkout_recalc_rate(struct clk_hw *hw,
                div = (reg >> AC100_CLKOUT_PRE_DIV_SHIFT) &
                        ((1 << AC100_CLKOUT_PRE_DIV_WIDTH) - 1);
                prate = divider_recalc_rate(hw, prate, div,
-                                           ac100_clkout_prediv, 0);
+                                           ac100_clkout_prediv, 0,
+                                           AC100_CLKOUT_PRE_DIV_WIDTH);
        }
 
        div = (reg >> AC100_CLKOUT_DIV_SHIFT) &
                (BIT(AC100_CLKOUT_DIV_WIDTH) - 1);
        return divider_recalc_rate(hw, prate, div, NULL,
-                                  CLK_DIVIDER_POWER_OF_TWO);
+                                  CLK_DIVIDER_POWER_OF_TWO,
+                                  AC100_CLKOUT_DIV_WIDTH);
 }
 
 static long ac100_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
index f8dc1601efd5f1eb51b4d776087d6ea20534d09e..bddbe2da528340b930e93c49134b9ab7b5c15067 100644 (file)
@@ -1696,6 +1696,15 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
                 */
                switch (session->state) {
                case ISCSI_STATE_FAILED:
+                       /*
+                        * cmds should fail during shutdown, if the session
+                        * state is bad, allowing completion to happen
+                        */
+                       if (unlikely(system_state != SYSTEM_RUNNING)) {
+                               reason = FAILURE_SESSION_FAILED;
+                               sc->result = DID_NO_CONNECT << 16;
+                               break;
+                       }
                case ISCSI_STATE_IN_RECOVERY:
                        reason = FAILURE_SESSION_IN_RECOVERY;
                        sc->result = DID_IMM_RETRY << 16;
@@ -1980,6 +1989,19 @@ enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *sc)
        }
 
        if (session->state != ISCSI_STATE_LOGGED_IN) {
+               /*
+                * During shutdown, if session is prematurely disconnected,
+                * recovery won't happen and there will be hung cmds. Not
+                * handling cmds would trigger EH, also bad in this case.
+                * Instead, handle cmd, allow completion to happen and let
+                * upper layer to deal with the result.
+                */
+               if (unlikely(system_state != SYSTEM_RUNNING)) {
+                       sc->result = DID_NO_CONNECT << 16;
+                       ISCSI_DBG_EH(session, "sc on shutdown, handled\n");
+                       rc = BLK_EH_HANDLED;
+                       goto done;
+               }
                /*
                 * We are probably in the middle of iscsi recovery so let
                 * that complete and handle the error.
@@ -2084,7 +2106,7 @@ done:
                task->last_timeout = jiffies;
        spin_unlock(&session->frwd_lock);
        ISCSI_DBG_EH(session, "return %s\n", rc == BLK_EH_RESET_TIMER ?
-                    "timer reset" : "nh");
+                    "timer reset" : "shutdown or nh");
        return rc;
 }
 EXPORT_SYMBOL_GPL(iscsi_eh_cmd_timed_out);
index 324d8d8c62decb18ad57a9c24781efd10792d8ce..e2ea389fbec3793203ed6346cb6edfff78b957e5 100644 (file)
@@ -293,6 +293,7 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id, void *rsp)
        phy->phy->minimum_linkrate = dr->pmin_linkrate;
        phy->phy->maximum_linkrate = dr->pmax_linkrate;
        phy->phy->negotiated_linkrate = phy->linkrate;
+       phy->phy->enabled = (phy->linkrate != SAS_PHY_DISABLED);
 
  skip:
        if (new_phy)
@@ -686,7 +687,7 @@ int sas_smp_get_phy_events(struct sas_phy *phy)
        res = smp_execute_task(dev, req, RPEL_REQ_SIZE,
                                    resp, RPEL_RESP_SIZE);
 
-       if (!res)
+       if (res)
                goto out;
 
        phy->invalid_dword_count = scsi_to_u32(&resp[12]);
@@ -695,6 +696,7 @@ int sas_smp_get_phy_events(struct sas_phy *phy)
        phy->phy_reset_problem_count = scsi_to_u32(&resp[24]);
 
  out:
+       kfree(req);
        kfree(resp);
        return res;
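The libsas hunk above fixes two things at once: the error check had inverted polarity (`!res` skipped parsing on *success*), and the request buffer was never freed. A hedged userspace sketch of the corrected shape (the helpers here are invented for illustration):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: nonzero result means failure, so counters are parsed only
 * when res == 0, and both buffers are freed on every exit path. */
static int parsed;

static int exec_ok(void)   { return 0; }
static int exec_fail(void) { return 5; }

static int get_phy_events(int (*execute)(void))
{
	char *req = calloc(1, 16);
	char *resp = calloc(1, 32);
	int res = -1;

	if (!req || !resp)
		goto out;

	res = execute();
	if (res)          /* failure: skip parsing, still free both */
		goto out;

	parsed = 1;       /* parse the event counters out of resp here */
out:
	free(req);        /* the missing kfree(req) in the original */
	free(resp);
	return res;
}
```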
 
index e518dadc81615fd4e981b9127e840b9c7adc2d2b..4beb4dd2bee84013d14635c84221c43826913b95 100644 (file)
@@ -6605,7 +6605,6 @@ static void megasas_detach_one(struct pci_dev *pdev)
        u32 pd_seq_map_sz;
 
        instance = pci_get_drvdata(pdev);
-       instance->unload = 1;
        host = instance->host;
        fusion = instance->ctrl_context;
 
@@ -6616,6 +6615,7 @@ static void megasas_detach_one(struct pci_dev *pdev)
        if (instance->fw_crash_state != UNAVAILABLE)
                megasas_free_host_crash_buffer(instance);
        scsi_remove_host(instance->host);
+       instance->unload = 1;
 
        if (megasas_wait_for_adapter_operational(instance))
                goto skip_firing_dcmds;
index ecc699a65bacb455061999f031465efe4be2b830..08945142b9f8ffd6a76096e2951930fdfee7b122 100644 (file)
@@ -168,7 +168,7 @@ static struct MR_LD_SPAN *MR_LdSpanPtrGet(u32 ld, u32 span,
 /*
  * This function will Populate Driver Map using firmware raid map
  */
-void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
+static int MR_PopulateDrvRaidMap(struct megasas_instance *instance)
 {
        struct fusion_context *fusion = instance->ctrl_context;
        struct MR_FW_RAID_MAP_ALL     *fw_map_old    = NULL;
@@ -259,7 +259,7 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
                ld_count = (u16)le16_to_cpu(fw_map_ext->ldCount);
                if (ld_count > MAX_LOGICAL_DRIVES_EXT) {
                        dev_dbg(&instance->pdev->dev, "megaraid_sas: LD count exposed in RAID map in not valid\n");
-                       return;
+                       return 1;
                }
 
                pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
@@ -285,6 +285,12 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
                        fusion->ld_map[(instance->map_id & 1)];
                pFwRaidMap = &fw_map_old->raidMap;
                ld_count = (u16)le32_to_cpu(pFwRaidMap->ldCount);
+               if (ld_count > MAX_LOGICAL_DRIVES) {
+                       dev_dbg(&instance->pdev->dev,
+                               "LD count exposed in RAID map in not valid\n");
+                       return 1;
+               }
+
                pDrvRaidMap->totalSize = pFwRaidMap->totalSize;
                pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
                pDrvRaidMap->fpPdIoTimeoutSec = pFwRaidMap->fpPdIoTimeoutSec;
@@ -300,6 +306,8 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
                        sizeof(struct MR_DEV_HANDLE_INFO) *
                        MAX_RAIDMAP_PHYSICAL_DEVICES);
        }
+
+       return 0;
 }
 
 /*
@@ -317,8 +325,8 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance)
        u16 ld;
        u32 expected_size;
 
-
-       MR_PopulateDrvRaidMap(instance);
+       if (MR_PopulateDrvRaidMap(instance))
+               return 0;
 
        fusion = instance->ctrl_context;
        drv_map = fusion->ld_drv_map[(instance->map_id & 1)];
index beb4bf8fe9b08202ec313fe2af12080a762a634f..139219c994e9cbcc7026a8fabb94318524cbce94 100644 (file)
@@ -4106,19 +4106,6 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
                return 0;
        }
 
-       /*
-        * Bug work around for firmware SATL handling.  The loop
-        * is based on atomic operations and ensures consistency
-        * since we're lockless at this point
-        */
-       do {
-               if (test_bit(0, &sas_device_priv_data->ata_command_pending)) {
-                       scmd->result = SAM_STAT_BUSY;
-                       scmd->scsi_done(scmd);
-                       return 0;
-               }
-       } while (_scsih_set_satl_pending(scmd, true));
-
        sas_target_priv_data = sas_device_priv_data->sas_target;
 
        /* invalid device handle */
@@ -4144,6 +4131,19 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
            sas_device_priv_data->block)
                return SCSI_MLQUEUE_DEVICE_BUSY;
 
+       /*
+        * Bug work around for firmware SATL handling.  The loop
+        * is based on atomic operations and ensures consistency
+        * since we're lockless at this point
+        */
+       do {
+               if (test_bit(0, &sas_device_priv_data->ata_command_pending)) {
+                       scmd->result = SAM_STAT_BUSY;
+                       scmd->scsi_done(scmd);
+                       return 0;
+               }
+       } while (_scsih_set_satl_pending(scmd, true));
+
        if (scmd->sc_data_direction == DMA_FROM_DEVICE)
                mpi_control = MPI2_SCSIIO_CONTROL_READ;
        else if (scmd->sc_data_direction == DMA_TO_DEVICE)
@@ -4170,6 +4170,7 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
        if (!smid) {
                pr_err(MPT3SAS_FMT "%s: failed obtaining a smid\n",
                    ioc->name, __func__);
+               _scsih_set_satl_pending(scmd, false);
                goto out;
        }
        mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
@@ -4200,6 +4201,7 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
        if (mpi_request->DataLength) {
                if (ioc->build_sg_scmd(ioc, scmd, smid)) {
                        mpt3sas_base_free_smid(ioc, smid);
+                       _scsih_set_satl_pending(scmd, false);
                        goto out;
                }
        } else
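The mpt3sas hunks above move the "ATA command pending" flag acquisition after the cheap validity checks and, crucially, clear the flag on every later error exit; otherwise a failed command would leave the flag set and all subsequent ATA commands would see permanent BUSY. A small stand-alone sketch of that flag discipline (names are illustrative):

```c
#include <assert.h>

/* Sketch: a set flag must be released on every error path taken after
 * it was acquired, or later ATA commands stall on a bit nobody owns. */
static unsigned long ata_pending; /* bit 0: an ATA command in flight */

static int set_pending(int is_ata)
{
	if (!is_ata)
		return 0;
	if (ata_pending & 1)
		return -1;       /* already pending */
	ata_pending |= 1;
	return 0;
}

static void clear_pending(int is_ata)
{
	if (is_ata)
		ata_pending &= ~1UL;
}

static int queue_cmd(int is_ata, int alloc_fails)
{
	if (set_pending(is_ata))
		return 1;        /* report BUSY, another ATA cmd in flight */

	if (alloc_fails) {       /* e.g. failed obtaining a smid */
		clear_pending(is_ata);
		return -1;       /* retry later; flag released */
	}
	return 0;                /* completion path clears the flag */
}
```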
index 092a5fc85b9a4e59ccf4d9c83ca319eaa866fd7d..2770fbd4ce49ff1eb211255f5489f1f9536b5f95 100644 (file)
@@ -797,11 +797,21 @@ static int sh_msiof_dma_once(struct sh_msiof_spi_priv *p, const void *tx,
                goto stop_dma;
        }
 
-       /* wait for tx fifo to be emptied / rx fifo to be filled */
+       /* wait for tx/rx DMA completion */
        ret = sh_msiof_wait_for_completion(p);
        if (ret)
                goto stop_reset;
 
+       if (!rx) {
+               reinit_completion(&p->done);
+               sh_msiof_write(p, IER, IER_TEOFE);
+
+               /* wait for tx fifo to be emptied */
+               ret = sh_msiof_wait_for_completion(p);
+               if (ret)
+                       goto stop_reset;
+       }
+
        /* clear status bits */
        sh_msiof_reset_str(p);
 
index 2da051c0d251cb6ba173b863fc33d54a2cb8a05a..a4bb93b440a51aa471e45a671086ed377a984e76 100644 (file)
@@ -528,19 +528,20 @@ EXPORT_SYMBOL(cfs_cpt_spread_node);
 int
 cfs_cpt_current(struct cfs_cpt_table *cptab, int remap)
 {
-       int cpu = smp_processor_id();
-       int cpt = cptab->ctb_cpu2cpt[cpu];
+       int cpu;
+       int cpt;
 
-       if (cpt < 0) {
-               if (!remap)
-                       return cpt;
+       preempt_disable();
+       cpu = smp_processor_id();
+       cpt = cptab->ctb_cpu2cpt[cpu];
 
+       if (cpt < 0 && remap) {
                /* don't return negative value for safety of upper layer,
                 * instead we shadow the unknown cpu to a valid partition ID
                 */
                cpt = cpu % cptab->ctb_nparts;
        }
-
+       preempt_enable();
        return cpt;
 }
 EXPORT_SYMBOL(cfs_cpt_current);
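The cfs_cpt_current() fix above wraps the `smp_processor_id()` read and the table lookup in one `preempt_disable()`/`preempt_enable()` pair, since a preemptible task could migrate between the two and index the table with a stale CPU number. The remap fallback itself is simple enough to sketch in isolation (struct and field names here are simplified stand-ins):

```c
#include <assert.h>

/* Sketch of the remap logic: an unknown CPU is shadowed onto a valid
 * partition ID instead of leaking a negative value to upper layers.
 * In the kernel the CPU read and this lookup must both sit inside the
 * same non-preemptible region. */
#define NR_CPUS 8

struct cpt_table {
	int nparts;
	int cpu2cpt[NR_CPUS]; /* -1 = CPU not mapped to a partition */
};

static int cpt_current(const struct cpt_table *tab, int cpu, int remap)
{
	int cpt = tab->cpu2cpt[cpu];

	if (cpt < 0 && remap)
		cpt = cpu % tab->nparts; /* shadow to a valid partition */
	return cpt;
}
```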
index 942d094269fba5db66ff7e791dcfaab1c6acec15..c4a5fb6f038fcf7567fa8a7bef2fadffb1fc5e25 100644 (file)
@@ -796,6 +796,13 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
                int ret;
                DEFINE_WAIT(__wait);
 
+               /*
+                * Don't leave commands partially setup because the unmap
+                * thread might need the blocks to make forward progress.
+                */
+               tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cur);
+               tcmu_cmd_reset_dbi_cur(tcmu_cmd);
+
                prepare_to_wait(&udev->wait_cmdr, &__wait, TASK_INTERRUPTIBLE);
 
                pr_debug("sleeping for ring space\n");
index 8ee38f55c7f36c5e244711bb45bc3171b4d32ba1..43b90fd577e49d86852760b0b917e0660a473c88 100644 (file)
@@ -319,17 +319,21 @@ static int int3400_thermal_probe(struct platform_device *pdev)
 
        result = sysfs_create_group(&pdev->dev.kobj, &uuid_attribute_group);
        if (result)
-               goto free_zone;
+               goto free_rel_misc;
 
        result = acpi_install_notify_handler(
                        priv->adev->handle, ACPI_DEVICE_NOTIFY, int3400_notify,
                        (void *)priv);
        if (result)
-               goto free_zone;
+               goto free_sysfs;
 
        return 0;
 
-free_zone:
+free_sysfs:
+       sysfs_remove_group(&pdev->dev.kobj, &uuid_attribute_group);
+free_rel_misc:
+       if (!priv->rel_misc_dev_res)
+               acpi_thermal_rel_misc_device_remove(priv->adev->handle);
        thermal_zone_device_unregister(priv->thermal);
 free_art_trt:
        kfree(priv->trts);
index b4d3116cfdafe81767b2b1c91fcab4a034f29041..3055f9a12a17087cfee105400b2cca3e68cbb166 100644 (file)
@@ -523,6 +523,7 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
        struct thermal_instance *instance;
        struct power_allocator_params *params = tz->governor_data;
 
+       mutex_lock(&tz->lock);
        list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
                if ((instance->trip != params->trip_max_desired_temperature) ||
                    (!cdev_is_power_actor(instance->cdev)))
@@ -534,6 +535,7 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
                mutex_unlock(&instance->cdev->lock);
                thermal_cdev_update(instance->cdev);
        }
+       mutex_unlock(&tz->lock);
 }
 
 /**
index 0a3c9665e015492f7fbeae5c33e1900961b817c6..7253e8d2c6d98b2164bb48cf6915406ba80e18ac 100644 (file)
@@ -1463,6 +1463,10 @@ static void gsm_dlci_open(struct gsm_dlci *dlci)
  *     in which case an opening port goes back to closed and a closing port
  *     is simply put into closed state (any further frames from the other
  *     end will get a DM response)
+ *
+ *     Some control dlci can stay in ADM mode with other dlci working just
+ *     fine. In that case we can just keep the control dlci open after the
+ *     DLCI_OPENING retries time out.
  */
 
 static void gsm_dlci_t1(unsigned long data)
@@ -1476,8 +1480,15 @@ static void gsm_dlci_t1(unsigned long data)
                if (dlci->retries) {
                        gsm_command(dlci->gsm, dlci->addr, SABM|PF);
                        mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
-               } else
+               } else if (!dlci->addr && gsm->control == (DM | PF)) {
+                       if (debug & 8)
+                               pr_info("DLCI %d opening in ADM mode.\n",
+                                       dlci->addr);
+                       gsm_dlci_open(dlci);
+               } else {
                        gsm_dlci_close(dlci);
+               }
+
                break;
        case DLCI_CLOSING:
                dlci->retries--;
@@ -1495,8 +1506,8 @@ static void gsm_dlci_t1(unsigned long data)
  *     @dlci: DLCI to open
  *
  *     Commence opening a DLCI from the Linux side. We issue SABM messages
- *     to the modem which should then reply with a UA, at which point we
- *     will move into open state. Opening is done asynchronously with retry
+ *     to the modem which should then reply with a UA or ADM, at which point
+ *     we will move into open state. Opening is done asynchronously with retry
  *     running off timers and the responses.
  */
 
index 48d5327d38d420d84f49d52aa9a904d2ee174d80..fe5cdda80b2ca3b2837e714f5e3f326300e6a397 100644 (file)
@@ -124,6 +124,13 @@ hv_uio_probe(struct hv_device *dev,
        if (ret)
                goto fail;
 
+       /* Communicating with host has to be via shared memory not hypercall */
+       if (!dev->channel->offermsg.monitor_allocated) {
+               dev_err(&dev->device, "vmbus channel requires hypercall\n");
+               ret = -ENOTSUPP;
+               goto fail_close;
+       }
+
        dev->channel->inbound.ring_buffer->interrupt_mask = 1;
        set_channel_read_mode(dev->channel, HV_CALL_DIRECT);
 
index 082891dffd9d573cc671cf12011d6c660df41da1..b0d606b2d06c34e8df9f4d78914ec995ec6a2d95 100644 (file)
@@ -622,7 +622,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
 
        if (!len && vq->busyloop_timeout) {
                /* Both tx vq and rx socket were polled here */
-               mutex_lock(&vq->mutex);
+               mutex_lock_nested(&vq->mutex, 1);
                vhost_disable_notify(&net->dev, vq);
 
                preempt_disable();
@@ -755,7 +755,7 @@ static void handle_rx(struct vhost_net *net)
        struct iov_iter fixup;
        __virtio16 num_buffers;
 
-       mutex_lock(&vq->mutex);
+       mutex_lock_nested(&vq->mutex, 0);
        sock = vq->private_data;
        if (!sock)
                goto out;
index a827c1a684a9ab7e3ba215a044be363ba2b08c47..c692e0b13242e9fdc99dec497755f51b8876bb3d 100644 (file)
@@ -213,8 +213,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
        if (mask)
                vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);
        if (mask & POLLERR) {
-               if (poll->wqh)
-                       remove_wait_queue(poll->wqh, &poll->wait);
+               vhost_poll_stop(poll);
                ret = -EINVAL;
        }
 
@@ -1253,14 +1252,12 @@ static int vq_log_access_ok(struct vhost_virtqueue *vq,
 /* Caller should have vq mutex and device mutex */
 int vhost_vq_access_ok(struct vhost_virtqueue *vq)
 {
-       if (vq->iotlb) {
-               /* When device IOTLB was used, the access validation
-                * will be validated during prefetching.
-                */
-               return 1;
-       }
-       return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used) &&
-               vq_log_access_ok(vq, vq->log_base);
+       int ret = vq_log_access_ok(vq, vq->log_base);
+
+       if (ret || vq->iotlb)
+               return ret;
+
+       return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used);
 }
 EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
 
index d7c239ea3d09f5f47aefca8cd4d9157608f250f3..f5574060f9c82dcb1802059c8bd229aa5f3f122c 100644 (file)
@@ -177,7 +177,7 @@ static int corgi_ssp_lcdtg_send(struct corgi_lcd *lcd, int adrs, uint8_t data)
        struct spi_message msg;
        struct spi_transfer xfer = {
                .len            = 1,
-               .cs_change      = 1,
+               .cs_change      = 0,
                .tx_buf         = lcd->buf,
        };
 
index eab1f842f9c01c1fa74d944e72ec7c6547da12cc..e4bd63e9db6bda265fe9fea5a7f3073bc5661478 100644 (file)
@@ -369,7 +369,7 @@ static int tdo24m_probe(struct spi_device *spi)
 
        spi_message_init(m);
 
-       x->cs_change = 1;
+       x->cs_change = 0;
        x->tx_buf = &lcd->buf[0];
        spi_message_add_tail(x, m);
 
index 6a41ea92737a30590ac1bb5d9bd7e93f683bb4d8..4dc5ee8debeba2fee5f02807f0b8b454f7363ffd 100644 (file)
@@ -49,7 +49,7 @@ static int tosa_tg_send(struct spi_device *spi, int adrs, uint8_t data)
        struct spi_message msg;
        struct spi_transfer xfer = {
                .len            = 1,
-               .cs_change      = 1,
+               .cs_change      = 0,
                .tx_buf         = buf,
        };
 
index da653a080394c9a71ef16d414f2b3ccd5fa6a037..54127905bfe7473d6e034be066291d7fc71f29fe 100644 (file)
@@ -239,8 +239,23 @@ static int vfb_check_var(struct fb_var_screeninfo *var,
  */
 static int vfb_set_par(struct fb_info *info)
 {
+       switch (info->var.bits_per_pixel) {
+       case 1:
+               info->fix.visual = FB_VISUAL_MONO01;
+               break;
+       case 8:
+               info->fix.visual = FB_VISUAL_PSEUDOCOLOR;
+               break;
+       case 16:
+       case 24:
+       case 32:
+               info->fix.visual = FB_VISUAL_TRUECOLOR;
+               break;
+       }
+
        info->fix.line_length = get_line_length(info->var.xres_virtual,
                                                info->var.bits_per_pixel);
+
        return 0;
 }
 
@@ -450,6 +465,8 @@ static int vfb_probe(struct platform_device *dev)
                goto err2;
        platform_set_drvdata(dev, info);
 
+       vfb_set_par(info);
+
        fb_info(info, "Virtual frame buffer device, using %ldK of video memory\n",
                videomemorysize >> 10);
        return 0;
index 36be987ff9efc11290a036740aef16bd0fc96e72..c2f4ff51623015ca32aca20d3ad74ff05fe38e29 100644 (file)
@@ -127,14 +127,27 @@ static int dw_wdt_start(struct watchdog_device *wdd)
 
        dw_wdt_set_timeout(wdd, wdd->timeout);
 
-       set_bit(WDOG_HW_RUNNING, &wdd->status);
-
        writel(WDOG_CONTROL_REG_WDT_EN_MASK,
               dw_wdt->regs + WDOG_CONTROL_REG_OFFSET);
 
        return 0;
 }
 
+static int dw_wdt_stop(struct watchdog_device *wdd)
+{
+       struct dw_wdt *dw_wdt = to_dw_wdt(wdd);
+
+       if (!dw_wdt->rst) {
+               set_bit(WDOG_HW_RUNNING, &wdd->status);
+               return 0;
+       }
+
+       reset_control_assert(dw_wdt->rst);
+       reset_control_deassert(dw_wdt->rst);
+
+       return 0;
+}
+
 static int dw_wdt_restart(struct watchdog_device *wdd,
                          unsigned long action, void *data)
 {
@@ -173,6 +186,7 @@ static const struct watchdog_info dw_wdt_ident = {
 static const struct watchdog_ops dw_wdt_ops = {
        .owner          = THIS_MODULE,
        .start          = dw_wdt_start,
+       .stop           = dw_wdt_stop,
        .ping           = dw_wdt_ping,
        .set_timeout    = dw_wdt_set_timeout,
        .get_timeleft   = dw_wdt_get_timeleft,
index a6e06761da4af44d124dd8caf2b6bea2ba9e5a07..801179170794863348cc0e9f0d582f73089d03e0 100644 (file)
@@ -468,9 +468,11 @@ static void dentry_lru_add(struct dentry *dentry)
  * d_drop() is used mainly for stuff that wants to invalidate a dentry for some
  * reason (NFS timeouts or autofs deletes).
  *
- * __d_drop requires dentry->d_lock.
+ * __d_drop requires dentry->d_lock
+ * ___d_drop doesn't mark dentry as "unhashed"
+ *   (dentry->d_hash.pprev will be LIST_POISON2, not NULL).
  */
-void __d_drop(struct dentry *dentry)
+static void ___d_drop(struct dentry *dentry)
 {
        if (!d_unhashed(dentry)) {
                struct hlist_bl_head *b;
@@ -486,12 +488,17 @@ void __d_drop(struct dentry *dentry)
 
                hlist_bl_lock(b);
                __hlist_bl_del(&dentry->d_hash);
-               dentry->d_hash.pprev = NULL;
                hlist_bl_unlock(b);
                /* After this call, in-progress rcu-walk path lookup will fail. */
                write_seqcount_invalidate(&dentry->d_seq);
        }
 }
+
+void __d_drop(struct dentry *dentry)
+{
+       ___d_drop(dentry);
+       dentry->d_hash.pprev = NULL;
+}
 EXPORT_SYMBOL(__d_drop);
 
 void d_drop(struct dentry *dentry)
@@ -2386,7 +2393,7 @@ EXPORT_SYMBOL(d_delete);
 static void __d_rehash(struct dentry *entry)
 {
        struct hlist_bl_head *b = d_hash(entry->d_name.hash);
-       BUG_ON(!d_unhashed(entry));
+
        hlist_bl_lock(b);
        hlist_bl_add_head_rcu(&entry->d_hash, b);
        hlist_bl_unlock(b);
@@ -2821,9 +2828,9 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
        write_seqcount_begin_nested(&target->d_seq, DENTRY_D_LOCK_NESTED);
 
        /* unhash both */
-       /* __d_drop does write_seqcount_barrier, but they're OK to nest. */
-       __d_drop(dentry);
-       __d_drop(target);
+       /* ___d_drop does write_seqcount_barrier, but they're OK to nest. */
+       ___d_drop(dentry);
+       ___d_drop(target);
 
        /* Switch the names.. */
        if (exchange)
@@ -2835,6 +2842,8 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
        __d_rehash(dentry);
        if (exchange)
                __d_rehash(target);
+       else
+               target->d_hash.pprev = NULL;
 
        /* ... and switch them in the tree */
        if (IS_ROOT(dentry)) {
index 5100ec1b5d559f93b93a12b320feefe148f23b30..86eb33f67618f9a70e7f783cebe81bdbe6d3974e 100644 (file)
@@ -412,7 +412,7 @@ extern const struct clk_ops clk_divider_ro_ops;
 
 unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate,
                unsigned int val, const struct clk_div_table *table,
-               unsigned long flags);
+               unsigned long flags, unsigned long width);
 long divider_round_rate_parent(struct clk_hw *hw, struct clk_hw *parent,
                               unsigned long rate, unsigned long *prate,
                               const struct clk_div_table *table,
index bfb4a9d962a57fd525906596db2d5cd577b845aa..f2f9e957bf1b5e15cef34f84683f57d944f9e8b2 100644 (file)
@@ -794,7 +794,7 @@ struct mlx5_core_dev {
        struct mlx5e_resources  mlx5e_res;
        struct {
                struct mlx5_rsvd_gids   reserved_gids;
-               atomic_t                roce_en;
+               u32                     roce_en;
        } roce;
 #ifdef CONFIG_MLX5_FPGA
        struct mlx5_fpga_device *fpga;
index f7e83f6d2e64ade531f53f366a069c2afcabfdef..236452ebbd9ea68fae4820ae3c2946cac7ada6f1 100644 (file)
@@ -29,6 +29,7 @@
 #include <linux/net_tstamp.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
+#include <linux/phy.h>
 #include <net/arp.h>
 #include <net/switchdev.h>
 
@@ -665,8 +666,11 @@ static int vlan_ethtool_get_ts_info(struct net_device *dev,
 {
        const struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
        const struct ethtool_ops *ops = vlan->real_dev->ethtool_ops;
+       struct phy_device *phydev = vlan->real_dev->phydev;
 
-       if (ops->get_ts_info) {
+       if (phydev && phydev->drv && phydev->drv->ts_info) {
+                return phydev->drv->ts_info(phydev, info);
+       } else if (ops->get_ts_info) {
                return ops->get_ts_info(vlan->real_dev, info);
        } else {
                info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
index 387af3415385927d45b37cbece334c6f204f302f..4be2a4047640a40319294f9b459a053d659cc5ba 100644 (file)
@@ -1025,7 +1025,7 @@ bool dev_valid_name(const char *name)
 {
        if (*name == '\0')
                return false;
-       if (strlen(name) >= IFNAMSIZ)
+       if (strnlen(name, IFNAMSIZ) == IFNAMSIZ)
                return false;
        if (!strcmp(name, ".") || !strcmp(name, ".."))
                return false;
@@ -2696,7 +2696,7 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth)
                if (unlikely(!pskb_may_pull(skb, sizeof(struct ethhdr))))
                        return 0;
 
-               eth = (struct ethhdr *)skb_mac_header(skb);
+               eth = (struct ethhdr *)skb->data;
                type = eth->h_proto;
        }
 
index a1d1f50e0e19c08a2952c7c846c0e0d45c0d66bb..7d9cf26f4bb123630555142f26c68f5118b5cd33 100644 (file)
@@ -437,7 +437,7 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev)
        /*unsigned long now; */
        struct net *net = dev_net(dev);
 
-       rt = ip_route_output(net, sip, tip, 0, 0);
+       rt = ip_route_output(net, sip, tip, 0, l3mdev_master_ifindex_rcu(dev));
        if (IS_ERR(rt))
                return 1;
        if (rt->dst.dev != dev) {
index 1ee6c0d8dde45a1554a709704bafc2c141893ee7..f39955913d3f107515aabbd2a91af9b44b309b41 100644 (file)
@@ -1755,18 +1755,20 @@ void fib_select_multipath(struct fib_result *res, int hash)
        bool first = false;
 
        for_nexthops(fi) {
+               if (net->ipv4.sysctl_fib_multipath_use_neigh) {
+                       if (!fib_good_nh(nh))
+                               continue;
+                       if (!first) {
+                               res->nh_sel = nhsel;
+                               first = true;
+                       }
+               }
+
                if (hash > atomic_read(&nh->nh_upper_bound))
                        continue;
 
-               if (!net->ipv4.sysctl_fib_multipath_use_neigh ||
-                   fib_good_nh(nh)) {
-                       res->nh_sel = nhsel;
-                       return;
-               }
-               if (!first) {
-                       res->nh_sel = nhsel;
-                       first = true;
-               }
+               res->nh_sel = nhsel;
+               return;
        } endfor_nexthops(fi);
 }
 #endif
index 4e90082b23a6ea7e060d9eacca4014455c357a4c..13f7bbc0168d851b2207049156157aadd32c1e25 100644 (file)
@@ -253,13 +253,14 @@ static struct net_device *__ip_tunnel_create(struct net *net,
        struct net_device *dev;
        char name[IFNAMSIZ];
 
-       if (parms->name[0])
+       err = -E2BIG;
+       if (parms->name[0]) {
+               if (!dev_valid_name(parms->name))
+                       goto failed;
                strlcpy(name, parms->name, IFNAMSIZ);
-       else {
-               if (strlen(ops->kind) > (IFNAMSIZ - 3)) {
-                       err = -E2BIG;
+       } else {
+               if (strlen(ops->kind) > (IFNAMSIZ - 3))
                        goto failed;
-               }
                strlcpy(name, ops->kind, IFNAMSIZ);
                strncat(name, "%d", 2);
        }
index e8ab306794d88adadd94b89f7185b57a3b9f9e8a..4228f3b2f34765a01653aac1fb04f0611031c31e 100644 (file)
@@ -319,11 +319,13 @@ static struct ip6_tnl *ip6gre_tunnel_locate(struct net *net,
        if (t || !create)
                return t;
 
-       if (parms->name[0])
+       if (parms->name[0]) {
+               if (!dev_valid_name(parms->name))
+                       return NULL;
                strlcpy(name, parms->name, IFNAMSIZ);
-       else
+       } else {
                strcpy(name, "ip6gre%d");
-
+       }
        dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
                           ip6gre_tunnel_setup);
        if (!dev)
index 3763dc01e37477af36d4f1445d72d452e7528017..ffbb81609016ece8a528d42490edccd7e21ed435 100644 (file)
@@ -138,6 +138,14 @@ static int ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
                return ret;
        }
 
+#if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
+       /* Policy lookup after SNAT yielded a new policy */
+       if (skb_dst(skb)->xfrm) {
+               IPCB(skb)->flags |= IPSKB_REROUTED;
+               return dst_output(net, sk, skb);
+       }
+#endif
+
        if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
            dst_allfrag(skb_dst(skb)) ||
            (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
@@ -367,6 +375,11 @@ static int ip6_forward_proxy_check(struct sk_buff *skb)
 static inline int ip6_forward_finish(struct net *net, struct sock *sk,
                                     struct sk_buff *skb)
 {
+       struct dst_entry *dst = skb_dst(skb);
+
+       __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
+       __IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
+
        return dst_output(net, sk, skb);
 }
 
@@ -560,8 +573,6 @@ int ip6_forward(struct sk_buff *skb)
 
        hdr->hop_limit--;
 
-       __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
-       __IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
        return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
                       net, NULL, skb, skb->dev, dst->dev,
                       ip6_forward_finish);
@@ -1237,7 +1248,7 @@ static int __ip6_append_data(struct sock *sk,
                             const struct sockcm_cookie *sockc)
 {
        struct sk_buff *skb, *skb_prev = NULL;
-       unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu;
+       unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu, pmtu;
        int exthdrlen = 0;
        int dst_exthdrlen = 0;
        int hh_len;
@@ -1273,6 +1284,12 @@ static int __ip6_append_data(struct sock *sk,
                      sizeof(struct frag_hdr) : 0) +
                     rt->rt6i_nfheader_len;
 
+       /* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit
+        * the first fragment
+        */
+       if (headersize + transhdrlen > mtu)
+               goto emsgsize;
+
        if (cork->length + length > mtu - headersize && ipc6->dontfrag &&
            (sk->sk_protocol == IPPROTO_UDP ||
             sk->sk_protocol == IPPROTO_RAW)) {
@@ -1288,9 +1305,8 @@ static int __ip6_append_data(struct sock *sk,
 
        if (cork->length + length > maxnonfragsize - headersize) {
 emsgsize:
-               ipv6_local_error(sk, EMSGSIZE, fl6,
-                                mtu - headersize +
-                                sizeof(struct ipv6hdr));
+               pmtu = max_t(int, mtu - headersize + sizeof(struct ipv6hdr), 0);
+               ipv6_local_error(sk, EMSGSIZE, fl6, pmtu);
                return -EMSGSIZE;
        }
 
index 1161fd5630c18042145f2f96db85010e3b28a720..7e11f6a811f5f0272175904adf97da74c35384b9 100644 (file)
@@ -297,13 +297,16 @@ static struct ip6_tnl *ip6_tnl_create(struct net *net, struct __ip6_tnl_parm *p)
        struct net_device *dev;
        struct ip6_tnl *t;
        char name[IFNAMSIZ];
-       int err = -ENOMEM;
+       int err = -E2BIG;
 
-       if (p->name[0])
+       if (p->name[0]) {
+               if (!dev_valid_name(p->name))
+                       goto failed;
                strlcpy(name, p->name, IFNAMSIZ);
-       else
+       } else {
                sprintf(name, "ip6tnl%%d");
-
+       }
+       err = -ENOMEM;
        dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
                           ip6_tnl_dev_setup);
        if (!dev)
index bcdc2d557de13914b5046dece683614eb5f8f8c8..7c0f647b5195d2d456360701a540e671c537aa45 100644 (file)
@@ -212,10 +212,13 @@ static struct ip6_tnl *vti6_tnl_create(struct net *net, struct __ip6_tnl_parm *p
        char name[IFNAMSIZ];
        int err;
 
-       if (p->name[0])
+       if (p->name[0]) {
+               if (!dev_valid_name(p->name))
+                       goto failed;
                strlcpy(name, p->name, IFNAMSIZ);
-       else
+       } else {
                sprintf(name, "ip6_vti%%d");
+       }
 
        dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN, vti6_dev_setup);
        if (!dev)
index 4b4db63cd21c4a06228383ed51660943c3da5fda..312e6b66ceb533b81cf41773530de5dff982c6bd 100644 (file)
@@ -871,6 +871,9 @@ static struct rt6_info *ip6_pol_route_lookup(struct net *net,
        struct fib6_node *fn;
        struct rt6_info *rt;
 
+       if (fl6->flowi6_flags & FLOWI_FLAG_SKIP_NH_OIF)
+               flags &= ~RT6_LOOKUP_F_IFACE;
+
        read_lock_bh(&table->tb6_lock);
        fn = fib6_lookup(&table->tb6_root, &fl6->daddr, &fl6->saddr);
 restart:
index 7a78dcfda68a17e10e5e951db21d8113c7f65301..f343e6f0fc95a89fa5f386430dc161600367338c 100644 (file)
@@ -16,6 +16,7 @@
 #include <linux/net.h>
 #include <linux/module.h>
 #include <net/ip.h>
+#include <net/ip_tunnels.h>
 #include <net/lwtunnel.h>
 #include <net/netevent.h>
 #include <net/netns/generic.h>
@@ -211,11 +212,6 @@ static int seg6_do_srh(struct sk_buff *skb)
 
        tinfo = seg6_encap_lwtunnel(dst->lwtstate);
 
-       if (likely(!skb->encapsulation)) {
-               skb_reset_inner_headers(skb);
-               skb->encapsulation = 1;
-       }
-
        switch (tinfo->mode) {
        case SEG6_IPTUN_MODE_INLINE:
                if (skb->protocol != htons(ETH_P_IPV6))
@@ -224,10 +220,12 @@ static int seg6_do_srh(struct sk_buff *skb)
                err = seg6_do_srh_inline(skb, tinfo->srh);
                if (err)
                        return err;
-
-               skb_reset_inner_headers(skb);
                break;
        case SEG6_IPTUN_MODE_ENCAP:
+               err = iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6);
+               if (err)
+                       return err;
+
                if (skb->protocol == htons(ETH_P_IPV6))
                        proto = IPPROTO_IPV6;
                else if (skb->protocol == htons(ETH_P_IP))
@@ -239,6 +237,8 @@ static int seg6_do_srh(struct sk_buff *skb)
                if (err)
                        return err;
 
+               skb_set_inner_transport_header(skb, skb_transport_offset(skb));
+               skb_set_inner_protocol(skb, skb->protocol);
                skb->protocol = htons(ETH_P_IPV6);
                break;
        case SEG6_IPTUN_MODE_L2ENCAP:
@@ -262,8 +262,6 @@ static int seg6_do_srh(struct sk_buff *skb)
        ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
        skb_set_transport_header(skb, sizeof(struct ipv6hdr));
 
-       skb_set_inner_protocol(skb, skb->protocol);
-
        return 0;
 }
 
index cac815cc8600873aafa474e180ce30e5770bd1cf..f03c1a56213531faf84a8086c04d1a473d5e1a4d 100644 (file)
@@ -244,11 +244,13 @@ static struct ip_tunnel *ipip6_tunnel_locate(struct net *net,
        if (!create)
                goto failed;
 
-       if (parms->name[0])
+       if (parms->name[0]) {
+               if (!dev_valid_name(parms->name))
+                       goto failed;
                strlcpy(name, parms->name, IFNAMSIZ);
-       else
+       } else {
                strcpy(name, "sit%d");
-
+       }
        dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
                           ipip6_tunnel_setup);
        if (!dev)
index c28223d8092b18b6e54ac9aed3241dd93a951746..fca69c3771f55d7fb743cea0ae5937a7887c0b89 100644 (file)
@@ -765,6 +765,8 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
 
        if ((session->ifname[0] &&
             nla_put_string(skb, L2TP_ATTR_IFNAME, session->ifname)) ||
+           (session->offset &&
+            nla_put_u16(skb, L2TP_ATTR_OFFSET, session->offset)) ||
            (session->cookie_len &&
             nla_put(skb, L2TP_ATTR_COOKIE, session->cookie_len,
                     &session->cookie[0])) ||
index 84f757c5d91a6607d1020b19caa820e535e13633..288640471c2fa427fc19c719f0a8cfe4279e9fad 100644 (file)
@@ -2373,10 +2373,17 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
        struct ieee80211_sub_if_data *sdata;
        enum nl80211_tx_power_setting txp_type = type;
        bool update_txp_type = false;
+       bool has_monitor = false;
 
        if (wdev) {
                sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
 
+               if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+                       sdata = rtnl_dereference(local->monitor_sdata);
+                       if (!sdata)
+                               return -EOPNOTSUPP;
+               }
+
                switch (type) {
                case NL80211_TX_POWER_AUTOMATIC:
                        sdata->user_power_level = IEEE80211_UNSET_POWER_LEVEL;
@@ -2415,15 +2422,34 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
 
        mutex_lock(&local->iflist_mtx);
        list_for_each_entry(sdata, &local->interfaces, list) {
+               if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+                       has_monitor = true;
+                       continue;
+               }
                sdata->user_power_level = local->user_power_level;
                if (txp_type != sdata->vif.bss_conf.txpower_type)
                        update_txp_type = true;
                sdata->vif.bss_conf.txpower_type = txp_type;
        }
-       list_for_each_entry(sdata, &local->interfaces, list)
+       list_for_each_entry(sdata, &local->interfaces, list) {
+               if (sdata->vif.type == NL80211_IFTYPE_MONITOR)
+                       continue;
                ieee80211_recalc_txpower(sdata, update_txp_type);
+       }
        mutex_unlock(&local->iflist_mtx);
 
+       if (has_monitor) {
+               sdata = rtnl_dereference(local->monitor_sdata);
+               if (sdata) {
+                       sdata->user_power_level = local->user_power_level;
+                       if (txp_type != sdata->vif.bss_conf.txpower_type)
+                               update_txp_type = true;
+                       sdata->vif.bss_conf.txpower_type = txp_type;
+
+                       ieee80211_recalc_txpower(sdata, update_txp_type);
+               }
+       }
+
        return 0;
 }
 
index c7f93fd9ca7aea5ffbccf0f54cafdd62a7138084..4d82fe7d627c26f69b9d97c7b9e7f8aa1f2ed8f9 100644 (file)
@@ -165,7 +165,8 @@ static inline void drv_bss_info_changed(struct ieee80211_local *local,
        if (WARN_ON_ONCE(sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE ||
                         sdata->vif.type == NL80211_IFTYPE_NAN ||
                         (sdata->vif.type == NL80211_IFTYPE_MONITOR &&
-                         !sdata->vif.mu_mimo_owner)))
+                         !sdata->vif.mu_mimo_owner &&
+                         !(changed & BSS_CHANGED_TXPOWER))))
                return;
 
        if (!check_sdata_in_driver(sdata))
index 9219bc134109513219b980c0aac86817665a6bc7..1b86eccf94b69f520e000991b70f0e627ee25e6a 100644 (file)
@@ -1053,6 +1053,9 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
        if (addr->sa_family != AF_NETLINK)
                return -EINVAL;
 
+       if (alen < sizeof(struct sockaddr_nl))
+               return -EINVAL;
+
        if ((nladdr->nl_groups || nladdr->nl_pid) &&
            !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))
                return -EPERM;
index 75d43dc8e96b4655becce35dcaa5dd570abf4a41..5aa3a64aa4f0ef254bd31ecbfb11c81db0438185 100644 (file)
@@ -114,6 +114,7 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
                          rs, &addr, (int)ntohs(*port));
                        break;
                } else {
+                       rs->rs_bound_addr = 0;
                        rds_sock_put(rs);
                        ret = -ENOMEM;
                        break;
index 8f2c635149561e741bda88163df063f8cc70f957..4444d7e755e659b715724fdb88c0b6a472eda817 100644 (file)
@@ -133,8 +133,10 @@ static int tcf_dump_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
                        continue;
 
                nest = nla_nest_start(skb, n_i);
-               if (!nest)
+               if (!nest) {
+                       index--;
                        goto nla_put_failure;
+               }
                err = tcf_action_dump_1(skb, p, 0, 0);
                if (err < 0) {
                        index--;
index c0c707eb2c962520fbc6da655c8b4151bf8a4462..2b087623fb1d1ba31d13176c1b38abaac7c3b4f4 100644 (file)
@@ -248,10 +248,14 @@ static int tcf_bpf_init_from_efd(struct nlattr **tb, struct tcf_bpf_cfg *cfg)
 
 static void tcf_bpf_cfg_cleanup(const struct tcf_bpf_cfg *cfg)
 {
-       if (cfg->is_ebpf)
-               bpf_prog_put(cfg->filter);
-       else
-               bpf_prog_destroy(cfg->filter);
+       struct bpf_prog *filter = cfg->filter;
+
+       if (filter) {
+               if (cfg->is_ebpf)
+                       bpf_prog_put(filter);
+               else
+                       bpf_prog_destroy(filter);
+       }
 
        kfree(cfg->bpf_ops);
        kfree(cfg->bpf_name);
index b642ad3d39dd414d392e492bae091995b01cbd77..6d10b3af479b65ecb086306d05f5c0feec4c2cd5 100644 (file)
@@ -190,7 +190,8 @@ static void tcf_skbmod_cleanup(struct tc_action *a, int bind)
        struct tcf_skbmod_params  *p;
 
        p = rcu_dereference_protected(d->skbmod_p, 1);
-       kfree_rcu(p, rcu);
+       if (p)
+               kfree_rcu(p, rcu);
 }
 
 static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
index 22bf1a376b91981d892e061643ebe7ef0c8afd37..7cb63616805d81fb8f8798289bc4e33809510637 100644 (file)
@@ -208,11 +208,12 @@ static void tunnel_key_release(struct tc_action *a, int bind)
        struct tcf_tunnel_key_params *params;
 
        params = rcu_dereference_protected(t->params, 1);
+       if (params) {
+               if (params->tcft_action == TCA_TUNNEL_KEY_ACT_SET)
+                       dst_release(&params->tcft_enc_metadata->dst);
 
-       if (params->tcft_action == TCA_TUNNEL_KEY_ACT_SET)
-               dst_release(&params->tcft_enc_metadata->dst);
-
-       kfree_rcu(params, rcu);
+               kfree_rcu(params, rcu);
+       }
 }
 
 static int tunnel_key_dump_addresses(struct sk_buff *skb,
index f27a9718554c658fc0415e4f6cc21e62a2964c9a..08b5705e73813a189ce89e3b2b24eb3d3ffe29b7 100644 (file)
@@ -728,8 +728,10 @@ static int sctp_v6_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
                        sctp_v6_map_v4(addr);
        }
 
-       if (addr->sa.sa_family == AF_INET)
+       if (addr->sa.sa_family == AF_INET) {
+               memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
                return sizeof(struct sockaddr_in);
+       }
        return sizeof(struct sockaddr_in6);
 }
 
index 6b3a862706de5549bf02a806d64b3accd85165a0..2d6f612f32c3e94a7e74054ceb74b401f528126e 100644 (file)
@@ -337,11 +337,14 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
        if (!opt->pf->af_supported(addr->sa.sa_family, opt))
                return NULL;
 
-       /* V4 mapped address are really of AF_INET family */
-       if (addr->sa.sa_family == AF_INET6 &&
-           ipv6_addr_v4mapped(&addr->v6.sin6_addr) &&
-           !opt->pf->af_supported(AF_INET, opt))
-               return NULL;
+       if (addr->sa.sa_family == AF_INET6) {
+               if (len < SIN6_LEN_RFC2133)
+                       return NULL;
+               /* V4 mapped address are really of AF_INET family */
+               if (ipv6_addr_v4mapped(&addr->v6.sin6_addr) &&
+                   !opt->pf->af_supported(AF_INET, opt))
+                       return NULL;
+       }
 
        /* If we get this far, af is valid. */
        af = sctp_get_af_specific(addr->sa.sa_family);
index c5fda15ba3193f811151043ac3675a2ebfb15c38..4a3a3f1331ee8f24d573069c1296ef7929bb1362 100644 (file)
@@ -60,7 +60,7 @@ static void strp_abort_strp(struct strparser *strp, int err)
                struct sock *sk = strp->sk;
 
                /* Report an error on the lower socket */
-               sk->sk_err = err;
+               sk->sk_err = -err;
                sk->sk_error_report(sk);
        }
 }
@@ -458,7 +458,7 @@ static void strp_msg_timeout(struct work_struct *w)
        /* Message assembly timed out */
        STRP_STATS_INCR(strp->stats.msg_timeouts);
        strp->cb.lock(strp);
-       strp->cb.abort_parser(strp, ETIMEDOUT);
+       strp->cb.abort_parser(strp, -ETIMEDOUT);
        strp->cb.unlock(strp);
 }
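The two strparser hunks flip the sign convention: `abort_parser()` now receives a negative errno (kernel style), while `sk_err` has always held a positive one, so the store negates. `fake_sock` and `report_err()` below are illustrative stand-ins only.

```c
#include <assert.h>
#include <errno.h>

struct fake_sock {
	int sk_err;	/* holds a positive errno, like sk->sk_err */
};

/* err arrives as e.g. -ETIMEDOUT; sk_err wants +ETIMEDOUT */
static void report_err(struct fake_sock *sk, int err)
{
	sk->sk_err = -err;
}
```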
 
index 83d8dda152331054650f6fca0d5d09d160d73108..4eeb9afdc89f9c520097c6c3a8b4bbc9eccc03c2 100644 (file)
@@ -221,7 +221,7 @@ int sst_send_byte_stream_mrfld(struct intel_sst_drv *sst_drv_ctx,
                sst_free_block(sst_drv_ctx, block);
 out:
        test_and_clear_bit(pvt_id, &sst_drv_ctx->pvt_id);
-       return 0;
+       return ret;
 }
 
 /*
index 5bcde01d15e68d31dd6292ba9a7a8deedf7f0161..fbfb76ee23468d259ad61fcc74105da55527b796 100644 (file)
@@ -133,6 +133,7 @@ static const struct snd_soc_dapm_widget cht_dapm_widgets[] = {
        SND_SOC_DAPM_HP("Headphone", NULL),
        SND_SOC_DAPM_MIC("Headset Mic", NULL),
        SND_SOC_DAPM_MIC("Int Mic", NULL),
+       SND_SOC_DAPM_MIC("Int Analog Mic", NULL),
        SND_SOC_DAPM_SPK("Ext Spk", NULL),
        SND_SOC_DAPM_SUPPLY("Platform Clock", SND_SOC_NOPM, 0, 0,
                        platform_clock_control, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
@@ -143,6 +144,8 @@ static const struct snd_soc_dapm_route cht_rt5645_audio_map[] = {
        {"IN1N", NULL, "Headset Mic"},
        {"DMIC L1", NULL, "Int Mic"},
        {"DMIC R1", NULL, "Int Mic"},
+       {"IN2P", NULL, "Int Analog Mic"},
+       {"IN2N", NULL, "Int Analog Mic"},
        {"Headphone", NULL, "HPOL"},
        {"Headphone", NULL, "HPOR"},
        {"Ext Spk", NULL, "SPOL"},
@@ -150,6 +153,9 @@ static const struct snd_soc_dapm_route cht_rt5645_audio_map[] = {
        {"Headphone", NULL, "Platform Clock"},
        {"Headset Mic", NULL, "Platform Clock"},
        {"Int Mic", NULL, "Platform Clock"},
+       {"Int Analog Mic", NULL, "Platform Clock"},
+       {"Int Analog Mic", NULL, "micbias1"},
+       {"Int Analog Mic", NULL, "micbias2"},
        {"Ext Spk", NULL, "Platform Clock"},
 };
 
@@ -204,6 +210,7 @@ static const struct snd_kcontrol_new cht_mc_controls[] = {
        SOC_DAPM_PIN_SWITCH("Headphone"),
        SOC_DAPM_PIN_SWITCH("Headset Mic"),
        SOC_DAPM_PIN_SWITCH("Int Mic"),
+       SOC_DAPM_PIN_SWITCH("Int Analog Mic"),
        SOC_DAPM_PIN_SWITCH("Ext Spk"),
 };
 
index 89f70133c8e4e344ea5f8365dd257a0ef47de56e..b74a6040cd96fb4d23d6f989fc4a48f215f567d5 100644 (file)
@@ -404,7 +404,11 @@ int skl_resume_dsp(struct skl *skl)
        if (skl->skl_sst->is_first_boot == true)
                return 0;
 
+       /* disable dynamic clock gating during fw and lib download */
+       ctx->enable_miscbdcge(ctx->dev, false);
+
        ret = skl_dsp_wake(ctx->dsp);
+       ctx->enable_miscbdcge(ctx->dev, true);
        if (ret < 0)
                return ret;
 
index 2b1e513b1680e4c73130710c318e8fe50855c59c..7fe1e8f273a089a0bde0936a6832bb6dc56abac3 100644 (file)
@@ -1332,7 +1332,11 @@ static int skl_platform_soc_probe(struct snd_soc_platform *platform)
                        return -EIO;
                }
 
+               /* disable dynamic clock gating during fw and lib download */
+               skl->skl_sst->enable_miscbdcge(platform->dev, false);
+
                ret = ops->init_fw(platform->dev, skl->skl_sst);
+               skl->skl_sst->enable_miscbdcge(platform->dev, true);
                if (ret < 0) {
                        dev_err(platform->dev, "Failed to boot first fw: %d\n", ret);
                        return ret;
index 9d01d0b1084e28bd911a36bbfde171b5e1a310c1..c8b8b7101c6f9db8ee914f9f8cfd3bf76a2df6ba 100644 (file)
@@ -1385,6 +1385,17 @@ static int update_insn_state(struct instruction *insn, struct insn_state *state)
                                state->vals[op->dest.reg].offset = -state->stack_size;
                        }
 
+                       else if (op->src.reg == CFI_BP && op->dest.reg == CFI_SP &&
+                                cfa->base == CFI_BP) {
+
+                               /*
+                                * mov %rbp, %rsp
+                                *
+                                * Restore the original stack pointer (Clang).
+                                */
+                               state->stack_size = -state->regs[CFI_BP].offset;
+                       }
+
                        else if (op->dest.reg == cfa->base) {
 
                                /* mov %reg, %rsp */
index 9c4e23d8c8cedbf0c6db4c9c13129b197be7938e..53d83d7e6a096a4d49a0b6f0cbfafc15a27866af 100644 (file)
@@ -64,6 +64,14 @@ int arch__compare_symbol_names_n(const char *namea, const char *nameb,
 
        return strncmp(namea, nameb, n);
 }
+
+const char *arch__normalize_symbol_name(const char *name)
+{
+       /* Skip over initial dot */
+       if (name && *name == '.')
+               name++;
+       return name;
+}
 #endif
 
 #if defined(_CALL_ELF) && _CALL_ELF == 2
index 0c95ffefb6ccdbbee39b49a60f0efd8b3977a234..1957abc1c8cf9bc7dc523e15b4935fc7c5e07aa2 100644 (file)
@@ -1856,8 +1856,8 @@ int cmd_record(int argc, const char **argv)
                goto out;
        }
 
-       /* Enable ignoring missing threads when -u option is defined. */
-       rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX;
+       /* Enable ignoring missing threads when -u/-p option is defined. */
+       rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX || rec->opts.target.pid;
 
        err = -ENOMEM;
        if (perf_evlist__create_maps(rec->evlist, &rec->opts.target) < 0)
index fae4b03407507a5291820ff45dac3ee6c03dfafb..183c3ed56e0862f15ad2ef599b7383f709d77d60 100644 (file)
@@ -162,12 +162,28 @@ static int hist_iter__branch_callback(struct hist_entry_iter *iter,
        struct hist_entry *he = iter->he;
        struct report *rep = arg;
        struct branch_info *bi;
+       struct perf_sample *sample = iter->sample;
+       struct perf_evsel *evsel = iter->evsel;
+       int err;
+
+       if (!ui__has_annotation())
+               return 0;
+
+       hist__account_cycles(sample->branch_stack, al, sample,
+                            rep->nonany_branch_mode);
 
        bi = he->branch_info;
+       err = addr_map_symbol__inc_samples(&bi->from, sample, evsel->idx);
+       if (err)
+               goto out;
+
+       err = addr_map_symbol__inc_samples(&bi->to, sample, evsel->idx);
+
        branch_type_count(&rep->brtype_stat, &bi->flags,
                          bi->from.addr, bi->to.addr);
 
-       return 0;
+out:
+       return err;
 }
 
 static int process_sample_event(struct perf_tool *tool,
index 1f6beb3d0c6808f82b80979b5467ae26e6ee2b30..ac19130c14d8c31590f1d2483f82fc7433242943 100644 (file)
@@ -1591,10 +1591,46 @@ static int __open_attr__fprintf(FILE *fp, const char *name, const char *val,
        return fprintf(fp, "  %-32s %s\n", name, val);
 }
 
+static void perf_evsel__remove_fd(struct perf_evsel *pos,
+                                 int nr_cpus, int nr_threads,
+                                 int thread_idx)
+{
+       for (int cpu = 0; cpu < nr_cpus; cpu++)
+               for (int thread = thread_idx; thread < nr_threads - 1; thread++)
+                       FD(pos, cpu, thread) = FD(pos, cpu, thread + 1);
+}
+
+static int update_fds(struct perf_evsel *evsel,
+                     int nr_cpus, int cpu_idx,
+                     int nr_threads, int thread_idx)
+{
+       struct perf_evsel *pos;
+
+       if (cpu_idx >= nr_cpus || thread_idx >= nr_threads)
+               return -EINVAL;
+
+       evlist__for_each_entry(evsel->evlist, pos) {
+               nr_cpus = pos != evsel ? nr_cpus : cpu_idx;
+
+               perf_evsel__remove_fd(pos, nr_cpus, nr_threads, thread_idx);
+
+               /*
+                * Since the fds for the next evsel have not been created,
+                * there is no need to iterate over the whole event list.
+                */
+               if (pos == evsel)
+                       break;
+       }
+       return 0;
+}
+
 static bool ignore_missing_thread(struct perf_evsel *evsel,
+                                 int nr_cpus, int cpu,
                                  struct thread_map *threads,
                                  int thread, int err)
 {
+       pid_t ignore_pid = thread_map__pid(threads, thread);
+
        if (!evsel->ignore_missing_thread)
                return false;
 
@@ -1610,11 +1646,18 @@ static bool ignore_missing_thread(struct perf_evsel *evsel,
        if (threads->nr == 1)
                return false;
 
+       /*
+        * We should remove the fd for the missing thread first
+        * because thread_map__remove() will decrease threads->nr.
+        */
+       if (update_fds(evsel, nr_cpus, cpu, threads->nr, thread))
+               return false;
+
        if (thread_map__remove(threads, thread))
                return false;
 
        pr_warning("WARNING: Ignored open failure for pid %d\n",
-                  thread_map__pid(threads, thread));
+                  ignore_pid);
        return true;
 }
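`perf_evsel__remove_fd()` above compacts the per-cpu fd table by shifting every entry after the vanished thread one slot to the left. The same operation on a flat int array (`remove_slot()` is a hypothetical stand-in for the `FD()` accessor loop):

```c
#include <assert.h>

/* Shift fds[idx+1..nr-1] down one slot, closing the gap at idx. */
static void remove_slot(int *fds, int nr, int idx)
{
	for (int i = idx; i < nr - 1; i++)
		fds[i] = fds[i + 1];
}
```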
 
@@ -1719,7 +1762,7 @@ retry_open:
                        if (fd < 0) {
                                err = -errno;
 
-                               if (ignore_missing_thread(evsel, threads, thread, err)) {
+                               if (ignore_missing_thread(evsel, cpus->nr, cpu, threads, thread, err)) {
                                        /*
                                         * We just removed 1 thread, so take a step
                                         * back on thread index and lower the upper
index b7aaf9b2294d81224993d220c896a2da43fda308..68786bb7790e68891b296ede25bb2b3b8ea17a0d 100644 (file)
@@ -2625,6 +2625,14 @@ static int get_new_event_name(char *buf, size_t len, const char *base,
 
 out:
        free(nbase);
+
+       /* Final validation */
+       if (ret >= 0 && !is_c_func_name(buf)) {
+               pr_warning("Internal error: \"%s\" is an invalid event name.\n",
+                          buf);
+               ret = -EINVAL;
+       }
+
        return ret;
 }
 
@@ -2792,16 +2800,32 @@ static int find_probe_functions(struct map *map, char *name,
        int found = 0;
        struct symbol *sym;
        struct rb_node *tmp;
+       const char *norm, *ver;
+       char *buf = NULL;
 
        if (map__load(map) < 0)
                return 0;
 
        map__for_each_symbol(map, sym, tmp) {
-               if (strglobmatch(sym->name, name)) {
+               norm = arch__normalize_symbol_name(sym->name);
+               if (!norm)
+                       continue;
+
+               /* We don't care whether it is the default symbol or not */
+               ver = strchr(norm, '@');
+               if (ver) {
+                       buf = strndup(norm, ver - norm);
+                       if (!buf)
+                               return -ENOMEM;
+                       norm = buf;
+               }
+               if (strglobmatch(norm, name)) {
                        found++;
                        if (syms && found < probe_conf.max_probes)
                                syms[found - 1] = sym;
                }
+               if (buf)
+                       zfree(&buf);
        }
 
        return found;
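`find_probe_functions()` above matches a pattern like `malloc` against a versioned symbol such as `malloc@GLIBC_2.2.5` by cutting the name at the `@` with `strndup()`. `strip_version()` is a hypothetical helper showing the same idea:

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Return a freshly allocated copy of sym with any "@version"
 * suffix removed; caller frees. */
static char *strip_version(const char *sym)
{
	const char *ver = strchr(sym, '@');

	return ver ? strndup(sym, (size_t)(ver - sym)) : strdup(sym);
}
```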
@@ -2847,7 +2871,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
         * same name but different addresses, this lists all the symbols.
         */
        num_matched_functions = find_probe_functions(map, pp->function, syms);
-       if (num_matched_functions == 0) {
+       if (num_matched_functions <= 0) {
                pr_err("Failed to find symbol %s in %s\n", pp->function,
                        pev->target ? : "kernel");
                ret = -ENOENT;
index 6492ef38b0907c3e1b3788204fbd2066d5b4d596..4e8dd5fd45fd2876c77450e86b11b092fe5bc769 100644 (file)
@@ -93,6 +93,11 @@ static int prefix_underscores_count(const char *str)
        return tail - str;
 }
 
+const char * __weak arch__normalize_symbol_name(const char *name)
+{
+       return name;
+}
+
 int __weak arch__compare_symbol_names(const char *namea, const char *nameb)
 {
        return strcmp(namea, nameb);
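`arch__normalize_symbol_name()` gains a `__weak` default here: the generic build returns the name unchanged, and an architecture (powerpc, above) can supply a strong override that drops the leading dot. A GCC/Clang sketch of the weak-symbol pattern (`normalize()` is an illustrative name, not the perf API):

```c
#include <assert.h>

/* Weak default: identity.  A strong definition elsewhere in the
 * link wins; with none present, this one is used. */
const char *__attribute__((weak)) normalize(const char *name)
{
	return name;
}
```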
index 6352022593c6b7697f3dc24221961fc8caa516af..698c65e603a8db969cc7509b8dc93267d7a828ea 100644 (file)
@@ -347,6 +347,7 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr);
 void arch__sym_update(struct symbol *s, GElf_Sym *sym);
 #endif
 
+const char *arch__normalize_symbol_name(const char *name);
 #define SYMBOL_A 0
 #define SYMBOL_B 1
 
index 3687b720327ac8ac4b0492cc2d814adbdc20a3ab..cc57c246eadea3be29d9707339a16141e8be118b 100644 (file)
@@ -196,7 +196,7 @@ int copyfile_offset(int ifd, loff_t off_in, int ofd, loff_t off_out, u64 size)
 
                size -= ret;
                off_in += ret;
-               off_out -= ret;
+               off_out += ret;
        }
        munmap(ptr, off_in + size);
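The one-character fix above: after a partial copy, *both* offsets must advance forward (`off_out` was being decremented). `memcpy` stands in for the mmap/write machinery of `copyfile_offset()` in this sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy size bytes from src to dst in chunk-sized pieces,
 * advancing both offsets each iteration. */
static void copy_chunks(const char *src, char *dst, size_t size, size_t chunk)
{
	size_t off_in = 0, off_out = 0;

	while (size) {
		size_t n = size < chunk ? size : chunk;

		memcpy(dst + off_out, src + off_in, n);
		size -= n;
		off_in += n;
		off_out += n;	/* was "off_out -= n" in the buggy code */
	}
}
```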
 
index 3ab6ec4039059cf127345652bccf6c34d5c4d273..e11fe84de0fd92f9123b9bb75c0745123ad43546 100644 (file)
@@ -259,22 +259,28 @@ static int setup_ip6h(struct ipv6hdr *ip6h, uint16_t payload_len)
        return sizeof(*ip6h);
 }
 
-static void setup_sockaddr(int domain, const char *str_addr, void *sockaddr)
+
+static void setup_sockaddr(int domain, const char *str_addr,
+                          struct sockaddr_storage *sockaddr)
 {
        struct sockaddr_in6 *addr6 = (void *) sockaddr;
        struct sockaddr_in *addr4 = (void *) sockaddr;
 
        switch (domain) {
        case PF_INET:
+               memset(addr4, 0, sizeof(*addr4));
                addr4->sin_family = AF_INET;
                addr4->sin_port = htons(cfg_port);
-               if (inet_pton(AF_INET, str_addr, &(addr4->sin_addr)) != 1)
+               if (str_addr &&
+                   inet_pton(AF_INET, str_addr, &(addr4->sin_addr)) != 1)
                        error(1, 0, "ipv4 parse error: %s", str_addr);
                break;
        case PF_INET6:
+               memset(addr6, 0, sizeof(*addr6));
                addr6->sin6_family = AF_INET6;
                addr6->sin6_port = htons(cfg_port);
-               if (inet_pton(AF_INET6, str_addr, &(addr6->sin6_addr)) != 1)
+               if (str_addr &&
+                   inet_pton(AF_INET6, str_addr, &(addr6->sin6_addr)) != 1)
                        error(1, 0, "ipv6 parse error: %s", str_addr);
                break;
        default:
@@ -603,6 +609,7 @@ static void parse_opts(int argc, char **argv)
                                    sizeof(struct tcphdr) -
                                    40 /* max tcp options */;
        int c;
+       char *daddr = NULL, *saddr = NULL;
 
        cfg_payload_len = max_payload_len;
 
@@ -627,7 +634,7 @@ static void parse_opts(int argc, char **argv)
                        cfg_cpu = strtol(optarg, NULL, 0);
                        break;
                case 'D':
-                       setup_sockaddr(cfg_family, optarg, &cfg_dst_addr);
+                       daddr = optarg;
                        break;
                case 'i':
                        cfg_ifindex = if_nametoindex(optarg);
@@ -638,7 +645,7 @@ static void parse_opts(int argc, char **argv)
                        cfg_cork_mixed = true;
                        break;
                case 'p':
-                       cfg_port = htons(strtoul(optarg, NULL, 0));
+                       cfg_port = strtoul(optarg, NULL, 0);
                        break;
                case 'r':
                        cfg_rx = true;
@@ -647,7 +654,7 @@ static void parse_opts(int argc, char **argv)
                        cfg_payload_len = strtoul(optarg, NULL, 0);
                        break;
                case 'S':
-                       setup_sockaddr(cfg_family, optarg, &cfg_src_addr);
+                       saddr = optarg;
                        break;
                case 't':
                        cfg_runtime_ms = 200 + strtoul(optarg, NULL, 10) * 1000;
@@ -660,6 +667,8 @@ static void parse_opts(int argc, char **argv)
                        break;
                }
        }
+       setup_sockaddr(cfg_family, daddr, &cfg_dst_addr);
+       setup_sockaddr(cfg_family, saddr, &cfg_src_addr);
 
        if (cfg_payload_len > max_payload_len)
                error(1, 0, "-s: payload exceeds max (%d)", max_payload_len);