Merge 5.10.42 into android12-5.10
Changes in 5.10.42
ALSA: hda/realtek: the bass speaker can't output sound on Yoga 9i
ALSA: hda/realtek: Headphone volume is controlled by Front mixer
ALSA: hda/realtek: Chain in pop reduction fixup for ThinkStation P340
ALSA: hda/realtek: fix mute/micmute LEDs for HP 855 G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook Fury 15 G8
ALSA: hda/realtek: fix mute/micmute LEDs and speaker for HP Zbook Fury 17 G8
ALSA: usb-audio: scarlett2: Fix device hang with ehci-pci
ALSA: usb-audio: scarlett2: Improve driver startup messages
cifs: set server->cipher_type to AES-128-CCM for SMB3.0
NFSv4: Fix a NULL pointer dereference in pnfs_mark_matching_lsegs_return()
iommu/vt-d: Fix sysfs leak in alloc_iommu()
perf intel-pt: Fix sample instruction bytes
perf intel-pt: Fix transaction abort handling
perf scripts python: exported-sql-viewer.py: Fix copy to clipboard from Top Calls by elapsed Time report
perf scripts python: exported-sql-viewer.py: Fix Array TypeError
perf scripts python: exported-sql-viewer.py: Fix warning display
proc: Check /proc/$pid/attr/ writes against file opener
net: hso: fix control-request directions
net/sched: fq_pie: re-factor fix for fq_pie endless loop
net/sched: fq_pie: fix OOB access in the traffic path
netfilter: nft_set_pipapo_avx2: Add irq_fpu_usable() check, fallback to non-AVX2 version
mac80211: assure all fragments are encrypted
mac80211: prevent mixed key and fragment cache attacks
mac80211: properly handle A-MSDUs that start with an RFC 1042 header
cfg80211: mitigate A-MSDU aggregation attacks
mac80211: drop A-MSDUs on old ciphers
mac80211: add fragment cache to sta_info
mac80211: check defrag PN against current frame
mac80211: prevent attacks on TKIP/WEP as well
mac80211: do not accept/forward invalid EAPOL frames
mac80211: extend protection against mixed key and fragment cache attacks
ath10k: add CCMP PN replay protection for fragmented frames for PCIe
ath10k: drop fragments with multicast DA for PCIe
ath10k: drop fragments with multicast DA for SDIO
ath10k: drop MPDU which has discard flag set by firmware for SDIO
ath10k: Fix TKIP Michael MIC verification for PCIe
ath10k: Validate first subframe of A-MSDU before processing the list
ath11k: Clear the fragment cache during key install
dm snapshot: properly fix a crash when an origin has no snapshots
drm/amd/pm: correct MGpuFanBoost setting
drm/amdgpu/vcn1: add cancel_delayed_work_sync before power gate
drm/amdkfd: correct sienna_cichlid SDMA RLC register offset error
drm/amdgpu/vcn2.0: add cancel_delayed_work_sync before power gate
drm/amdgpu/vcn2.5: add cancel_delayed_work_sync before power gate
drm/amdgpu/jpeg2.0: add cancel_delayed_work_sync before power gate
selftests/gpio: Use TEST_GEN_PROGS_EXTENDED
selftests/gpio: Move include of lib.mk up
selftests/gpio: Fix build when source tree is read only
kgdb: fix gcc-11 warnings harder
Documentation: seccomp: Fix user notification documentation
seccomp: Refactor notification handler to prepare for new semantics
serial: core: fix suspicious security_locked_down() call
misc/uss720: fix memory leak in uss720_probe
thunderbolt: usb4: Fix NVM read buffer bounds and offset issue
thunderbolt: dma_port: Fix NVM read buffer bounds and offset issue
KVM: X86: Fix vCPU preempted state from guest's point of view
KVM: arm64: Prevent mixed-width VM creation
mei: request autosuspend after sending rx flow control
staging: iio: cdc: ad7746: avoid overwrite of num_channels
iio: gyro: fxas21002c: balance runtime power in error path
iio: dac: ad5770r: Put fwnode in error case during ->probe()
iio: adc: ad7768-1: Fix too small buffer passed to iio_push_to_buffers_with_timestamp()
iio: adc: ad7124: Fix unbalanced regulator enable / disable on error.
iio: adc: ad7124: Fix potential overflow due to non sequential channel numbers
iio: adc: ad7923: Fix undersized rx buffer.
iio: adc: ad7793: Add missing error code in ad7793_setup()
iio: adc: ad7192: Avoid disabling a clock that was never enabled.
iio: adc: ad7192: handle regulator voltage error first
serial: 8250: Add UART_BUG_TXRACE workaround for Aspeed VUART
serial: 8250_dw: Add device HID for new AMD UART controller
serial: 8250_pci: Add support for new HPE serial device
serial: 8250_pci: handle FL_NOIRQ board flag
USB: trancevibrator: fix control-request direction
Revert "irqbypass: do not start cons/prod when failed connect"
USB: usbfs: Don't WARN about excessively large memory allocations
drivers: base: Fix device link removal
serial: tegra: Fix a mask operation that is always true
serial: sh-sci: Fix off-by-one error in FIFO threshold register setting
serial: rp2: use 'request_firmware' instead of 'request_firmware_nowait'
USB: serial: ti_usb_3410_5052: add startech.com device id
USB: serial: option: add Telit LE910-S1 compositions 0x7010, 0x7011
USB: serial: ftdi_sio: add IDs for IDS GmbH Products
USB: serial: pl2303: add device id for ADLINK ND-6530 GC
thermal/drivers/intel: Initialize RW trip to THERMAL_TEMP_INVALID
usb: dwc3: gadget: Properly track pending and queued SG
usb: gadget: udc: renesas_usb3: Fix a race in usb3_start_pipen()
usb: typec: mux: Fix matching with typec_altmode_desc
net: usb: fix memory leak in smsc75xx_bind
Bluetooth: cmtp: fix file refcount when cmtp_attach_device fails
fs/nfs: Use fatal_signal_pending instead of signal_pending
NFS: fix an incorrect limit in filelayout_decode_layout()
NFS: Fix an Oopsable condition in __nfs_pageio_add_request()
NFS: Don't corrupt the value of pg_bytes_written in nfs_do_recoalesce()
NFSv4: Fix v4.0/v4.1 SEEK_DATA return -ENOTSUPP when set NFS_V4_2 config
drm/meson: fix shutdown crash when component not probed
net/mlx5e: reset XPS on error flow if netdev isn't registered yet
net/mlx5e: Fix multipath lag activation
net/mlx5e: Fix error path of updating netdev queues
{net,vdpa}/mlx5: Configure interface MAC into mpfs L2 table
net/mlx5e: Fix nullptr in add_vlan_push_action()
net/mlx5: Set reformat action when needed for termination rules
net/mlx5e: Fix null deref accessing lag dev
net/mlx4: Fix EEPROM dump support
net/mlx5: Set term table as an unmanaged flow table
SUNRPC: in case of backlog, hand free slots directly to waiting task
Revert "net:tipc: Fix a double free in tipc_sk_mcast_rcv"
tipc: wait and exit until all work queues are done
tipc: skb_linearize the head skb when reassembling msgs
spi: spi-fsl-dspi: Fix a resource leak in an error handling path
netfilter: flowtable: Remove redundant hw refresh bit
net: dsa: mt7530: fix VLAN traffic leaks
net: dsa: fix a crash if ->get_sset_count() fails
net: dsa: sja1105: update existing VLANs from the bridge VLAN list
net: dsa: sja1105: use 4095 as the private VLAN for untagged traffic
net: dsa: sja1105: error out on unsupported PHY mode
net: dsa: sja1105: add error handling in sja1105_setup()
net: dsa: sja1105: call dsa_unregister_switch when allocating memory fails
net: dsa: sja1105: fix VL lookup command packing for P/Q/R/S
i2c: s3c2410: fix possible NULL pointer deref on read message after write
i2c: mediatek: Disable i2c start_en and clear intr_stat before reset
i2c: i801: Don't generate an interrupt on bus reset
i2c: sh_mobile: Use new clock calculation formulas for RZ/G2E
afs: Fix the nlink handling of dir-over-dir rename
perf jevents: Fix getting maximum number of fds
nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response
mptcp: avoid error message on infinite mapping
mptcp: drop unconditional pr_warn on bad opt
mptcp: fix data stream corruption
platform/x86: hp_accel: Avoid invoking _INI to speed up resume
gpio: cadence: Add missing MODULE_DEVICE_TABLE
Revert "crypto: cavium/nitrox - add an error message to explain the failure of pci_request_mem_regions"
Revert "media: usb: gspca: add a missed check for goto_low_power"
Revert "ALSA: sb: fix a missing check of snd_ctl_add"
Revert "serial: max310x: pass return value of spi_register_driver"
serial: max310x: unregister uart driver in case of failure and abort
Revert "net: fujitsu: fix a potential NULL pointer dereference"
net: fujitsu: fix potential null-ptr-deref
Revert "net/smc: fix a NULL pointer dereference"
net/smc: properly handle workqueue allocation failure
Revert "net: caif: replace BUG_ON with recovery code"
net: caif: remove BUG_ON(dev == NULL) in caif_xmit
Revert "char: hpet: fix a missing check of ioremap"
char: hpet: add checks after calling ioremap
Revert "ALSA: gus: add a check of the status of snd_ctl_add"
Revert "ALSA: usx2y: Fix potential NULL pointer dereference"
Revert "isdn: mISDNinfineon: fix potential NULL pointer dereference"
isdn: mISDNinfineon: check/cleanup ioremap failure correctly in setup_io
Revert "ath6kl: return error code in ath6kl_wmi_set_roam_lrssi_cmd()"
ath6kl: return error code in ath6kl_wmi_set_roam_lrssi_cmd()
Revert "isdn: mISDN: Fix potential NULL pointer dereference of kzalloc"
isdn: mISDN: correctly handle ph_info allocation failure in hfcsusb_ph_info
Revert "dmaengine: qcom_hidma: Check for driver register failure"
dmaengine: qcom_hidma: comment platform_driver_register call
Revert "libertas: add checks for the return value of sysfs_create_group"
libertas: register sysfs groups properly
Revert "ASoC: cs43130: fix a NULL pointer dereference"
ASoC: cs43130: handle errors in cs43130_probe() properly
Revert "media: dvb: Add check on sp8870_readreg"
media: dvb: Add check on sp8870_readreg return
Revert "media: gspca: mt9m111: Check write_bridge for timeout"
media: gspca: mt9m111: Check write_bridge for timeout
Revert "media: gspca: Check the return value of write_bridge for timeout"
media: gspca: properly check for errors in po1030_probe()
Revert "net: liquidio: fix a NULL pointer dereference"
net: liquidio: Add missing null pointer checks
Revert "brcmfmac: add a check for the status of usb_register"
brcmfmac: properly check for bus register errors
btrfs: return whole extents in fiemap
scsi: ufs: ufs-mediatek: Fix power down spec violation
scsi: BusLogic: Fix 64-bit system enumeration error for Buslogic
openrisc: Define memory barrier mb
scsi: pm80xx: Fix drives missing during rmmod/insmod loop
btrfs: release path before starting transaction when cloning inline extent
btrfs: do not BUG_ON in link_to_fixup_dir
platform/x86: hp-wireless: add AMD's hardware id to the supported list
platform/x86: intel_punit_ipc: Append MODULE_DEVICE_TABLE for ACPI
platform/x86: touchscreen_dmi: Add info for the Mediacom Winpad 7.0 W700 tablet
SMB3: incorrect file id in requests compounded with open
drm/amd/display: Disconnect non-DP with no EDID
drm/amd/amdgpu: fix refcount leak
drm/amdgpu: Fix a use-after-free
drm/amd/amdgpu: fix a potential deadlock in gpu reset
drm/amdgpu: stop touching sched.ready in the backend
platform/x86: touchscreen_dmi: Add info for the Chuwi Hi10 Pro (CWI529) tablet
block: fix a race between del_gendisk and BLKRRPART
linux/bits.h: fix compilation error with GENMASK
net: netcp: Fix an error message
net: dsa: fix error code getting shifted with 4 in dsa_slave_get_sset_count
interconnect: qcom: bcm-voter: add a missing of_node_put()
interconnect: qcom: Add missing MODULE_DEVICE_TABLE
ASoC: cs42l42: Regmap must use_single_read/write
net: stmmac: Fix MAC WoL not working if PHY does not support WoL
net: ipa: memory region array is variable size
vfio-ccw: Check initialized flag in cp_init()
spi: Assume GPIO CS active high in ACPI case
net: really orphan skbs tied to closing sk
net: packetmmap: fix only tx timestamp on request
net: fec: fix the potential memory leak in fec_enet_init()
chelsio/chtls: unlock on error in chtls_pt_recvmsg()
net: mdio: thunder: Fix a double free issue in the .remove function
net: mdio: octeon: Fix some double free issues
cxgb4/ch_ktls: Clear resources when pf4 device is removed
openvswitch: meter: fix race when getting now_ms.
tls splice: check SPLICE_F_NONBLOCK instead of MSG_DONTWAIT
net: sched: fix packet stuck problem for lockless qdisc
net: sched: fix tx action rescheduling issue during deactivation
net: sched: fix tx action reschedule issue with stopped queue
net: hso: check for allocation failure in hso_create_bulk_serial_device()
net: bnx2: Fix error return code in bnx2_init_board()
bnxt_en: Include new P5 HV definition in VF check.
bnxt_en: Fix context memory setup for 64K page size.
mld: fix panic in mld_newpack()
net/smc: remove device from smcd_dev_list after failed device_add()
gve: Check TX QPL was actually assigned
gve: Update mgmt_msix_idx if num_ntfy changes
gve: Add NULL pointer checks when freeing irqs.
gve: Upgrade memory barrier in poll routine
gve: Correct SKB queue index validation.
iommu/virtio: Add missing MODULE_DEVICE_TABLE
net: hns3: fix incorrect resp_msg issue
net: hns3: put off calling register_netdev() until client initialize complete
iommu/vt-d: Use user privilege for RID2PASID translation
cxgb4: avoid accessing registers when clearing filters
staging: emxx_udc: fix loop in _nbu2ss_nuke()
ASoC: cs35l33: fix an error code in probe()
bpf, offload: Reorder offload callback 'prepare' in verifier
bpf: Set mac_len in bpf_skb_change_head
ixgbe: fix large MTU request from VF
ASoC: qcom: lpass-cpu: Use optional clk APIs
scsi: libsas: Use _safe() loop in sas_resume_port()
net: lantiq: fix memory corruption in RX ring
ipv6: record frag_max_size in atomic fragments in input path
ALSA: usb-audio: scarlett2: snd_scarlett_gen2_controls_create() can be static
net: ethernet: mtk_eth_soc: Fix packet statistics support for MT7628/88
sch_dsmark: fix a NULL deref in qdisc_reset()
net: hsr: fix mac_len checks
MIPS: alchemy: xxs1500: add gpio-au1000.h header file
MIPS: ralink: export rt_sysc_membase for rt2880_wdt.c
net: zero-initialize tc skb extension on allocation
net: mvpp2: add buffer header handling in RX
i915: fix build warning in intel_dp_get_link_status()
samples/bpf: Consider frame size in tx_only of xdpsock sample
net: hns3: check the return of skb_checksum_help()
bpftool: Add sock_release help info for cgroup attach/prog load command
SUNRPC: More fixes for backlog congestion
Revert "Revert "ALSA: usx2y: Fix potential NULL pointer dereference""
net: hso: bail out on interrupt URB allocation failure
scripts/clang-tools: switch explicitly to Python 3
neighbour: Prevent race condition in neighbour subsystem
usb: core: reduce power-on-good delay time of root hub
Linux 5.10.42
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I05d98d1355a080e0951b4b2ae77f0a9ccb6dfc5d
commit c5d480cd47
263 changed files with 2241 additions and 940 deletions
@@ -250,14 +250,14 @@ Users can read via ``ioctl(SECCOMP_IOCTL_NOTIF_RECV)`` (or ``poll()``) on a
 seccomp notification fd to receive a ``struct seccomp_notif``, which contains
 five members: the input length of the structure, a unique-per-filter ``id``,
 the ``pid`` of the task which triggered this request (which may be 0 if the
-task is in a pid ns not visible from the listener's pid namespace), a ``flags``
-member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing
-whether or not the notification is a result of a non-fatal signal, and the
-``data`` passed to seccomp. Userspace can then make a decision based on this
-information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a
-response, indicating what should be returned to userspace. The ``id`` member of
-``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct
-seccomp_notif``.
+task is in a pid ns not visible from the listener's pid namespace). The
+notification also contains the ``data`` passed to seccomp, and a filters flag.
+The structure should be zeroed out prior to calling the ioctl.
+
+Userspace can then make a decision based on this information about what to do,
+and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
+returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
+be the same ``id`` as in ``struct seccomp_notif``.
 
 It is worth noting that ``struct seccomp_data`` contains the values of register
 arguments to the syscall, but does not contain pointers to memory. The task's
Makefile (2 changes)

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 41
+SUBLEVEL = 42
 EXTRAVERSION =
 NAME = Dare mighty things
 
@@ -463,4 +463,9 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
 }
 
+static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
+{
+	return test_bit(feature, vcpu->arch.features);
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */
@@ -166,6 +166,25 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu *tmp;
+	bool is32bit;
+	int i;
+
+	is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
+	if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
+		return false;
+
+	/* Check that the vcpus are either all 32bit or all 64bit */
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
+			return false;
+	}
+
+	return true;
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
@@ -217,13 +236,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	if (!vcpu_allowed_register_width(vcpu)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
-				ret = -EINVAL;
-				goto out;
-			}
 			pstate = VCPU_RESET_PSTATE_SVC;
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
@@ -18,6 +18,7 @@
 #include <asm/reboot.h>
 #include <asm/setup.h>
 #include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/gpio-au1000.h>
 #include <prom.h>
 
 const char *get_system_type(void)
@@ -8,6 +8,7 @@
 
 #include <linux/io.h>
 #include <linux/clk.h>
+#include <linux/export.h>
 #include <linux/init.h>
 #include <linux/sizes.h>
 #include <linux/of_fdt.h>
@@ -25,6 +26,7 @@
 
 __iomem void *rt_sysc_membase;
 __iomem void *rt_memc_membase;
+EXPORT_SYMBOL_GPL(rt_sysc_membase);
 
 __iomem void *plat_of_remap_node(const char *node)
 {
arch/openrisc/include/asm/barrier.h (new file, 9 additions)

@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_BARRIER_H
+#define __ASM_BARRIER_H
+
+#define mb() asm volatile ("l.msync" ::: "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASM_BARRIER_H */
@@ -3006,6 +3006,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 			st->preempted & KVM_VCPU_FLUSH_TLB);
 		if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
 			kvm_vcpu_flush_tlb_guest(vcpu);
+	} else {
+		st->preempted = 0;
 	}
 
 	vcpu->arch.st.preempted = 0;
@@ -226,6 +226,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
 	{ "AMDI0010", APD_ADDR(wt_i2c_desc) },
 	{ "AMD0020", APD_ADDR(cz_uart_desc) },
 	{ "AMDI0020", APD_ADDR(cz_uart_desc) },
+	{ "AMDI0022", APD_ADDR(cz_uart_desc) },
 	{ "AMD0030", },
 	{ "AMD0040", APD_ADDR(fch_misc_desc)},
 	{ "HYGO0010", APD_ADDR(wt_i2c_desc) },
@@ -191,6 +191,11 @@ int device_links_read_lock_held(void)
 {
 	return srcu_read_lock_held(&device_links_srcu);
 }
+
+static void device_link_synchronize_removal(void)
+{
+	synchronize_srcu(&device_links_srcu);
+}
 #else /* !CONFIG_SRCU */
 static DECLARE_RWSEM(device_links_lock);
 
@@ -221,6 +226,10 @@ int device_links_read_lock_held(void)
 	return lockdep_is_held(&device_links_lock);
 }
 #endif
+
+static inline void device_link_synchronize_removal(void)
+{
+}
 #endif /* !CONFIG_SRCU */
 
 static bool device_is_ancestor(struct device *dev, struct device *target)
@@ -442,8 +451,13 @@ static struct attribute *devlink_attrs[] = {
 };
 ATTRIBUTE_GROUPS(devlink);
 
-static void device_link_free(struct device_link *link)
+static void device_link_release_fn(struct work_struct *work)
 {
+	struct device_link *link = container_of(work, struct device_link, rm_work);
+
+	/* Ensure that all references to the link object have been dropped. */
+	device_link_synchronize_removal();
+
 	while (refcount_dec_not_one(&link->rpm_active))
 		pm_runtime_put(link->supplier);
 
@@ -452,24 +466,19 @@ static void device_link_free(struct device_link *link)
 	kfree(link);
 }
 
-#ifdef CONFIG_SRCU
-static void __device_link_free_srcu(struct rcu_head *rhead)
-{
-	device_link_free(container_of(rhead, struct device_link, rcu_head));
-}
-
 static void devlink_dev_release(struct device *dev)
 {
 	struct device_link *link = to_devlink(dev);
 
-	call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu);
+	INIT_WORK(&link->rm_work, device_link_release_fn);
+	/*
+	 * It may take a while to complete this work because of the SRCU
+	 * synchronization in device_link_release_fn() and if the consumer or
+	 * supplier devices get deleted when it runs, so put it into the "long"
+	 * workqueue.
+	 */
+	queue_work(system_long_wq, &link->rm_work);
 }
-#else
-static void devlink_dev_release(struct device *dev)
-{
-	device_link_free(to_devlink(dev));
-}
-#endif
 
 static struct class devlink_class = {
 	.name = "devlink",
@@ -984,6 +984,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
 		hdp->hd_phys_address = fixmem32->address;
 		hdp->hd_address = ioremap(fixmem32->address,
 						HPET_RANGE_SIZE);
+		if (!hdp->hd_address)
+			return AE_ERROR;
 
 		if (hpet_is_known(hdp)) {
 			iounmap(hdp->hd_address);
@@ -451,7 +451,6 @@ static int nitrox_probe(struct pci_dev *pdev,
 	err = pci_request_mem_regions(pdev, nitrox_driver_name);
 	if (err) {
 		pci_disable_device(pdev);
-		dev_err(&pdev->dev, "Failed to request mem regions!\n");
 		return err;
 	}
 	pci_set_master(pdev);
@@ -418,8 +418,23 @@ static int __init hidma_mgmt_init(void)
 		hidma_mgmt_of_populate_channels(child);
 	}
 #endif
-	return platform_driver_register(&hidma_mgmt_driver);
+	/*
+	 * We do not check for return value here, as it is assumed that
+	 * platform_driver_register must not fail. The reason for this is that
+	 * the (potential) hidma_mgmt_of_populate_channels calls above are not
+	 * cleaned up if it does fail, and to do this work is quite
+	 * complicated. In particular, various calls of of_address_to_resource,
+	 * of_irq_to_resource, platform_device_register_full, of_dma_configure,
+	 * and of_msi_configure which then call other functions and so on, must
+	 * be cleaned up - this is not a trivial exercise.
+	 *
+	 * Currently, this module is not intended to be unloaded, and there is
+	 * no module_exit function defined which does the needed cleanup. For
+	 * this reason, we have to assume success here.
+	 */
+	platform_driver_register(&hidma_mgmt_driver);
+
+	return 0;
 }
 module_init(hidma_mgmt_init);
 MODULE_LICENSE("GPL v2");
@@ -278,6 +278,7 @@ static const struct of_device_id cdns_of_ids[] = {
 	{ .compatible = "cdns,gpio-r1p02" },
 	{ /* sentinel */ },
 };
+MODULE_DEVICE_TABLE(of, cdns_of_ids);
 
 static struct platform_driver cdns_gpio_driver = {
 	.driver = {
@@ -157,16 +157,16 @@ static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev,
 				mmSDMA0_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
 		break;
 	case 1:
-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0,
+		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
 				mmSDMA1_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
 		break;
 	case 2:
-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA2, 0,
-				mmSDMA2_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
+		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+				mmSDMA2_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
 		break;
 	case 3:
-		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA3, 0,
-				mmSDMA3_RLC0_RB_CNTL) - mmSDMA2_RLC0_RB_CNTL;
+		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+				mmSDMA3_RLC0_RB_CNTL) - mmSDMA0_RLC0_RB_CNTL;
 		break;
 	}
 
@@ -451,7 +451,7 @@ static int hqd_sdma_dump_v10_3(struct kgd_dev *kgd,
 			engine_id, queue_id);
 	uint32_t i = 0, reg;
 #undef HQD_N_REGS
-#define HQD_N_REGS (19+6+7+10)
+#define HQD_N_REGS (19+6+7+12)
 
 	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
 	if (*dump == NULL)
@@ -4368,7 +4368,6 @@ out:
 		r = amdgpu_ib_ring_tests(tmp_adev);
 		if (r) {
 			dev_err(tmp_adev->dev, "ib ring test failed (%d).\n", r);
-			r = amdgpu_device_ip_suspend(tmp_adev);
 			need_full_reset = true;
 			r = -EAGAIN;
 			goto end;
@@ -289,10 +289,13 @@ out:
 static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfbdev)
 {
 	struct amdgpu_framebuffer *rfb = &rfbdev->rfb;
+	int i;
 
 	drm_fb_helper_unregister_fbi(&rfbdev->helper);
 
 	if (rfb->base.obj[0]) {
+		for (i = 0; i < rfb->base.format->num_planes; i++)
+			drm_gem_object_put(rfb->base.obj[0]);
 		amdgpufb_destroy_pinned_object(rfb->base.obj[0]);
 		rfb->base.obj[0] = NULL;
 		drm_framebuffer_unregister_private(&rfb->base);
@@ -1381,6 +1381,7 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_bo_device *bdev, struct ttm_tt *
 	if (gtt && gtt->userptr) {
 		amdgpu_ttm_tt_set_user_pages(ttm, NULL);
 		kfree(ttm->sg);
+		ttm->sg = NULL;
 		ttm->page_flags &= ~TTM_PAGE_FLAG_SG;
 		return;
 	}
@@ -172,6 +172,8 @@ static int jpeg_v2_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
+
 	if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
 	      RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
 		jpeg_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
@@ -198,8 +198,6 @@ static int jpeg_v2_5_hw_fini(void *handle)
 		if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
 		    RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS))
 			jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
-
-		ring->sched.ready = false;
 	}
 
 	return 0;
@@ -166,8 +166,6 @@ static int jpeg_v3_0_hw_fini(void *handle)
 	    RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
 		jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
 
-	ring->sched.ready = false;
-
 	return 0;
 }
 
@@ -476,11 +476,6 @@ static void sdma_v5_2_gfx_stop(struct amdgpu_device *adev)
 		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
 		WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
 	}
-
-	sdma0->sched.ready = false;
-	sdma1->sched.ready = false;
-	sdma2->sched.ready = false;
-	sdma3->sched.ready = false;
 }
 
 /**
@@ -232,9 +232,13 @@ static int vcn_v1_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
+
 	if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
-	    RREG32_SOC15(VCN, 0, mmUVD_STATUS))
+	    (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
+	     RREG32_SOC15(VCN, 0, mmUVD_STATUS))) {
 		vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+	}
 
 	return 0;
 }
@@ -262,6 +262,8 @@ static int vcn_v2_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
+
 	if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
 	    (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
 	     RREG32_SOC15(VCN, 0, mmUVD_STATUS)))
@@ -321,6 +321,8 @@ static int vcn_v2_5_hw_fini(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	int i;
 
+	cancel_delayed_work_sync(&adev->vcn.idle_work);
+
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
 			continue;
@@ -346,7 +346,7 @@ static int vcn_v3_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	struct amdgpu_ring *ring;
-	int i, j;
+	int i;
 
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
@@ -361,12 +361,6 @@ static int vcn_v3_0_hw_fini(void *handle)
 			vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
 		}
 	}
-		ring->sched.ready = false;
-
-		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
-			ring = &adev->vcn.inst[i].ring_enc[j];
-			ring->sched.ready = false;
-		}
 	}
 
 	return 0;
@@ -1049,6 +1049,24 @@ static bool dc_link_detect_helper(struct dc_link *link,
 		    dc_is_dvi_signal(link->connector_signal)) {
 			if (prev_sink)
 				dc_sink_release(prev_sink);
 			link_disconnect_sink(link);
 
 			return false;
 		}
+		/*
+		 * Abort detection for DP connectors if we have
+		 * no EDID and connector is active converter
+		 * as there are no display downstream
+		 *
+		 */
+		if (dc_is_dp_sst_signal(link->connector_signal) &&
+				(link->dpcd_caps.dongle_type ==
+					DISPLAY_DONGLE_DP_VGA_CONVERTER ||
+				link->dpcd_caps.dongle_type ==
+					DISPLAY_DONGLE_DP_DVI_CONVERTER)) {
+			if (prev_sink)
+				dc_sink_release(prev_sink);
+			link_disconnect_sink(link);
+
+			return false;
+		}
@@ -2606,6 +2606,8 @@ static ssize_t navi10_get_gpu_metrics(struct smu_context *smu,
 
 static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
 {
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *smc_pptable = table_context->driver_pptable;
 	struct amdgpu_device *adev = smu->adev;
 	uint32_t param = 0;
 
@@ -2613,6 +2615,13 @@ static int navi10_enable_mgpu_fan_boost(struct smu_context *smu)
 	if (adev->asic_type == CHIP_NAVI12)
 		return 0;
 
+	/*
+	 * Skip the MGpuFanBoost setting for those ASICs
+	 * which do not support it
+	 */
+	if (!smc_pptable->MGpuFanBoostLimitRpm)
+		return 0;
+
 	/* Workaround for WS SKU */
 	if (adev->pdev->device == 0x7312 &&
 	    adev->pdev->revision == 0)
@@ -2715,6 +2715,16 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
 
 static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)
 {
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *smc_pptable = table_context->driver_pptable;
+
+	/*
+	 * Skip the MGpuFanBoost setting for those ASICs
+	 * which do not support it
+	 */
+	if (!smc_pptable->MGpuFanBoostLimitRpm)
+		return 0;
+
 	return smu_cmn_send_smc_msg_with_param(smu,
 					       SMU_MSG_SetMGpuFanBoostLimitRpm,
 					       0,
@@ -4136,7 +4136,7 @@ static void chv_dp_post_pll_disable(struct intel_atomic_state *state,
  * link status information
  */
 bool
-intel_dp_get_link_status(struct intel_dp *intel_dp, u8 link_status[DP_LINK_STATUS_SIZE])
+intel_dp_get_link_status(struct intel_dp *intel_dp, u8 *link_status)
 {
 	return drm_dp_dpcd_read(&intel_dp->aux, DP_LANE0_1_STATUS, link_status,
 				DP_LINK_STATUS_SIZE) == DP_LINK_STATUS_SIZE;
@@ -485,11 +485,12 @@ static int meson_probe_remote(struct platform_device *pdev,
 static void meson_drv_shutdown(struct platform_device *pdev)
 {
 	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
-	struct drm_device *drm = priv->drm;
 
-	DRM_DEBUG_DRIVER("\n");
-	drm_kms_helper_poll_fini(drm);
-	drm_atomic_helper_shutdown(drm);
+	if (!priv)
+		return;
+
+	drm_kms_helper_poll_fini(priv->drm);
+	drm_atomic_helper_shutdown(priv->drm);
 }
 
 static int meson_drv_probe(struct platform_device *pdev)
@@ -391,11 +391,9 @@ static int i801_check_post(struct i801_priv *priv, int status)
 		dev_err(&priv->pci_dev->dev, "Transaction timeout\n");
 		/* try to stop the current command */
 		dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n");
-		outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL,
-		       SMBHSTCNT(priv));
+		outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv));
 		usleep_range(1000, 2000);
-		outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL),
-		       SMBHSTCNT(priv));
+		outb_p(0, SMBHSTCNT(priv));
 
 		/* Check if it worked */
 		status = inb_p(SMBHSTSTS(priv));
@@ -478,6 +478,11 @@ static void mtk_i2c_clock_disable(struct mtk_i2c *i2c)
 static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
 {
 	u16 control_reg;
+	u16 intr_stat_reg;
+
+	mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_START);
+	intr_stat_reg = mtk_i2c_readw(i2c, OFFSET_INTR_STAT);
+	mtk_i2c_writew(i2c, intr_stat_reg, OFFSET_INTR_STAT);
 
 	if (i2c->dev_comp->apdma_sync) {
 		writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
@@ -483,7 +483,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
 			 * forces us to send a new START
 			 * when we change direction
 			 */
+			dev_dbg(i2c->dev,
+				"missing START before write->read\n");
 			s3c24xx_i2c_stop(i2c, -EINVAL);
+			break;
 		}
 
 		goto retry_write;
@@ -807,7 +807,7 @@ static const struct sh_mobile_dt_config r8a7740_dt_config = {
 static const struct of_device_id sh_mobile_i2c_dt_ids[] = {
 	{ .compatible = "renesas,iic-r8a73a4", .data = &fast_clock_dt_config },
 	{ .compatible = "renesas,iic-r8a7740", .data = &r8a7740_dt_config },
-	{ .compatible = "renesas,iic-r8a774c0", .data = &fast_clock_dt_config },
+	{ .compatible = "renesas,iic-r8a774c0", .data = &v2_freq_calc_dt_config },
 	{ .compatible = "renesas,iic-r8a7790", .data = &v2_freq_calc_dt_config },
 	{ .compatible = "renesas,iic-r8a7791", .data = &v2_freq_calc_dt_config },
 	{ .compatible = "renesas,iic-r8a7792", .data = &v2_freq_calc_dt_config },
@@ -616,6 +616,13 @@ static int ad7124_of_parse_channel_config(struct iio_dev *indio_dev,
 		if (ret)
 			goto err;
 
+		if (channel >= indio_dev->num_channels) {
+			dev_err(indio_dev->dev.parent,
+				"Channel index >= number of channels\n");
+			ret = -EINVAL;
+			goto err;
+		}
+
 		ret = of_property_read_u32_array(child, "diff-channels",
 						 ain, 2);
 		if (ret)
@@ -707,6 +714,11 @@ static int ad7124_setup(struct ad7124_state *st)
 	return ret;
 }
 
+static void ad7124_reg_disable(void *r)
+{
+	regulator_disable(r);
+}
+
 static int ad7124_probe(struct spi_device *spi)
 {
 	const struct ad7124_chip_info *info;
@@ -752,17 +764,20 @@ static int ad7124_probe(struct spi_device *spi)
 		ret = regulator_enable(st->vref[i]);
 		if (ret)
 			return ret;
+
+		ret = devm_add_action_or_reset(&spi->dev, ad7124_reg_disable,
+					       st->vref[i]);
+		if (ret)
+			return ret;
 	}
 
 	st->mclk = devm_clk_get(&spi->dev, "mclk");
-	if (IS_ERR(st->mclk)) {
-		ret = PTR_ERR(st->mclk);
-		goto error_regulator_disable;
-	}
+	if (IS_ERR(st->mclk))
+		return PTR_ERR(st->mclk);
 
 	ret = clk_prepare_enable(st->mclk);
 	if (ret < 0)
-		goto error_regulator_disable;
+		return ret;
 
 	ret = ad7124_soft_reset(st);
 	if (ret < 0)
@@ -792,11 +807,6 @@ error_remove_trigger:
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 error_clk_disable_unprepare:
 	clk_disable_unprepare(st->mclk);
-error_regulator_disable:
-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
-		if (!IS_ERR_OR_NULL(st->vref[i]))
-			regulator_disable(st->vref[i]);
-	}
 
 	return ret;
 }
@@ -805,17 +815,11 @@ static int ad7124_remove(struct spi_device *spi)
 {
 	struct iio_dev *indio_dev = spi_get_drvdata(spi);
 	struct ad7124_state *st = iio_priv(indio_dev);
-	int i;
 
 	iio_device_unregister(indio_dev);
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 	clk_disable_unprepare(st->mclk);
 
-	for (i = ARRAY_SIZE(st->vref) - 1; i >= 0; i--) {
-		if (!IS_ERR_OR_NULL(st->vref[i]))
-			regulator_disable(st->vref[i]);
-	}
-
 	return 0;
 }
 
@@ -912,7 +912,7 @@ static int ad7192_probe(struct spi_device *spi)
 {
 	struct ad7192_state *st;
 	struct iio_dev *indio_dev;
-	int ret, voltage_uv = 0;
+	int ret;
 
 	if (!spi->irq) {
 		dev_err(&spi->dev, "no IRQ?\n");
@@ -949,15 +949,12 @@ static int ad7192_probe(struct spi_device *spi)
 		goto error_disable_avdd;
 	}
 
-	voltage_uv = regulator_get_voltage(st->avdd);
-
-	if (voltage_uv > 0) {
-		st->int_vref_mv = voltage_uv / 1000;
-	} else {
-		ret = voltage_uv;
+	ret = regulator_get_voltage(st->avdd);
+	if (ret < 0) {
 		dev_err(&spi->dev, "Device tree error, reference voltage undefined\n");
 		goto error_disable_avdd;
 	}
+	st->int_vref_mv = ret / 1000;
 
 	spi_set_drvdata(spi, indio_dev);
 	st->chip_info = of_device_get_match_data(&spi->dev);
@@ -1014,7 +1011,9 @@ static int ad7192_probe(struct spi_device *spi)
 	return 0;
 
 error_disable_clk:
-	clk_disable_unprepare(st->mclk);
+	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
+		clk_disable_unprepare(st->mclk);
 error_remove_trigger:
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 error_disable_dvdd:
@@ -1031,7 +1030,9 @@ static int ad7192_remove(struct spi_device *spi)
 	struct ad7192_state *st = iio_priv(indio_dev);
 
 	iio_device_unregister(indio_dev);
-	clk_disable_unprepare(st->mclk);
+	if (st->clock_sel == AD7192_CLK_EXT_MCLK1_2 ||
+	    st->clock_sel == AD7192_CLK_EXT_MCLK2)
+		clk_disable_unprepare(st->mclk);
 	ad_sd_cleanup_buffer_and_trigger(indio_dev);
 
 	regulator_disable(st->dvdd);
@@ -166,6 +166,10 @@ struct ad7768_state {
 	 * transfer buffers to live in their own cache lines.
 	 */
 	union {
+		struct {
+			__be32 chan;
+			s64 timestamp;
+		} scan;
 		__be32 d32;
 		u8 d8[2];
 	} data ____cacheline_aligned;
@@ -459,11 +463,11 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
 
 	mutex_lock(&st->lock);
 
-	ret = spi_read(st->spi, &st->data.d32, 3);
+	ret = spi_read(st->spi, &st->data.scan.chan, 3);
 	if (ret < 0)
 		goto err_unlock;
 
-	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.d32,
+	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
 					   iio_get_time_ns(indio_dev));
 
 	iio_trigger_notify_done(indio_dev->trig);
@@ -279,6 +279,7 @@ static int ad7793_setup(struct iio_dev *indio_dev,
 	id &= AD7793_ID_MASK;
 
 	if (id != st->chip_info->id) {
+		ret = -ENODEV;
 		dev_err(&st->sd.spi->dev, "device ID query failed\n");
 		goto out;
 	}
@@ -59,8 +59,10 @@ struct ad7923_state {
 	/*
 	 * DMA (thus cache coherency maintenance) requires the
 	 * transfer buffers to live in their own cache lines.
+	 * Ensure rx_buf can be directly used in iio_push_to_buffers_with_timetamp
+	 * Length = 8 channels + 4 extra for 8 byte timestamp
 	 */
-	__be16 rx_buf[4] ____cacheline_aligned;
+	__be16 rx_buf[12] ____cacheline_aligned;
 	__be16 tx_buf[4];
 };
 
@@ -524,23 +524,29 @@ static int ad5770r_channel_config(struct ad5770r_state *st)
 	device_for_each_child_node(&st->spi->dev, child) {
 		ret = fwnode_property_read_u32(child, "num", &num);
 		if (ret)
-			return ret;
-		if (num >= AD5770R_MAX_CHANNELS)
-			return -EINVAL;
+			goto err_child_out;
+		if (num >= AD5770R_MAX_CHANNELS) {
+			ret = -EINVAL;
+			goto err_child_out;
+		}
 
 		ret = fwnode_property_read_u32_array(child,
 						     "adi,range-microamp",
 						     tmp, 2);
 		if (ret)
-			return ret;
+			goto err_child_out;
 
 		min = tmp[0] / 1000;
 		max = tmp[1] / 1000;
 		ret = ad5770r_store_output_range(st, min, max, num);
 		if (ret)
-			return ret;
+			goto err_child_out;
 	}
 
 	return 0;
+
+err_child_out:
+	fwnode_handle_put(child);
+	return ret;
 }
@@ -399,6 +399,7 @@ static int fxas21002c_temp_get(struct fxas21002c_data *data, int *val)
 	ret = regmap_field_read(data->regmap_fields[F_TEMP], &temp);
 	if (ret < 0) {
 		dev_err(dev, "failed to read temp: %d\n", ret);
+		fxas21002c_pm_put(data);
 		goto data_unlock;
 	}
 
@@ -432,6 +433,7 @@ static int fxas21002c_axis_get(struct fxas21002c_data *data,
 			       &axis_be, sizeof(axis_be));
 	if (ret < 0) {
 		dev_err(dev, "failed to read axis: %d: %d\n", index, ret);
+		fxas21002c_pm_put(data);
 		goto data_unlock;
 	}
 
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
 */
 
 #include <asm/div64.h>
@@ -212,6 +212,7 @@ struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name)
 	}
 	mutex_unlock(&bcm_voter_lock);
 
+	of_node_put(node);
 	return voter;
 }
 EXPORT_SYMBOL_GPL(of_bcm_voter_get);
@@ -369,6 +370,7 @@ static const struct of_device_id bcm_voter_of_match[] = {
 	{ .compatible = "qcom,bcm-voter" },
 	{ }
 };
+MODULE_DEVICE_TABLE(of, bcm_voter_of_match);
 
 static struct platform_driver qcom_icc_bcm_voter_driver = {
 	.probe = qcom_icc_bcm_voter_probe,
@@ -1137,7 +1137,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 
 		err = iommu_device_register(&iommu->iommu);
 		if (err)
-			goto err_unmap;
+			goto err_sysfs;
 	}
 
 	drhd->iommu = iommu;
@@ -1145,6 +1145,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 
 	return 0;
 
+err_sysfs:
+	iommu_device_sysfs_remove(&iommu->iommu);
 err_unmap:
 	unmap_iommu(iommu);
 error_free_seq_id:
@@ -2606,9 +2606,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
 					struct device *dev,
 					u32 pasid)
 {
-	int flags = PASID_FLAG_SUPERVISOR_MODE;
 	struct dma_pte *pgd = domain->pgd;
 	int agaw, level;
+	int flags = 0;
 
 	/*
 	 * Skip top levels of page tables for iommu which has
@@ -2624,7 +2624,10 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
 	if (level != 4 && level != 5)
 		return -EINVAL;
 
-	flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
+	if (pasid != PASID_RID2PASID)
+		flags |= PASID_FLAG_SUPERVISOR_MODE;
+	if (level == 5)
+		flags |= PASID_FLAG_FL5LP;
 
 	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
 		flags |= PASID_FLAG_PAGE_SNOOP;
@@ -677,7 +677,8 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
 	 * Since it is a second level only translation setup, we should
 	 * set SRE bit as well (addresses are expected to be GPAs).
 	 */
-	pasid_set_sre(pte);
+	if (pasid != PASID_RID2PASID)
+		pasid_set_sre(pte);
 	pasid_set_present(pte);
 	pasid_flush_caches(iommu, pte, pasid, did);
 
@@ -1138,6 +1138,7 @@ static struct virtio_device_id id_table[] = {
 	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
 	{ 0 },
 };
+MODULE_DEVICE_TABLE(virtio, id_table);
 
 static struct virtio_driver virtio_iommu_drv = {
 	.driver.name = KBUILD_MODNAME,
@@ -46,7 +46,7 @@ static void hfcsusb_start_endpoint(struct hfcsusb *hw, int channel);
 static void hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel);
 static int hfcsusb_setup_bch(struct bchannel *bch, int protocol);
 static void deactivate_bchannel(struct bchannel *bch);
-static void hfcsusb_ph_info(struct hfcsusb *hw);
+static int hfcsusb_ph_info(struct hfcsusb *hw);
 
 /* start next background transfer for control channel */
 static void
@@ -241,7 +241,7 @@ hfcusb_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb)
  * send full D/B channel status information
  * as MPH_INFORMATION_IND
  */
-static void
+static int
 hfcsusb_ph_info(struct hfcsusb *hw)
 {
 	struct ph_info *phi;
@@ -250,7 +250,7 @@ hfcsusb_ph_info(struct hfcsusb *hw)
 
 	phi = kzalloc(struct_size(phi, bch, dch->dev.nrbchan), GFP_ATOMIC);
 	if (!phi)
-		return;
+		return -ENOMEM;
 
 	phi->dch.ch.protocol = hw->protocol;
 	phi->dch.ch.Flags = dch->Flags;
@@ -263,6 +263,8 @@ hfcsusb_ph_info(struct hfcsusb *hw)
 	_queue_data(&dch->dev.D, MPH_INFORMATION_IND, MISDN_ID_ANY,
 		    struct_size(phi, bch, dch->dev.nrbchan), phi, GFP_ATOMIC);
 	kfree(phi);
+
+	return 0;
 }
 
 /*
@@ -347,8 +349,7 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
 		ret = l1_event(dch->l1, hh->prim);
 		break;
 	case MPH_INFORMATION_REQ:
-		hfcsusb_ph_info(hw);
-		ret = 0;
+		ret = hfcsusb_ph_info(hw);
 		break;
 	}
 
@@ -403,8 +404,7 @@ hfc_l1callback(struct dchannel *dch, u_int cmd)
 			    hw->name, __func__, cmd);
 		return -1;
 	}
-	hfcsusb_ph_info(hw);
-	return 0;
+	return hfcsusb_ph_info(hw);
 }
 
 static int
@@ -746,8 +746,7 @@ hfcsusb_setup_bch(struct bchannel *bch, int protocol)
 		handle_led(hw, (bch->nr == 1) ? LED_B1_OFF :
 			   LED_B2_OFF);
 	}
-	hfcsusb_ph_info(hw);
-	return 0;
+	return hfcsusb_ph_info(hw);
 }
 
 static void
@@ -630,17 +630,19 @@ static void
 release_io(struct inf_hw *hw)
 {
 	if (hw->cfg.mode) {
-		if (hw->cfg.p) {
+		if (hw->cfg.mode == AM_MEMIO) {
 			release_mem_region(hw->cfg.start, hw->cfg.size);
-			iounmap(hw->cfg.p);
+			if (hw->cfg.p)
+				iounmap(hw->cfg.p);
 		} else
 			release_region(hw->cfg.start, hw->cfg.size);
 		hw->cfg.mode = AM_NONE;
 	}
 	if (hw->addr.mode) {
-		if (hw->addr.p) {
+		if (hw->addr.mode == AM_MEMIO) {
 			release_mem_region(hw->addr.start, hw->addr.size);
-			iounmap(hw->addr.p);
+			if (hw->addr.p)
+				iounmap(hw->addr.p);
 		} else
 			release_region(hw->addr.start, hw->addr.size);
 		hw->addr.mode = AM_NONE;
@@ -670,9 +672,12 @@ setup_io(struct inf_hw *hw)
 			       (ulong)hw->cfg.start, (ulong)hw->cfg.size);
 			return err;
 		}
-		if (hw->ci->cfg_mode == AM_MEMIO)
-			hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
 		hw->cfg.mode = hw->ci->cfg_mode;
+		if (hw->ci->cfg_mode == AM_MEMIO) {
+			hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
+			if (!hw->cfg.p)
+				return -ENOMEM;
+		}
 		if (debug & DEBUG_HW)
 			pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n",
 				  hw->name, (ulong)hw->cfg.start,
@@ -697,12 +702,12 @@ setup_io(struct inf_hw *hw)
 			       (ulong)hw->addr.start, (ulong)hw->addr.size);
 			return err;
 		}
+		hw->addr.mode = hw->ci->addr_mode;
 		if (hw->ci->addr_mode == AM_MEMIO) {
 			hw->addr.p = ioremap(hw->addr.start, hw->addr.size);
-			if (unlikely(!hw->addr.p))
+			if (!hw->addr.p)
 				return -ENOMEM;
 		}
-		hw->addr.mode = hw->ci->addr_mode;
 		if (debug & DEBUG_HW)
 			pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n",
 				  hw->name, (ulong)hw->addr.start,
@@ -854,7 +854,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new)
 static uint32_t __minimum_chunk_size(struct origin *o)
 {
 	struct dm_snapshot *snap;
-	unsigned chunk_size = 0;
+	unsigned chunk_size = rounddown_pow_of_two(UINT_MAX);
 
 	if (o)
 		list_for_each_entry(snap, &o->snapshots, list)
@@ -281,7 +281,7 @@ static int sp8870_set_frontend_parameters(struct dvb_frontend *fe)
 
 	// read status reg in order to clear pending irqs
 	err = sp8870_readreg(state, 0x200);
-	if (err)
+	if (err < 0)
 		return err;
 
 	// system controller start
@@ -1424,7 +1424,6 @@ static int sd_config(struct gspca_dev *gspca_dev,
 {
 	struct sd *sd = (struct sd *) gspca_dev;
 	struct cam *cam;
-	int ret;
 
 	sd->mainsFreq = FREQ_DEF == V4L2_CID_POWER_LINE_FREQUENCY_60HZ;
 	reset_camera_params(gspca_dev);
@@ -1436,10 +1435,7 @@ static int sd_config(struct gspca_dev *gspca_dev,
 	cam->cam_mode = mode;
 	cam->nmodes = ARRAY_SIZE(mode);
 
-	ret = goto_low_power(gspca_dev);
-	if (ret)
-		gspca_err(gspca_dev, "Cannot go to low power mode: %d\n",
-			  ret);
+	goto_low_power(gspca_dev);
 	/* Check the firmware version. */
 	sd->params.version.firmwareVersion = 0;
 	get_version_information(gspca_dev);
@@ -195,7 +195,7 @@ static const struct v4l2_ctrl_config mt9m111_greenbal_cfg = {
 int mt9m111_probe(struct sd *sd)
 {
 	u8 data[2] = {0x00, 0x00};
-	int i, rc = 0;
+	int i, err;
 	struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
 
 	if (force_sensor) {
@@ -213,18 +213,18 @@ int mt9m111_probe(struct sd *sd)
 	/* Do the preinit */
 	for (i = 0; i < ARRAY_SIZE(preinit_mt9m111); i++) {
 		if (preinit_mt9m111[i][0] == BRIDGE) {
-			rc |= m5602_write_bridge(sd,
-				preinit_mt9m111[i][1],
-				preinit_mt9m111[i][2]);
+			err = m5602_write_bridge(sd,
+				preinit_mt9m111[i][1],
+				preinit_mt9m111[i][2]);
 		} else {
 			data[0] = preinit_mt9m111[i][2];
 			data[1] = preinit_mt9m111[i][3];
-			rc |= m5602_write_sensor(sd,
-				preinit_mt9m111[i][1], data, 2);
+			err = m5602_write_sensor(sd,
+				preinit_mt9m111[i][1], data, 2);
 		}
+		if (err < 0)
+			return err;
 	}
-	if (rc < 0)
-		return rc;
 
 	if (m5602_read_sensor(sd, MT9M111_SC_CHIPVER, data, 2))
 		return -ENODEV;
@@ -154,8 +154,8 @@ static const struct v4l2_ctrl_config po1030_greenbal_cfg = {
 
 int po1030_probe(struct sd *sd)
 {
-	int rc = 0;
 	u8 dev_id_h = 0, i;
+	int err;
 	struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
 
 	if (force_sensor) {
@@ -174,14 +174,14 @@ int po1030_probe(struct sd *sd)
 	for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) {
 		u8 data = preinit_po1030[i][2];
 		if (preinit_po1030[i][0] == SENSOR)
-			rc |= m5602_write_sensor(sd,
-				preinit_po1030[i][1], &data, 1);
+			err = m5602_write_sensor(sd, preinit_po1030[i][1],
+						 &data, 1);
 		else
-			rc |= m5602_write_bridge(sd, preinit_po1030[i][1],
-				data);
+			err = m5602_write_bridge(sd, preinit_po1030[i][1],
+						 data);
+		if (err < 0)
+			return err;
 	}
-	if (rc < 0)
-		return rc;
 
 	if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1))
 		return -ENODEV;
@@ -100,8 +100,9 @@
 		printk(KERN_INFO a);	\
 	} while (0)
 #define v2printk(a...) do {		\
-	if (verbose > 1)		\
+	if (verbose > 1) {		\
 		printk(KERN_INFO a);	\
+	}				\
 	touch_nmi_watchdog();		\
 } while (0)
 #define eprintk(a...) do {		\
@@ -271,6 +271,7 @@ struct lis3lv02d {
 	int			regs_size;
 	u8			*reg_cache;
 	bool			regs_stored;
+	bool			init_required;
 	u8			odr_mask;  /* ODR bit mask */
 	u8			whoami;    /* indicates measurement precision */
 	s16 (*read_data) (struct lis3lv02d *lis3, int reg);
@@ -277,6 +277,9 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb,
 		return ret;
 	}
 
+	pm_runtime_mark_last_busy(dev->dev);
+	pm_request_autosuspend(dev->dev);
+
 	list_move_tail(&cb->list, &cl->rd_pending);
 
 	return 0;
@@ -270,9 +270,6 @@ static netdev_tx_t caif_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ser_device *ser;
 
-	if (WARN_ON(!dev))
-		return -EINVAL;
-
 	ser = netdev_priv(dev);
 
 	/* Send flow off once, on high water mark */
@@ -1128,14 +1128,6 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
 {
 	struct mt7530_priv *priv = ds->priv;
 
-	/* The real fabric path would be decided on the membership in the
-	 * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS
-	 * means potential VLAN can be consisting of certain subset of all
-	 * ports.
-	 */
-	mt7530_rmw(priv, MT7530_PCR_P(port),
-		   PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
-
 	/* Trapped into security mode allows packet forwarding through VLAN
 	 * table lookup. CPU port is set to fallback mode to let untagged
 	 * frames pass through.
@@ -167,9 +167,10 @@ enum sja1105_hostcmd {
 	SJA1105_HOSTCMD_INVALIDATE = 4,
 };
 
+/* Command and entry overlap */
 static void
-sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
-			      enum packing_op op)
+sja1105et_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				enum packing_op op)
 {
 	const int size = SJA1105_SIZE_DYN_CMD;
 
@@ -179,6 +180,20 @@ sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
 	sja1105_packing(buf, &cmd->index, 9, 0, size, op);
 }
 
+/* Command and entry are separate */
+static void
+sja1105pqrs_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
+				  enum packing_op op)
+{
+	u8 *p = buf + SJA1105_SIZE_VL_LOOKUP_ENTRY;
+	const int size = SJA1105_SIZE_DYN_CMD;
+
+	sja1105_packing(p, &cmd->valid, 31, 31, size, op);
+	sja1105_packing(p, &cmd->errors, 30, 30, size, op);
+	sja1105_packing(p, &cmd->rdwrset, 29, 29, size, op);
+	sja1105_packing(p, &cmd->index, 9, 0, size, op);
+}
+
 static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr,
 						enum packing_op op)
 {
@@ -641,7 +656,7 @@ static size_t sja1105pqrs_cbs_entry_packing(void *buf, void *entry_ptr,
 const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
 	[BLK_IDX_VL_LOOKUP] = {
 		.entry_packing = sja1105et_vl_lookup_entry_packing,
-		.cmd_packing = sja1105_vl_lookup_cmd_packing,
+		.cmd_packing = sja1105et_vl_lookup_cmd_packing,
		.access = OP_WRITE,
 		.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
 		.packed_size = SJA1105ET_SIZE_VL_LOOKUP_DYN_CMD,
@@ -725,7 +740,7 @@ const struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
 const struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
 	[BLK_IDX_VL_LOOKUP] = {
 		.entry_packing = sja1105_vl_lookup_entry_packing,
-		.cmd_packing = sja1105_vl_lookup_cmd_packing,
+		.cmd_packing = sja1105pqrs_vl_lookup_cmd_packing,
 		.access = (OP_READ | OP_WRITE),
 		.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
 		.packed_size = SJA1105PQRS_SIZE_VL_LOOKUP_DYN_CMD,
@@ -25,6 +25,8 @@
 #include "sja1105_sgmii.h"
 #include "sja1105_tas.h"
 
+#define SJA1105_DEFAULT_VLAN	(VLAN_N_VID - 1)
+
 static const struct dsa_switch_ops sja1105_switch_ops;
 
 static void sja1105_hw_reset(struct gpio_desc *gpio, unsigned int pulse_len,
@@ -204,6 +206,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
 		default:
 			dev_err(dev, "Unsupported PHY mode %s!\n",
 				phy_modes(ports[i].phy_mode));
+			return -EINVAL;
 		}
 
 		/* Even though the SerDes port is able to drive SGMII autoneg
@@ -292,6 +295,13 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
 	return 0;
 }
 
+/* Set up a default VLAN for untagged traffic injected from the CPU
+ * using management routes (e.g. STP, PTP) as opposed to tag_8021q.
+ * All DT-defined ports are members of this VLAN, and there are no
+ * restrictions on forwarding (since the CPU selects the destination).
+ * Frames from this VLAN will always be transmitted as untagged, and
+ * neither the bridge nor the 8021q module cannot create this VLAN ID.
+ */
 static int sja1105_init_static_vlan(struct sja1105_private *priv)
 {
 	struct sja1105_table *table;
@@ -301,17 +311,13 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
 		.vmemb_port = 0,
 		.vlan_bc = 0,
 		.tag_port = 0,
-		.vlanid = 1,
+		.vlanid = SJA1105_DEFAULT_VLAN,
 	};
 	struct dsa_switch *ds = priv->ds;
 	int port;
 
 	table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
 
-	/* The static VLAN table will only contain the initial pvid of 1.
-	 * All other VLANs are to be configured through dynamic entries,
-	 * and kept in the static configuration table as backing memory.
-	 */
 	if (table->entry_count) {
 		kfree(table->entries);
 		table->entry_count = 0;
@@ -324,9 +330,6 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
 
 	table->entry_count = 1;
 
-	/* VLAN 1: all DT-defined ports are members; no restrictions on
-	 * forwarding; always transmit as untagged.
-	 */
 	for (port = 0; port < ds->num_ports; port++) {
 		struct sja1105_bridge_vlan *v;
 
@@ -337,15 +340,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
 		pvid.vlan_bc |= BIT(port);
 		pvid.tag_port &= ~BIT(port);
 
-		/* Let traffic that don't need dsa_8021q (e.g. STP, PTP) be
-		 * transmitted as untagged.
-		 */
 		v = kzalloc(sizeof(*v), GFP_KERNEL);
 		if (!v)
 			return -ENOMEM;
 
 		v->port = port;
-		v->vid = 1;
+		v->vid = SJA1105_DEFAULT_VLAN;
 		v->untagged = true;
 		if (dsa_is_cpu_port(ds, port))
 			v->pvid = true;
@@ -2756,11 +2756,22 @@ static int sja1105_vlan_add_one(struct dsa_switch *ds, int port, u16 vid,
 	bool pvid = flags & BRIDGE_VLAN_INFO_PVID;
 	struct sja1105_bridge_vlan *v;
 
-	list_for_each_entry(v, vlan_list, list)
-		if (v->port == port && v->vid == vid &&
-		    v->untagged == untagged && v->pvid == pvid)
+	list_for_each_entry(v, vlan_list, list) {
+		if (v->port == port && v->vid == vid) {
 			/* Already added */
-			return 0;
+			if (v->untagged == untagged && v->pvid == pvid)
+				/* Nothing changed */
+				return 0;
+
+			/* It's the same VLAN, but some of the flags changed
+			 * and the user did not bother to delete it first.
+			 * Update it and trigger sja1105_build_vlan_table.
+			 */
+			v->untagged = untagged;
+			v->pvid = pvid;
+			return 1;
+		}
+	}
 
 	v = kzalloc(sizeof(*v), GFP_KERNEL);
 	if (!v) {
@@ -2911,13 +2922,13 @@ static int sja1105_setup(struct dsa_switch *ds)
 	rc = sja1105_static_config_load(priv, ports);
 	if (rc < 0) {
 		dev_err(ds->dev, "Failed to load static config: %d\n", rc);
-		return rc;
+		goto out_ptp_clock_unregister;
 	}
 	/* Configure the CGU (PHY link modes and speeds) */
 	rc = sja1105_clocking_setup(priv);
 	if (rc < 0) {
 		dev_err(ds->dev, "Failed to configure MII clocking: %d\n", rc);
-		return rc;
+		goto out_static_config_free;
 	}
 	/* On SJA1105, VLAN filtering per se is always enabled in hardware.
 	 * The only thing we can do to disable it is lie about what the 802.1Q
@@ -2938,7 +2949,7 @@ static int sja1105_setup(struct dsa_switch *ds)
 
 	rc = sja1105_devlink_setup(ds);
 	if (rc < 0)
-		return rc;
+		goto out_static_config_free;
 
 	/* The DSA/switchdev model brings up switch ports in standalone mode by
 	 * default, and that means vlan_filtering is 0 since they're not under
@@ -2947,6 +2958,17 @@ static int sja1105_setup(struct dsa_switch *ds)
 	rtnl_lock();
 	rc = sja1105_setup_8021q_tagging(ds, true);
 	rtnl_unlock();
+	if (rc)
+		goto out_devlink_teardown;
+
+	return 0;
+
+out_devlink_teardown:
+	sja1105_devlink_teardown(ds);
+out_ptp_clock_unregister:
+	sja1105_ptp_clock_unregister(ds);
+out_static_config_free:
+	sja1105_static_config_free(&priv->static_config);
 
 	return rc;
 }
@@ -3461,8 +3483,10 @@ static int sja1105_probe(struct spi_device *spi)
 		priv->cbs = devm_kcalloc(dev, priv->info->num_cbs_shapers,
 					 sizeof(struct sja1105_cbs_entry),
 					 GFP_KERNEL);
-		if (!priv->cbs)
-			return -ENOMEM;
+		if (!priv->cbs) {
+			rc = -ENOMEM;
+			goto out_unregister_switch;
+		}
 	}
 
 	/* Connections between dsa_port and sja1105_port */
@@ -3487,7 +3511,7 @@ static int sja1105_probe(struct spi_device *spi)
 			dev_err(ds->dev,
 				"failed to create deferred xmit thread: %d\n",
 				rc);
-			goto out;
+			goto out_destroy_workers;
 		}
 		skb_queue_head_init(&sp->xmit_queue);
 		sp->xmit_tpid = ETH_P_SJA1105;
@@ -3497,7 +3521,8 @@ static int sja1105_probe(struct spi_device *spi)
 	}
 
 	return 0;
-out:
+
+out_destroy_workers:
 	while (port-- > 0) {
 		struct sja1105_port *sp = &priv->ports[port];
 
@@ -3506,6 +3531,10 @@ out:
 
 		kthread_destroy_worker(sp->xmit_worker);
 	}
+
+out_unregister_switch:
+	dsa_unregister_switch(ds);
+
 	return rc;
 }
 
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -8247,9 +8247,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
 		BNX2_WR(bp, PCI_COMMAND, reg);
 	} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
 		!(bp->flags & BNX2_FLAG_PCIX)) {
-
 		dev_err(&pdev->dev,
 			"5706 A1 can only be used in a PCIX bus, aborting\n");
+		rc = -EPERM;
 		goto err_out_unmap;
 	}
 
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -280,7 +280,8 @@ static bool bnxt_vf_pciid(enum board_idx idx)
 {
 	return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
 		idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
-		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
+		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF ||
+		idx == NETXTREME_E_P5_VF_HV);
 }
 
 #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
@@ -6833,14 +6834,7 @@ ctx_err:
 static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem, u8 *pg_attr,
 				  __le64 *pg_dir)
 {
-	u8 pg_size = 0;
-
-	if (BNXT_PAGE_SHIFT == 13)
-		pg_size = 1 << 4;
-	else if (BNXT_PAGE_SIZE == 16)
-		pg_size = 2 << 4;
-
-	*pg_attr = pg_size;
+	BNXT_SET_CTX_PAGE_ATTR(*pg_attr);
 	if (rmem->depth >= 1) {
 		if (rmem->depth == 2)
 			*pg_attr |= 2;
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1440,6 +1440,16 @@ struct bnxt_ctx_pg_info {
 #define BNXT_MAX_TQM_RINGS		\
 	(BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS)
 
+#define BNXT_SET_CTX_PAGE_ATTR(attr)					\
+do {									\
+	if (BNXT_PAGE_SIZE == 0x2000)					\
+		attr = FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K;	\
+	else if (BNXT_PAGE_SIZE == 0x10000)				\
+		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K;	\
+	else								\
+		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K;	\
+} while (0)
+
 struct bnxt_ctx_mem_info {
 	u32	qp_max_entries;
 	u16	qp_min_qp1_entries;
 
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -1153,7 +1153,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
  * @lio: per-network private data
  * @start_stop: whether to start or stop
  */
-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 {
 	struct octeon_soft_command *sc;
 	union octnet_cmd *ncmd;
@@ -1161,15 +1161,15 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 	int retval;
 
 	if (oct->props[lio->ifidx].rx_on == start_stop)
-		return;
+		return 0;
 
 	sc = (struct octeon_soft_command *)
 		octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
 					  16, 0);
 	if (!sc) {
 		netif_info(lio, rx_err, lio->netdev,
-			   "Failed to allocate octeon_soft_command\n");
-		return;
+			   "Failed to allocate octeon_soft_command struct\n");
+		return -ENOMEM;
 	}
 
 	ncmd = (union octnet_cmd *)sc->virtdptr;
@@ -1192,18 +1192,19 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 	if (retval == IQ_SEND_FAILED) {
 		netif_info(lio, rx_err, lio->netdev, "Failed to send RX Control message\n");
 		octeon_free_soft_command(oct, sc);
-		return;
 	} else {
 		/* Sleep on a wait queue till the cond flag indicates that the
 		 * response arrived or timed-out.
 		 */
 		retval = wait_for_sc_completion_timeout(oct, sc, 0);
 		if (retval)
-			return;
+			return retval;
 
 		oct->props[lio->ifidx].rx_on = start_stop;
 		WRITE_ONCE(sc->caller_is_done, true);
 	}
+
+	return retval;
 }
 
 /**
@@ -1778,6 +1779,7 @@ static int liquidio_open(struct net_device *netdev)
 	struct octeon_device_priv *oct_priv =
 		(struct octeon_device_priv *)oct->priv;
 	struct napi_struct *napi, *n;
+	int ret = 0;
 
 	if (oct->props[lio->ifidx].napi_enabled == 0) {
 		tasklet_disable(&oct_priv->droq_tasklet);
@@ -1813,7 +1815,9 @@ static int liquidio_open(struct net_device *netdev)
 	netif_info(lio, ifup, lio->netdev, "Interface Open, ready for traffic\n");
 
 	/* tell Octeon to start forwarding packets to host */
-	send_rx_ctrl_cmd(lio, 1);
+	ret = send_rx_ctrl_cmd(lio, 1);
+	if (ret)
+		return ret;
 
 	/* start periodical statistics fetch */
 	INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats);
@@ -1824,7 +1828,7 @@ static int liquidio_open(struct net_device *netdev)
 	dev_info(&oct->pci_dev->dev, "%s interface is opened\n",
 		 netdev->name);
 
-	return 0;
+	return ret;
 }
 
 /**
@@ -1838,6 +1842,7 @@ static int liquidio_stop(struct net_device *netdev)
 	struct octeon_device_priv *oct_priv =
 		(struct octeon_device_priv *)oct->priv;
 	struct napi_struct *napi, *n;
+	int ret = 0;
 
 	ifstate_reset(lio, LIO_IFSTATE_RUNNING);
 
@@ -1854,7 +1859,9 @@ static int liquidio_stop(struct net_device *netdev)
 	lio->link_changes++;
 
 	/* Tell Octeon that nic interface is down. */
-	send_rx_ctrl_cmd(lio, 0);
+	ret = send_rx_ctrl_cmd(lio, 0);
+	if (ret)
+		return ret;
 
 	if (OCTEON_CN23XX_PF(oct)) {
 		if (!oct->msix_on)
@@ -1889,7 +1896,7 @@ static int liquidio_stop(struct net_device *netdev)
 
 	dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
 
-	return 0;
+	return ret;
 }
 
 /**
 
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
@@ -595,7 +595,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
  * @lio: per-network private data
  * @start_stop: whether to start or stop
  */
-static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
+static int send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 {
 	struct octeon_device *oct = (struct octeon_device *)lio->oct_dev;
 	struct octeon_soft_command *sc;
@@ -603,11 +603,16 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 	int retval;
 
 	if (oct->props[lio->ifidx].rx_on == start_stop)
-		return;
+		return 0;
 
 	sc = (struct octeon_soft_command *)
 		octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE,
 					  16, 0);
+	if (!sc) {
+		netif_info(lio, rx_err, lio->netdev,
+			   "Failed to allocate octeon_soft_command struct\n");
+		return -ENOMEM;
+	}
 
 	ncmd = (union octnet_cmd *)sc->virtdptr;
 
@@ -635,11 +640,13 @@ static void send_rx_ctrl_cmd(struct lio *lio, int start_stop)
 		 */
 		retval = wait_for_sc_completion_timeout(oct, sc, 0);
 		if (retval)
-			return;
+			return retval;
 
 		oct->props[lio->ifidx].rx_on = start_stop;
 		WRITE_ONCE(sc->caller_is_done, true);
 	}
 
+	return retval;
 }
 
 /**
@@ -906,6 +913,7 @@ static int liquidio_open(struct net_device *netdev)
 	struct octeon_device_priv *oct_priv =
 		(struct octeon_device_priv *)oct->priv;
 	struct napi_struct *napi, *n;
+	int ret = 0;
 
 	if (!oct->props[lio->ifidx].napi_enabled) {
 		tasklet_disable(&oct_priv->droq_tasklet);
@@ -932,11 +940,13 @@ static int liquidio_open(struct net_device *netdev)
 			   (LIQUIDIO_NDEV_STATS_POLL_TIME_MS));
 
 	/* tell Octeon to start forwarding packets to host */
-	send_rx_ctrl_cmd(lio, 1);
+	ret = send_rx_ctrl_cmd(lio, 1);
+	if (ret)
+		return ret;
 
 	dev_info(&oct->pci_dev->dev, "%s interface is opened\n", netdev->name);
 
-	return 0;
+	return ret;
 }
 
 /**
@@ -950,9 +960,12 @@ static int liquidio_stop(struct net_device *netdev)
 	struct octeon_device_priv *oct_priv =
 		(struct octeon_device_priv *)oct->priv;
 	struct napi_struct *napi, *n;
+	int ret = 0;
 
 	/* tell Octeon to stop forwarding packets to host */
-	send_rx_ctrl_cmd(lio, 0);
+	ret = send_rx_ctrl_cmd(lio, 0);
+	if (ret)
+		return ret;
 
 	netif_info(lio, ifdown, lio->netdev, "Stopping interface!\n");
 	/* Inform that netif carrier is down */
@@ -986,7 +999,7 @@ static int liquidio_stop(struct net_device *netdev)
 
 	dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name);
 
-	return 0;
+	return ret;
 }
 
 /**
 
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
@@ -1042,7 +1042,7 @@ void clear_all_filters(struct adapter *adapter)
 			cxgb4_del_filter(dev, f->tid, &f->fs);
 	}
 
-	sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A);
+	sb = adapter->tids.stid_base;
 	for (i = 0; i < sb; i++) {
 		f = (struct filter_entry *)adapter->tids.tid_tab[i];
 
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -6484,9 +6484,9 @@ static void cxgb4_ktls_dev_del(struct net_device *netdev,
 
 	adap->uld[CXGB4_ULD_KTLS].tlsdev_ops->tls_dev_del(netdev, tls_ctx,
 							  direction);
-	cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
 
 out_unlock:
+	cxgb4_set_ktls_feature(adap, FW_PARAMS_PARAM_DEV_KTLS_HW_DISABLE);
 	mutex_unlock(&uld_mutex);
 }
 
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
@@ -60,6 +60,7 @@ static int chcr_get_nfrags_to_send(struct sk_buff *skb, u32 start, u32 len)
 }
 
 static int chcr_init_tcb_fields(struct chcr_ktls_info *tx_info);
+static void clear_conn_resources(struct chcr_ktls_info *tx_info);
 /*
  * chcr_ktls_save_keys: calculate and save crypto keys.
  * @tx_info - driver specific tls info.
@@ -365,10 +366,14 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
 		chcr_get_ktls_tx_context(tls_ctx);
 	struct chcr_ktls_info *tx_info = tx_ctx->chcr_info;
 	struct ch_ktls_port_stats_debug *port_stats;
+	struct chcr_ktls_uld_ctx *u_ctx;
 
 	if (!tx_info)
 		return;
 
+	u_ctx = tx_info->adap->uld[CXGB4_ULD_KTLS].handle;
+	if (u_ctx && u_ctx->detach)
+		return;
 	/* clear l2t entry */
 	if (tx_info->l2te)
 		cxgb4_l2t_release(tx_info->l2te);
@@ -385,6 +390,8 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
 	if (tx_info->tid != -1) {
 		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
 				 tx_info->tid, tx_info->ip_family);
+
+		xa_erase(&u_ctx->tid_list, tx_info->tid);
 	}
 
 	port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
@@ -412,6 +419,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	struct ch_ktls_port_stats_debug *port_stats;
 	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+	struct chcr_ktls_uld_ctx *u_ctx;
 	struct chcr_ktls_info *tx_info;
 	struct dst_entry *dst;
 	struct adapter *adap;
@@ -426,6 +434,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
 	adap = pi->adapter;
 	port_stats = &adap->ch_ktls_stats.ktls_port[pi->port_id];
 	atomic64_inc(&port_stats->ktls_tx_connection_open);
+	u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_RX) {
 		pr_err("not expecting for RX direction\n");
@@ -435,6 +444,9 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
 	if (tx_ctx->chcr_info)
 		goto out;
 
+	if (u_ctx && u_ctx->detach)
+		goto out;
+
 	tx_info = kvzalloc(sizeof(*tx_info), GFP_KERNEL);
 	if (!tx_info)
 		goto out;
@@ -570,6 +582,8 @@ free_tid:
 	cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
 			 tx_info->tid, tx_info->ip_family);
 
+	xa_erase(&u_ctx->tid_list, tx_info->tid);
+
 put_module:
 	/* release module refcount */
 	module_put(THIS_MODULE);
@@ -634,8 +648,12 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
 {
 	const struct cpl_act_open_rpl *p = (void *)input;
 	struct chcr_ktls_info *tx_info = NULL;
+	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+	struct chcr_ktls_uld_ctx *u_ctx;
 	unsigned int atid, tid, status;
+	struct tls_context *tls_ctx;
 	struct tid_info *t;
+	int ret = 0;
 
 	tid = GET_TID(p);
 	status = AOPEN_STATUS_G(ntohl(p->atid_status));
@@ -667,14 +685,29 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
 	if (!status) {
 		tx_info->tid = tid;
 		cxgb4_insert_tid(t, tx_info, tx_info->tid, tx_info->ip_family);
+		/* Adding tid */
+		tls_ctx = tls_get_ctx(tx_info->sk);
+		tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
+		u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
+		if (u_ctx) {
+			ret = xa_insert_bh(&u_ctx->tid_list, tid, tx_ctx,
+					   GFP_NOWAIT);
+			if (ret < 0) {
+				pr_err("%s: Failed to allocate tid XA entry = %d\n",
+				       __func__, tx_info->tid);
+				tx_info->open_state = CH_KTLS_OPEN_FAILURE;
+				goto out;
+			}
+		}
 		tx_info->open_state = CH_KTLS_OPEN_SUCCESS;
 	} else {
 		tx_info->open_state = CH_KTLS_OPEN_FAILURE;
 	}
+out:
 	spin_unlock(&tx_info->lock);
 
 	complete(&tx_info->completion);
-	return 0;
+	return ret;
 }
 
 /*
@@ -2092,6 +2125,8 @@ static void *chcr_ktls_uld_add(const struct cxgb4_lld_info *lldi)
 		goto out;
 	}
 	u_ctx->lldi = *lldi;
+	u_ctx->detach = false;
+	xa_init_flags(&u_ctx->tid_list, XA_FLAGS_LOCK_BH);
 out:
 	return u_ctx;
 }
@@ -2125,6 +2160,45 @@ static int chcr_ktls_uld_rx_handler(void *handle, const __be64 *rsp,
 	return 0;
 }
 
+static void clear_conn_resources(struct chcr_ktls_info *tx_info)
+{
+	/* clear l2t entry */
+	if (tx_info->l2te)
+		cxgb4_l2t_release(tx_info->l2te);
+
+#if IS_ENABLED(CONFIG_IPV6)
+	/* clear clip entry */
+	if (tx_info->ip_family == AF_INET6)
+		cxgb4_clip_release(tx_info->netdev, (const u32 *)
+				   &tx_info->sk->sk_v6_rcv_saddr,
+				   1);
+#endif
+
+	/* clear tid */
+	if (tx_info->tid != -1)
+		cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
+				 tx_info->tid, tx_info->ip_family);
+}
+
+static void ch_ktls_reset_all_conn(struct chcr_ktls_uld_ctx *u_ctx)
+{
+	struct ch_ktls_port_stats_debug *port_stats;
+	struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+	struct chcr_ktls_info *tx_info;
+	unsigned long index;
+
+	xa_for_each(&u_ctx->tid_list, index, tx_ctx) {
+		tx_info = tx_ctx->chcr_info;
+		clear_conn_resources(tx_info);
+		port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
+		atomic64_inc(&port_stats->ktls_tx_connection_close);
+		kvfree(tx_info);
+		tx_ctx->chcr_info = NULL;
+		/* release module refcount */
+		module_put(THIS_MODULE);
+	}
+}
+
 static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
 {
 	struct chcr_ktls_uld_ctx *u_ctx = handle;
@@ -2141,7 +2215,10 @@ static int chcr_ktls_uld_state_change(void *handle, enum cxgb4_state new_state)
 	case CXGB4_STATE_DETACH:
 		pr_info("%s: Down\n", pci_name(u_ctx->lldi.pdev));
 		mutex_lock(&dev_mutex);
+		u_ctx->detach = true;
 		list_del(&u_ctx->entry);
+		ch_ktls_reset_all_conn(u_ctx);
+		xa_destroy(&u_ctx->tid_list);
 		mutex_unlock(&dev_mutex);
 		break;
 	default:
@@ -2180,6 +2257,7 @@ static void __exit chcr_ktls_exit(void)
 		adap = pci_get_drvdata(u_ctx->lldi.pdev);
 		memset(&adap->ch_ktls_stats, 0, sizeof(adap->ch_ktls_stats));
 		list_del(&u_ctx->entry);
+		xa_destroy(&u_ctx->tid_list);
 		kfree(u_ctx);
 	}
 	mutex_unlock(&dev_mutex);
 
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
@@ -75,6 +75,8 @@ struct chcr_ktls_ofld_ctx_tx {
 struct chcr_ktls_uld_ctx {
 	struct list_head entry;
 	struct cxgb4_lld_info lldi;
+	struct xarray tid_list;
+	bool detach;
 };
 
 static inline struct chcr_ktls_ofld_ctx_tx *
 
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
@@ -1564,8 +1564,10 @@ found_ok_skb:
 			cerr = put_cmsg(msg, SOL_TLS, TLS_GET_RECORD_TYPE,
 					sizeof(thdr->type), &thdr->type);
 
-			if (cerr && thdr->type != TLS_RECORD_TYPE_DATA)
-				return -EIO;
+			if (cerr && thdr->type != TLS_RECORD_TYPE_DATA) {
+				copied = -EIO;
+				break;
+			}
 			/* don't send tls header, skip copy */
 			goto skip_copy;
 		}
 
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -3277,7 +3277,9 @@ static int fec_enet_init(struct net_device *ndev)
 		return ret;
 	}
 
-	fec_enet_alloc_queue(ndev);
+	ret = fec_enet_alloc_queue(ndev);
+	if (ret)
+		return ret;
 
 	bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
 
@@ -3285,7 +3287,8 @@ static int fec_enet_init(struct net_device *ndev)
 	cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
 				       GFP_KERNEL);
 	if (!cbd_base) {
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto free_queue_mem;
 	}
 
 	/* Get the Ethernet address */
@@ -3363,6 +3366,10 @@ static int fec_enet_init(struct net_device *ndev)
 	fec_enet_update_ethtool_stats(ndev);
 
 	return 0;
+
+free_queue_mem:
+	fec_enet_free_queue(ndev);
+	return ret;
 }
 
 #ifdef CONFIG_OF
 
--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
@@ -548,8 +548,8 @@ static int fmvj18x_get_hwinfo(struct pcmcia_device *link, u_char *node_id)
 
 	base = ioremap(link->resource[2]->start, resource_size(link->resource[2]));
 	if (!base) {
 		pcmcia_release_window(link, link->resource[2]);
-		return -ENOMEM;
+		return -1;
 	}
 
 	pcmcia_map_mem_page(link, link->resource[2], 0);
 
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -180,7 +180,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
 		/* Double check we have no extra work.
 		 * Ensure unmask synchronizes with checking for work.
 		 */
-		dma_rmb();
+		mb();
 		if (block->tx)
 			reschedule |= gve_tx_poll(block, -1);
 		if (block->rx)
@@ -220,6 +220,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 	int vecs_left = new_num_ntfy_blks % 2;
 
 	priv->num_ntfy_blks = new_num_ntfy_blks;
+	priv->mgmt_msix_idx = priv->num_ntfy_blks;
 	priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
 				vecs_per_type);
 	priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
@@ -300,20 +301,22 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 {
 	int i;
 
-	/* Free the irqs */
-	for (i = 0; i < priv->num_ntfy_blks; i++) {
-		struct gve_notify_block *block = &priv->ntfy_blocks[i];
-		int msix_idx = i;
+	if (priv->msix_vectors) {
+		/* Free the irqs */
+		for (i = 0; i < priv->num_ntfy_blks; i++) {
+			struct gve_notify_block *block = &priv->ntfy_blocks[i];
+			int msix_idx = i;
 
-		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
-				      NULL);
-		free_irq(priv->msix_vectors[msix_idx].vector, block);
+			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+					      NULL);
+			free_irq(priv->msix_vectors[msix_idx].vector, block);
+		}
+		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	}
 	dma_free_coherent(&priv->pdev->dev,
 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
 			  priv->ntfy_blocks, priv->ntfy_block_bus);
 	priv->ntfy_blocks = NULL;
-	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	pci_disable_msix(priv->pdev);
 	kvfree(priv->msix_vectors);
 	priv->msix_vectors = NULL;
 
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -207,10 +207,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 		goto abort_with_info;
 
 	tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
+	if (!tx->tx_fifo.qpl)
+		goto abort_with_desc;
 
 	/* map Tx FIFO */
 	if (gve_tx_fifo_init(priv, &tx->tx_fifo))
-		goto abort_with_desc;
+		goto abort_with_qpl;
 
 	tx->q_resources =
 		dma_alloc_coherent(hdev,
@@ -229,6 +231,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
 
 abort_with_fifo:
 	gve_tx_fifo_release(priv, &tx->tx_fifo);
+abort_with_qpl:
+	gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
 abort_with_desc:
 	dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
 	tx->desc = NULL;
@@ -478,7 +482,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 	struct gve_tx_ring *tx;
 	int nsegs;
 
-	WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
+	WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
 	     "skb queue index out of range");
 	tx = &priv->tx[skb_get_queue_mapping(skb)];
 	if (unlikely(gve_maybe_stop_tx(tx, skb))) {
 
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -792,8 +792,6 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
 		      l4.udp->dest == htons(4790))))
 		return false;
 
-	skb_checksum_help(skb);
-
 	return true;
 }
 
@@ -871,8 +869,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
 		/* the stack computes the IP header already,
 		 * driver calculate l4 checksum when not TSO.
 		 */
-		skb_checksum_help(skb);
-		return 0;
+		return skb_checksum_help(skb);
 	}
 
 	hns3_set_outer_l2l3l4(skb, ol4_proto, ol_type_vlan_len_msec);
@@ -917,7 +914,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
 		break;
 	case IPPROTO_UDP:
 		if (hns3_tunnel_csum_bug(skb))
-			break;
+			return skb_checksum_help(skb);
 
 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
 		hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S,
@@ -942,8 +939,7 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto,
 		/* the stack computes the IP header already,
 		 * driver calculate l4 checksum when not TSO.
 		 */
-		skb_checksum_help(skb);
-		return 0;
+		return skb_checksum_help(skb);
 	}
 
 	return 0;
@@ -4113,12 +4109,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
 	if (ret)
 		goto out_init_phy;
 
-	ret = register_netdev(netdev);
-	if (ret) {
-		dev_err(priv->dev, "probe register netdev fail!\n");
-		goto out_reg_netdev_fail;
-	}
-
 	/* the device can work without cpu rmap, only aRFS needs it */
 	ret = hns3_set_rx_cpu_rmap(netdev);
 	if (ret)
@@ -4146,17 +4136,23 @@ static int hns3_client_init(struct hnae3_handle *handle)
 
 	set_bit(HNS3_NIC_STATE_INITED, &priv->state);
 
+	ret = register_netdev(netdev);
+	if (ret) {
+		dev_err(priv->dev, "probe register netdev fail!\n");
+		goto out_reg_netdev_fail;
+	}
+
 	if (netif_msg_drv(handle))
 		hns3_info_show(priv);
 
 	return ret;
 
+out_reg_netdev_fail:
+	hns3_dbg_uninit(handle);
 out_client_start:
 	hns3_free_rx_cpu_rmap(netdev);
 	hns3_nic_uninit_irq(priv);
 out_init_irq_fail:
-	unregister_netdev(netdev);
-out_reg_netdev_fail:
 	hns3_uninit_phy(netdev);
 out_init_phy:
 	hns3_uninit_all_ring(priv);
 
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -678,7 +678,6 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
 	unsigned int flag;
 	int ret = 0;
 
-	memset(&resp_msg, 0, sizeof(resp_msg));
 	/* handle all the mailbox requests in the queue */
 	while (!hclge_cmd_crq_empty(&hdev->hw)) {
 		if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
@@ -706,6 +705,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
 
 		trace_hclge_pf_mbx_get(hdev, req);
 
+		/* clear the resp_msg before processing every mailbox message */
+		memset(&resp_msg, 0, sizeof(resp_msg));
+
 		switch (req->msg.code) {
 		case HCLGE_MBX_MAP_RING_TO_VECTOR:
 			ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
 
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid,
 	return err;
 }
 
-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	int max_frame = msgbuf[1];
 	u32 max_frs;
 
+	if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+		e_err(drv, "VF max_frame %d out of range\n", max_frame);
+		return -EINVAL;
+	}
+
 	/*
 	 * For 82599EB we have to keep all PFs and VFs operating with
 	 * the same max_frame value in order to avoid sending an oversize
@@ -533,12 +537,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
 		}
 	}
 
-	/* MTU < 68 is an error and causes problems on some kernels */
-	if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
-		e_err(drv, "VF max_frame %d out of range\n", max_frame);
-		return -EINVAL;
-	}
-
 	/* pull current max frame size from hardware */
 	max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
 	max_frs &= IXGBE_MHADD_MFS_MASK;
@@ -1249,7 +1247,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
 		retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
 		break;
 	case IXGBE_VF_SET_LPE:
-		retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
+		retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
 		break;
 	case IXGBE_VF_SET_MACVLAN:
 		retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
 
--- a/drivers/net/ethernet/lantiq_xrx200.c
+++ b/drivers/net/ethernet/lantiq_xrx200.c
@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev)
 
 static int xrx200_alloc_skb(struct xrx200_chan *ch)
 {
+	dma_addr_t mapping;
 	int ret = 0;
 
 	ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
@@ -163,16 +164,17 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch)
 		goto skip;
 	}
 
-	ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev,
-			ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN,
-			DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(ch->priv->dev,
-				       ch->dma.desc_base[ch->dma.desc].addr))) {
+	mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
+				 XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
 		dev_kfree_skb_any(ch->skb[ch->dma.desc]);
 		ret = -ENOMEM;
 		goto skip;
 	}
 
+	ch->dma.desc_base[ch->dma.desc].addr = mapping;
+	/* Make sure the address is written before we give it to HW */
+	wmb();
 skip:
 	ch->dma.desc_base[ch->dma.desc].ctl =
 		LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
@@ -196,6 +198,8 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
 	ch->dma.desc %= LTQ_DESC_NUM;
 
 	if (ret) {
+		ch->skb[ch->dma.desc] = skb;
+		net_dev->stats.rx_dropped++;
 		netdev_err(net_dev, "failed to allocate new rx buffer\n");
 		return ret;
 	}
 
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -909,6 +909,14 @@ enum mvpp22_ptp_packet_format {
 
 #define MVPP2_DESC_DMA_MASK	DMA_BIT_MASK(40)
 
+/* Buffer header info bits */
+#define MVPP2_B_HDR_INFO_MC_ID_MASK	0xfff
+#define MVPP2_B_HDR_INFO_MC_ID(info)	((info) & MVPP2_B_HDR_INFO_MC_ID_MASK)
+#define MVPP2_B_HDR_INFO_LAST_OFFS	12
+#define MVPP2_B_HDR_INFO_LAST_MASK	BIT(12)
+#define MVPP2_B_HDR_INFO_IS_LAST(info) \
+	(((info) & MVPP2_B_HDR_INFO_LAST_MASK) >> MVPP2_B_HDR_INFO_LAST_OFFS)
+
 struct mvpp2_tai;
 
 /* Definitions */
@@ -918,6 +926,20 @@ struct mvpp2_rss_table {
 	u32 indir[MVPP22_RSS_TABLE_ENTRIES];
 };
 
+struct mvpp2_buff_hdr {
+	__le32 next_phys_addr;
+	__le32 next_dma_addr;
+	__le16 byte_count;
+	__le16 info;
+	__le16 reserved1;	/* bm_qset (for future use, BM) */
+	u8 next_phys_addr_high;
+	u8 next_dma_addr_high;
+	__le16 reserved2;
+	__le16 reserved3;
+	__le16 reserved4;
+	__le16 reserved5;
+};
+
 /* Shared Packet Processor resources */
 struct mvpp2 {
 	/* Shared registers' base addresses */
 
@ -3481,6 +3481,35 @@ mvpp2_run_xdp(struct mvpp2_port *port, struct mvpp2_rx_queue *rxq,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void mvpp2_buff_hdr_pool_put(struct mvpp2_port *port, struct mvpp2_rx_desc *rx_desc,
|
||||
int pool, u32 rx_status)
|
||||
{
|
||||
phys_addr_t phys_addr, phys_addr_next;
|
||||
dma_addr_t dma_addr, dma_addr_next;
|
||||
struct mvpp2_buff_hdr *buff_hdr;
|
||||
|
||||
phys_addr = mvpp2_rxdesc_dma_addr_get(port, rx_desc);
|
||||
dma_addr = mvpp2_rxdesc_cookie_get(port, rx_desc);
|
||||
|
||||
do {
|
||||
buff_hdr = (struct mvpp2_buff_hdr *)phys_to_virt(phys_addr);
|
||||
|
||||
phys_addr_next = le32_to_cpu(buff_hdr->next_phys_addr);
|
||||
dma_addr_next = le32_to_cpu(buff_hdr->next_dma_addr);
|
||||
|
||||
if (port->priv->hw_version >= MVPP22) {
|
||||
phys_addr_next |= ((u64)buff_hdr->next_phys_addr_high << 32);
|
||||
dma_addr_next |= ((u64)buff_hdr->next_dma_addr_high << 32);
|
||||
}
|
||||
|
||||
mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
|
||||
|
||||
phys_addr = phys_addr_next;
|
||||
dma_addr = dma_addr_next;
|
||||
|
||||
} while (!MVPP2_B_HDR_INFO_IS_LAST(le16_to_cpu(buff_hdr->info)));
|
||||
}
|
||||
|
||||
/* Main rx processing */
|
||||
static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
|
||||
int rx_todo, struct mvpp2_rx_queue *rxq)
|
||||
|
|
@ -3527,14 +3556,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
|
|||
MVPP2_RXD_BM_POOL_ID_OFFS;
|
||||
bm_pool = &port->priv->bm_pools[pool];
|
||||
|
||||
/* In case of an error, release the requested buffer pointer
|
||||
* to the Buffer Manager. This request process is controlled
|
||||
* by the hardware, and the information about the buffer is
|
||||
* comprised by the RX descriptor.
|
||||
*/
|
||||
if (rx_status & MVPP2_RXD_ERR_SUMMARY)
|
||||
goto err_drop_frame;
|
||||
|
||||
if (port->priv->percpu_pools) {
|
||||
pp = port->priv->page_pool[pool];
|
||||
dma_dir = page_pool_get_dma_dir(pp);
|
||||
|
|
@@ -3546,6 +3567,18 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 					rx_bytes + MVPP2_MH_SIZE,
 					dma_dir);
 
+		/* Buffer header not supported */
+		if (rx_status & MVPP2_RXD_BUF_HDR)
+			goto err_drop_frame;
+
+		/* In case of an error, release the requested buffer pointer
+		 * to the Buffer Manager. This request process is controlled
+		 * by the hardware, and the information about the buffer is
+		 * comprised by the RX descriptor.
+		 */
+		if (rx_status & MVPP2_RXD_ERR_SUMMARY)
+			goto err_drop_frame;
+
 		/* Prefetch header */
 		prefetch(data);
 
@@ -3627,7 +3660,10 @@ err_drop_frame:
 			dev->stats.rx_errors++;
 			mvpp2_rx_error(port, rx_desc);
 			/* Return the buffer to the pool */
-			mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
+			if (rx_status & MVPP2_RXD_BUF_HDR)
+				mvpp2_buff_hdr_pool_put(port, rx_desc, pool, rx_status);
+			else
+				mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
 	}
 
 	rcu_read_unlock();
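The mvpp2 hunks above free a hardware-chained list of buffers: each buffer header stores the next buffer's address split into a 32-bit low word plus a high byte, and a "last" flag in `info` terminates the walk. A minimal userspace sketch of that traversal pattern (all names and field widths here are hypothetical stand-ins, not the driver's real layout):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the hardware buffer header: the next-buffer
 * address is split into a 32-bit low word and a high byte, loosely
 * mirroring struct mvpp2_buff_hdr in the diff. */
struct buff_hdr {
	uint32_t next_phys_addr;      /* low 32 bits of next buffer */
	uint8_t  next_phys_addr_high; /* bits 32+ of next buffer */
	uint16_t info;                /* bit 12 = "last buffer" flag (assumed) */
	struct buff_hdr *next;        /* simulation-only phys_to_virt() stand-in */
};

#define HDR_INFO_IS_LAST(info) (((info) >> 12) & 1)

/* Assemble the full address the way the diff does for MVPP22+ hardware. */
static uint64_t next_addr(const struct buff_hdr *h)
{
	return (uint64_t)h->next_phys_addr |
	       ((uint64_t)h->next_phys_addr_high << 32);
}

/* Walk the chain, releasing each buffer, until the header just processed
 * carries the last-buffer flag; returns how many buffers were released. */
static int release_chain(struct buff_hdr *h)
{
	int count = 0;
	struct buff_hdr *cur;

	do {
		cur = h;
		count++;        /* the driver calls mvpp2_bm_pool_put() here */
		h = cur->next;  /* driver: phys_to_virt(assembled next address) */
	} while (!HDR_INFO_IS_LAST(cur->info));

	return count;
}
```

Note the loop tests the *current* header's flag after releasing it, matching the do/while in `mvpp2_buff_hdr_pool_put`, so a single-buffer chain is still released exactly once.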
@@ -679,32 +679,53 @@ static int mtk_set_mac_address(struct net_device *dev, void *p)
 void mtk_stats_update_mac(struct mtk_mac *mac)
 {
 	struct mtk_hw_stats *hw_stats = mac->hw_stats;
-	unsigned int base = MTK_GDM1_TX_GBCNT;
-	u64 stats;
-
-	base += hw_stats->reg_offset;
+	struct mtk_eth *eth = mac->hw;
 
 	u64_stats_update_begin(&hw_stats->syncp);
 
-	hw_stats->rx_bytes += mtk_r32(mac->hw, base);
-	stats = mtk_r32(mac->hw, base + 0x04);
-	if (stats)
-		hw_stats->rx_bytes += (stats << 32);
-	hw_stats->rx_packets += mtk_r32(mac->hw, base + 0x08);
-	hw_stats->rx_overflow += mtk_r32(mac->hw, base + 0x10);
-	hw_stats->rx_fcs_errors += mtk_r32(mac->hw, base + 0x14);
-	hw_stats->rx_short_errors += mtk_r32(mac->hw, base + 0x18);
-	hw_stats->rx_long_errors += mtk_r32(mac->hw, base + 0x1c);
-	hw_stats->rx_checksum_errors += mtk_r32(mac->hw, base + 0x20);
-	hw_stats->rx_flow_control_packets +=
-					mtk_r32(mac->hw, base + 0x24);
-	hw_stats->tx_skip += mtk_r32(mac->hw, base + 0x28);
-	hw_stats->tx_collisions += mtk_r32(mac->hw, base + 0x2c);
-	hw_stats->tx_bytes += mtk_r32(mac->hw, base + 0x30);
-	stats = mtk_r32(mac->hw, base + 0x34);
-	if (stats)
-		hw_stats->tx_bytes += (stats << 32);
-	hw_stats->tx_packets += mtk_r32(mac->hw, base + 0x38);
+	if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) {
+		hw_stats->tx_packets += mtk_r32(mac->hw, MT7628_SDM_TPCNT);
+		hw_stats->tx_bytes += mtk_r32(mac->hw, MT7628_SDM_TBCNT);
+		hw_stats->rx_packets += mtk_r32(mac->hw, MT7628_SDM_RPCNT);
+		hw_stats->rx_bytes += mtk_r32(mac->hw, MT7628_SDM_RBCNT);
+		hw_stats->rx_checksum_errors +=
+			mtk_r32(mac->hw, MT7628_SDM_CS_ERR);
+	} else {
+		unsigned int offs = hw_stats->reg_offset;
+		u64 stats;
+
+		hw_stats->rx_bytes += mtk_r32(mac->hw,
+					      MTK_GDM1_RX_GBCNT_L + offs);
+		stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs);
+		if (stats)
+			hw_stats->rx_bytes += (stats << 32);
+		hw_stats->rx_packets +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs);
+		hw_stats->rx_overflow +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs);
+		hw_stats->rx_fcs_errors +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs);
+		hw_stats->rx_short_errors +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs);
+		hw_stats->rx_long_errors +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs);
+		hw_stats->rx_checksum_errors +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs);
+		hw_stats->rx_flow_control_packets +=
+			mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs);
+		hw_stats->tx_skip +=
+			mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs);
+		hw_stats->tx_collisions +=
+			mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs);
+		hw_stats->tx_bytes +=
+			mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs);
+		stats = mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs);
+		if (stats)
+			hw_stats->tx_bytes += (stats << 32);
+		hw_stats->tx_packets +=
+			mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs);
+	}
 
 	u64_stats_update_end(&hw_stats->syncp);
 }
@@ -266,8 +266,21 @@
 /* QDMA FQ Free Page Buffer Length Register */
 #define MTK_QDMA_FQ_BLEN	0x1B2C
 
-/* GMA1 Received Good Byte Count Register */
-#define MTK_GDM1_TX_GBCNT	0x2400
+/* GMA1 counter / statics register */
+#define MTK_GDM1_RX_GBCNT_L	0x2400
+#define MTK_GDM1_RX_GBCNT_H	0x2404
+#define MTK_GDM1_RX_GPCNT	0x2408
+#define MTK_GDM1_RX_OERCNT	0x2410
+#define MTK_GDM1_RX_FERCNT	0x2414
+#define MTK_GDM1_RX_SERCNT	0x2418
+#define MTK_GDM1_RX_LENCNT	0x241c
+#define MTK_GDM1_RX_CERCNT	0x2420
+#define MTK_GDM1_RX_FCCNT	0x2424
+#define MTK_GDM1_TX_SKIPCNT	0x2428
+#define MTK_GDM1_TX_COLCNT	0x242c
+#define MTK_GDM1_TX_GBCNT_L	0x2430
+#define MTK_GDM1_TX_GBCNT_H	0x2434
+#define MTK_GDM1_TX_GPCNT	0x2438
 #define MTK_STAT_OFFSET		0x40
 
 /* QDMA descriptor txd4 */
@@ -478,6 +491,13 @@
 #define MT7628_SDM_MAC_ADRL	(MT7628_SDM_OFFSET + 0x0c)
 #define MT7628_SDM_MAC_ADRH	(MT7628_SDM_OFFSET + 0x10)
 
+/* Counter / stat register */
+#define MT7628_SDM_TPCNT	(MT7628_SDM_OFFSET + 0x100)
+#define MT7628_SDM_TBCNT	(MT7628_SDM_OFFSET + 0x104)
+#define MT7628_SDM_RPCNT	(MT7628_SDM_OFFSET + 0x108)
+#define MT7628_SDM_RBCNT	(MT7628_SDM_OFFSET + 0x10c)
+#define MT7628_SDM_CS_ERR	(MT7628_SDM_OFFSET + 0x110)
+
 struct mtk_rx_dma {
 	unsigned int rxd1;
 	unsigned int rxd2;
@@ -2027,8 +2027,6 @@ static int mlx4_en_set_tunable(struct net_device *dev,
 	return ret;
 }
 
-#define MLX4_EEPROM_PAGE_LEN 256
-
 static int mlx4_en_get_module_info(struct net_device *dev,
 				   struct ethtool_modinfo *modinfo)
 {
@@ -2063,7 +2061,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
 		break;
 	case MLX4_MODULE_ID_SFP:
 		modinfo->type = ETH_MODULE_SFF_8472;
-		modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
+		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
 		break;
 	default:
 		return -EINVAL;
@@ -1973,6 +1973,7 @@ EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave);
 #define I2C_ADDR_LOW  0x50
 #define I2C_ADDR_HIGH 0x51
 #define I2C_PAGE_SIZE 256
+#define I2C_HIGH_PAGE_SIZE 128
 
 /* Module Info Data */
 struct mlx4_cable_info {
@@ -2026,6 +2027,88 @@ static inline const char *cable_info_mad_err_str(u16 mad_status)
 	return "Unknown Error";
 }
 
+static int mlx4_get_module_id(struct mlx4_dev *dev, u8 port, u8 *module_id)
+{
+	struct mlx4_cmd_mailbox *inbox, *outbox;
+	struct mlx4_mad_ifc *inmad, *outmad;
+	struct mlx4_cable_info *cable_info;
+	int ret;
+
+	inbox = mlx4_alloc_cmd_mailbox(dev);
+	if (IS_ERR(inbox))
+		return PTR_ERR(inbox);
+
+	outbox = mlx4_alloc_cmd_mailbox(dev);
+	if (IS_ERR(outbox)) {
+		mlx4_free_cmd_mailbox(dev, inbox);
+		return PTR_ERR(outbox);
+	}
+
+	inmad = (struct mlx4_mad_ifc *)(inbox->buf);
+	outmad = (struct mlx4_mad_ifc *)(outbox->buf);
+
+	inmad->method = 0x1; /* Get */
+	inmad->class_version = 0x1;
+	inmad->mgmt_class = 0x1;
+	inmad->base_version = 0x1;
+	inmad->attr_id = cpu_to_be16(0xFF60); /* Module Info */
+
+	cable_info = (struct mlx4_cable_info *)inmad->data;
+	cable_info->dev_mem_address = 0;
+	cable_info->page_num = 0;
+	cable_info->i2c_addr = I2C_ADDR_LOW;
+	cable_info->size = cpu_to_be16(1);
+
+	ret = mlx4_cmd_box(dev, inbox->dma, outbox->dma, port, 3,
+			   MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C,
+			   MLX4_CMD_NATIVE);
+	if (ret)
+		goto out;
+
+	if (be16_to_cpu(outmad->status)) {
+		/* Mad returned with bad status */
+		ret = be16_to_cpu(outmad->status);
+		mlx4_warn(dev,
+			  "MLX4_CMD_MAD_IFC Get Module ID attr(%x) port(%d) i2c_addr(%x) offset(%d) size(%d): Response Mad Status(%x) - %s\n",
+			  0xFF60, port, I2C_ADDR_LOW, 0, 1, ret,
+			  cable_info_mad_err_str(ret));
+		ret = -ret;
+		goto out;
+	}
+	cable_info = (struct mlx4_cable_info *)outmad->data;
+	*module_id = cable_info->data[0];
+out:
+	mlx4_free_cmd_mailbox(dev, inbox);
+	mlx4_free_cmd_mailbox(dev, outbox);
+	return ret;
+}
+
+static void mlx4_sfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+	*i2c_addr = I2C_ADDR_LOW;
+	*page_num = 0;
+
+	if (*offset < I2C_PAGE_SIZE)
+		return;
+
+	*i2c_addr = I2C_ADDR_HIGH;
+	*offset -= I2C_PAGE_SIZE;
+}
+
+static void mlx4_qsfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+	/* Offsets 0-255 belong to page 0.
+	 * Offsets 256-639 belong to pages 01, 02, 03.
+	 * For example, offset 400 is page 02: 1 + (400 - 256) / 128 = 2
+	 */
+	if (*offset < I2C_PAGE_SIZE)
+		*page_num = 0;
+	else
+		*page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE;
+	*i2c_addr = I2C_ADDR_LOW;
+	*offset -= *page_num * I2C_HIGH_PAGE_SIZE;
+}
+
 /**
  * mlx4_get_module_info - Read cable module eeprom data
  * @dev: mlx4_dev.
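The paging helpers added above split a flat EEPROM offset into an I2C address, a page number, and a page-relative offset. Their arithmetic can be restated in a standalone userspace sketch (constants copied from the diff; function names here are shortened stand-ins for the mlx4 helpers):

```c
#include <assert.h>
#include <stdint.h>

/* Constants mirrored from the diff */
#define I2C_ADDR_LOW       0x50
#define I2C_ADDR_HIGH      0x51
#define I2C_PAGE_SIZE      256
#define I2C_HIGH_PAGE_SIZE 128

/* SFP (SFF-8472 style): offsets past 256 move to the second I2C address
 * and are rebased, mirroring mlx4_sfp_eeprom_params_set(). */
static void sfp_params_set(uint8_t *i2c_addr, uint8_t *page_num, uint16_t *offset)
{
	*i2c_addr = I2C_ADDR_LOW;
	*page_num = 0;

	if (*offset < I2C_PAGE_SIZE)
		return;

	*i2c_addr = I2C_ADDR_HIGH;
	*offset -= I2C_PAGE_SIZE;
}

/* QSFP: offsets 0-255 are page 0; offsets 256-639 map onto 128-byte
 * upper pages 1..3, mirroring mlx4_qsfp_eeprom_params_set(). */
static void qsfp_params_set(uint8_t *i2c_addr, uint8_t *page_num, uint16_t *offset)
{
	if (*offset < I2C_PAGE_SIZE)
		*page_num = 0;
	else
		*page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE;
	*i2c_addr = I2C_ADDR_LOW;
	*offset -= *page_num * I2C_HIGH_PAGE_SIZE;
}
```

Working through the comment's own example: offset 400 gives page `1 + (400 - 256) / 128 = 2`, and the rebased offset `400 - 2 * 128 = 144` falls in the 128..255 window used to address an upper page.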
@@ -2045,12 +2128,30 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
 	struct mlx4_cmd_mailbox *inbox, *outbox;
 	struct mlx4_mad_ifc *inmad, *outmad;
 	struct mlx4_cable_info *cable_info;
-	u16 i2c_addr;
+	u8 module_id, i2c_addr, page_num;
 	int ret;
 
 	if (size > MODULE_INFO_MAX_READ)
 		size = MODULE_INFO_MAX_READ;
 
+	ret = mlx4_get_module_id(dev, port, &module_id);
+	if (ret)
+		return ret;
+
+	switch (module_id) {
+	case MLX4_MODULE_ID_SFP:
+		mlx4_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+		break;
+	case MLX4_MODULE_ID_QSFP:
+	case MLX4_MODULE_ID_QSFP_PLUS:
+	case MLX4_MODULE_ID_QSFP28:
+		mlx4_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+		break;
+	default:
+		mlx4_err(dev, "Module ID not recognized: %#x\n", module_id);
+		return -EINVAL;
+	}
+
 	inbox = mlx4_alloc_cmd_mailbox(dev);
 	if (IS_ERR(inbox))
 		return PTR_ERR(inbox);
@@ -2076,11 +2177,9 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
 	 */
 		size -= offset + size - I2C_PAGE_SIZE;
 
-	i2c_addr = I2C_ADDR_LOW;
-
 	cable_info = (struct mlx4_cable_info *)inmad->data;
 	cable_info->dev_mem_address = cpu_to_be16(offset);
-	cable_info->page_num = 0;
+	cable_info->page_num = page_num;
 	cable_info->i2c_addr = i2c_addr;
 	cable_info->size = cpu_to_be16(size);
 
@@ -223,6 +223,8 @@ static void mlx5e_rep_changelowerstate_event(struct net_device *netdev, void *pt
 	rpriv = priv->ppriv;
 	fwd_vport_num = rpriv->rep->vport;
 	lag_dev = netdev_master_upper_dev_get(netdev);
+	if (!lag_dev)
+		return;
 
 	netdev_dbg(netdev, "lag_dev(%s)'s slave vport(%d) is txable(%d)\n",
 		   lag_dev->name, fwd_vport_num, net_lag_port_dev_txable(netdev));
@@ -643,7 +643,7 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe,
 	}
 
 	if (chain) {
-		tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
+		tc_skb_ext = tc_skb_ext_alloc(skb);
 		if (!tc_skb_ext) {
 			WARN_ON(1);
 			return false;
@@ -35,6 +35,7 @@
 #include <linux/ipv6.h>
 #include <linux/tcp.h>
 #include <linux/mlx5/fs.h>
+#include <linux/mlx5/mpfs.h>
 #include "en.h"
 #include "lib/mpfs.h"
 
@@ -2920,7 +2920,7 @@ static int mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
 	int err;
 
 	old_num_txqs = netdev->real_num_tx_queues;
-	old_ntc = netdev->num_tc;
+	old_ntc = netdev->num_tc ? : 1;
 
 	nch = priv->channels.params.num_channels;
 	ntc = priv->channels.params.num_tc;
@@ -5385,6 +5385,11 @@ err_free_netdev:
 	return NULL;
 }
 
+static void mlx5e_reset_channels(struct net_device *netdev)
+{
+	netdev_reset_tc(netdev);
+}
+
 int mlx5e_attach_netdev(struct mlx5e_priv *priv)
 {
 	const bool take_rtnl = priv->netdev->reg_state == NETREG_REGISTERED;
@@ -5438,6 +5443,7 @@ err_cleanup_tx:
 	profile->cleanup_tx(priv);
 
 out:
+	mlx5e_reset_channels(priv->netdev);
 	set_bit(MLX5E_STATE_DESTROYING, &priv->state);
 	cancel_work_sync(&priv->update_stats_work);
 	return err;
@@ -5455,6 +5461,7 @@ void mlx5e_detach_netdev(struct mlx5e_priv *priv)
 
 	profile->cleanup_rx(priv);
 	profile->cleanup_tx(priv);
+	mlx5e_reset_channels(priv->netdev);
 	cancel_work_sync(&priv->update_stats_work);
 }
 
@@ -4025,8 +4025,12 @@ static int add_vlan_push_action(struct mlx5e_priv *priv,
 	if (err)
 		return err;
 
-	*out_dev = dev_get_by_index_rcu(dev_net(vlan_dev),
-					dev_get_iflink(vlan_dev));
+	rcu_read_lock();
+	*out_dev = dev_get_by_index_rcu(dev_net(vlan_dev), dev_get_iflink(vlan_dev));
+	rcu_read_unlock();
+	if (!*out_dev)
+		return -ENODEV;
+
 
 	if (is_vlan_dev(*out_dev))
 		err = add_vlan_push_action(priv, attr, out_dev, action);
@@ -5490,7 +5494,7 @@ bool mlx5e_tc_update_skb(struct mlx5_cqe64 *cqe,
 	}
 
 	if (chain) {
-		tc_skb_ext = skb_ext_add(skb, TC_SKB_EXT);
+		tc_skb_ext = tc_skb_ext_alloc(skb);
 		if (WARN_ON(!tc_skb_ext))
 			return false;
 
@@ -35,6 +35,7 @@
 #include <linux/mlx5/mlx5_ifc.h>
 #include <linux/mlx5/vport.h>
 #include <linux/mlx5/fs.h>
+#include <linux/mlx5/mpfs.h>
 #include "esw/acl/lgcy.h"
 #include "mlx5_core.h"
 #include "lib/eq.h"
@@ -76,10 +76,11 @@ mlx5_eswitch_termtbl_create(struct mlx5_core_dev *dev,
 	/* As this is the terminating action then the termination table is the
 	 * same prio as the slow path
 	 */
-	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION |
+	ft_attr.flags = MLX5_FLOW_TABLE_TERMINATION | MLX5_FLOW_TABLE_UNMANAGED |
 			MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
-	ft_attr.prio = FDB_SLOW_PATH;
+	ft_attr.prio = FDB_TC_OFFLOAD;
 	ft_attr.max_fte = 1;
+	ft_attr.level = 1;
 	ft_attr.autogroup.max_num_groups = 1;
 	tt->termtbl = mlx5_create_auto_grouped_flow_table(root_ns, &ft_attr);
 	if (IS_ERR(tt->termtbl)) {
@@ -171,19 +172,6 @@ mlx5_eswitch_termtbl_put(struct mlx5_eswitch *esw,
 	}
 }
 
-static bool mlx5_eswitch_termtbl_is_encap_reformat(struct mlx5_pkt_reformat *rt)
-{
-	switch (rt->reformat_type) {
-	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
-	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
-	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
-	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
-		return true;
-	default:
-		return false;
-	}
-}
-
 static void
 mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
 				  struct mlx5_flow_act *dst)
@@ -201,14 +189,6 @@ mlx5_eswitch_termtbl_actions_move(struct mlx5_flow_act *src,
 			memset(&src->vlan[1], 0, sizeof(src->vlan[1]));
 		}
 	}
-
-	if (src->action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
-	    mlx5_eswitch_termtbl_is_encap_reformat(src->pkt_reformat)) {
-		src->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		dst->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		dst->pkt_reformat = src->pkt_reformat;
-		src->pkt_reformat = NULL;
-	}
 }
 
 static bool mlx5_eswitch_offload_is_uplink_port(const struct mlx5_eswitch *esw,
@@ -237,6 +217,7 @@ mlx5_eswitch_termtbl_required(struct mlx5_eswitch *esw,
 	int i;
 
 	if (!MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, termination_table) ||
+	    !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level) ||
 	    attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH ||
 	    !mlx5_eswitch_offload_is_uplink_port(esw, spec))
 		return false;
@@ -278,6 +259,14 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
 		if (dest[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
 			continue;
 
+		if (attr->dests[num_vport_dests].flags & MLX5_ESW_DEST_ENCAP) {
+			term_tbl_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+			term_tbl_act.pkt_reformat = attr->dests[num_vport_dests].pkt_reformat;
+		} else {
+			term_tbl_act.action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+			term_tbl_act.pkt_reformat = NULL;
+		}
+
 		/* get the terminating table for the action list */
 		tt = mlx5_eswitch_termtbl_get_create(esw, &term_tbl_act,
 						     &dest[i], attr);
@@ -299,6 +288,9 @@ mlx5_eswitch_add_termtbl_rule(struct mlx5_eswitch *esw,
 		goto revert_changes;
 
 	/* create the FTE */
+	flow_act->action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+	flow_act->pkt_reformat = NULL;
+	flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
 	rule = mlx5_add_flow_rules(fdb, spec, flow_act, dest, num_dest);
 	if (IS_ERR(rule))
 		goto revert_changes;
@@ -307,6 +307,11 @@ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
 	struct lag_mp *mp = &ldev->lag_mp;
 	int err;
 
+	/* always clear mfi, as it might become stale when a route delete event
+	 * has been missed
+	 */
+	mp->mfi = NULL;
+
 	if (mp->fib_nb.notifier_call)
 		return 0;
 
@@ -335,4 +340,5 @@ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev)
 	unregister_fib_notifier(&init_net, &mp->fib_nb);
 	destroy_workqueue(mp->wq);
 	mp->fib_nb.notifier_call = NULL;
+	mp->mfi = NULL;
 }
@@ -33,6 +33,7 @@
 #include <linux/etherdevice.h>
 #include <linux/mlx5/driver.h>
 #include <linux/mlx5/mlx5_ifc.h>
+#include <linux/mlx5/mpfs.h>
 #include <linux/mlx5/eswitch.h>
 #include "mlx5_core.h"
 #include "lib/mpfs.h"
@@ -175,6 +176,7 @@ out:
 	mutex_unlock(&mpfs->lock);
 	return err;
 }
+EXPORT_SYMBOL(mlx5_mpfs_add_mac);
 
 int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac)
 {
@@ -206,3 +208,4 @@ unlock:
 	mutex_unlock(&mpfs->lock);
 	return err;
 }
+EXPORT_SYMBOL(mlx5_mpfs_del_mac);
@@ -84,12 +84,9 @@ struct l2addr_node {
 #ifdef CONFIG_MLX5_MPFS
 int mlx5_mpfs_init(struct mlx5_core_dev *dev);
 void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev);
-int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac);
-int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac);
 #else /* #ifndef CONFIG_MLX5_MPFS */
 static inline int mlx5_mpfs_init(struct mlx5_core_dev *dev) { return 0; }
 static inline void mlx5_mpfs_cleanup(struct mlx5_core_dev *dev) {}
-static inline int mlx5_mpfs_add_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
-static inline int mlx5_mpfs_del_mac(struct mlx5_core_dev *dev, u8 *mac) { return 0; }
 #endif
 
 #endif
@@ -1052,7 +1052,6 @@ static void stmmac_check_pcs_mode(struct stmmac_priv *priv)
  */
 static int stmmac_init_phy(struct net_device *dev)
 {
-	struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
 	struct stmmac_priv *priv = netdev_priv(dev);
 	struct device_node *node;
 	int ret;
@@ -1078,8 +1077,12 @@ static int stmmac_init_phy(struct net_device *dev)
 		ret = phylink_connect_phy(priv->phylink, phydev);
 	}
 
-	phylink_ethtool_get_wol(priv->phylink, &wol);
-	device_set_wakeup_capable(priv->device, !!wol.supported);
+	if (!priv->plat->pmt) {
+		struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL };
+
+		phylink_ethtool_get_wol(priv->phylink, &wol);
+		device_set_wakeup_capable(priv->device, !!wol.supported);
+	}
 
 	return ret;
 }
@@ -1350,8 +1350,8 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
 	tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
 					     KNAV_QUEUE_SHARED);
 	if (IS_ERR(tx_pipe->dma_queue)) {
-		dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
-			name, ret);
+		dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n",
+			name, tx_pipe->dma_queue);
 		ret = PTR_ERR(tx_pipe->dma_queue);
 		goto err;
 	}
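The netcp hunk above fixes a log message that printed `ret` before `PTR_ERR()` had assigned it, by printing the error pointer directly with `%pe`. The underlying convention is that a failed kernel "open" can encode a negative errno inside the returned pointer itself. A minimal userspace sketch of that ERR_PTR/PTR_ERR/IS_ERR pattern (simplified stand-ins, not the kernel's exact definitions):

```c
#include <assert.h>

/* The kernel reserves the top MAX_ERRNO addresses of the pointer space
 * for encoded errors; a pointer in that range "is" a negative errno. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for knav_queue_open() failing with -ENODEV (19):
 * the error travels inside the pointer, so a caller must decode it with
 * PTR_ERR() rather than read a separate, possibly unset, 'ret' variable. */
static void *queue_open_fails(void)
{
	return ERR_PTR(-19);
}
```

This is why the fix either prints the pointer with `%pe` or assigns `ret = PTR_ERR(...)` first; logging `ret` before that assignment reports garbage.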
@@ -56,6 +56,7 @@ enum ipa_flag {
  * @mem_virt:		Virtual address of IPA-local memory space
  * @mem_offset:		Offset from @mem_virt used for access to IPA memory
  * @mem_size:		Total size (bytes) of memory at @mem_virt
+ * @mem_count:		Number of entries in the mem array
  * @mem:		Array of IPA-local memory region descriptors
  * @imem_iova:		I/O virtual address of IPA region in IMEM
  * @imem_size;		Size of IMEM region
@@ -102,6 +103,7 @@ struct ipa {
 	void *mem_virt;
 	u32 mem_offset;
 	u32 mem_size;
+	u32 mem_count;
 	const struct ipa_mem *mem;
 
 	unsigned long imem_iova;
@@ -181,7 +181,7 @@ int ipa_mem_config(struct ipa *ipa)
 	 * for the region, write "canary" values in the space prior to
 	 * the region's base address.
 	 */
-	for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
+	for (mem_id = 0; mem_id < ipa->mem_count; mem_id++) {
 		const struct ipa_mem *mem = &ipa->mem[mem_id];
 		u16 canary_count;
 		__le32 *canary;
@@ -488,6 +488,7 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 	ipa->mem_size = resource_size(res);
 
 	/* The ipa->mem[] array is indexed by enum ipa_mem_id values */
+	ipa->mem_count = mem_data->local_count;
 	ipa->mem = mem_data->local;
 
 	ret = ipa_imem_init(ipa, mem_data->imem_addr, mem_data->imem_size);
@@ -71,7 +71,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev)
 
 	return 0;
 fail_register:
-	mdiobus_free(bus->mii_bus);
 	smi_en.u64 = 0;
 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
 	return err;
@@ -85,7 +84,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev)
 	bus = platform_get_drvdata(pdev);
 
 	mdiobus_unregister(bus->mii_bus);
-	mdiobus_free(bus->mii_bus);
 	smi_en.u64 = 0;
 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
 	return 0;
Some files were not shown because too many files have changed in this diff.