Cleans up the sparse warning:
warning: dubious: x | !y
Since we do want a bitwise OR here, don't use a logical (true/false)
value. This is probably not a real issue, but it cleans up the warning.
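A minimal sketch of the pattern sparse flags and the
usual fix; the names below are made up for
illustration and are not taken from the dwc3 code:

	/* dubious: x | !y -- OR-ing a register with a logical (0/1) result */
	cmd |= !(flags & SOME_FLAG);

	/* clearer: spell out the bit being set (SOME_CMD_BIT being BIT(0)
	 * here, so behaviour is unchanged) */
	if (!(flags & SOME_FLAG))
		cmd |= SOME_CMD_BIT;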
Signed-off-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
u2sel and u2pel should be __le16. Doesn't fix any issue.
Found with sparse.
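A sketch of the kind of annotation sparse is after;
the struct and field names here are placeholders, not
the actual ep0 code:

	struct sel_timing {		/* placeholder name */
		u8	u1sel;
		u8	u1pel;
		__le16	u2sel;		/* comes straight off the (LE) wire */
		__le16	u2pel;
	} __packed;

	/* convert explicitly wherever the CPU-order value is needed */
	u16 u2sel = le16_to_cpu(timing.u2sel);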
Signed-off-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
The wIndex passed in here is CPU endianness, but the function expects
little endian.
Found with sparse.
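A sketch of the conversion being added; the callee
name is hypothetical:

	u16 wIndex = le16_to_cpu(ctrl->wIndex);	/* CPU order from here on */

	/* this callee expects __le16, so convert back explicitly */
	ret = dwc3_handle_wakeup_request(dwc, cpu_to_le16(wIndex));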
Signed-off-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Just like we did for endpoint commands, let's have a
single trace output for the command and its
status. This will improve trace readability.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Just like we did for endpoint commands, let's use a
single return point for generic commands as
well. This aids readability.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Instead of printing command's status with a separate
trace printout, let's print it within a single call.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Instead of having an infinite loop and always checking
the timeout value as a break condition, we can just
decrement the timeout inside the while condition.
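Roughly the shape of the change; the register and bit
names are stand-ins:

	u32 timeout = 500;

	/* before: infinite loop, timeout checked as a break condition */
	while (1) {
		reg = readl(status_reg);
		if (!(reg & CMD_ACTIVE_BIT))
			break;
		if (!--timeout)
			return -ETIMEDOUT;
	}

	/* after: the decrement is the loop condition */
	do {
		reg = readl(status_reg);
		if (!(reg & CMD_ACTIVE_BIT))
			return 0;
	} while (--timeout);

	return -ETIMEDOUT;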
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
I really thought this would be useful, but as it
turns out, it creates more problems than fixes. The
number of times we had to fix this because some
other commit shuffled things around and ended up
regressing this tiny little string manipulation...
Might as well remove it, since it has a negligible
added benefit.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Improve trb tracing by showing trb flags, interrupt
flags and trb type.
trb flags:
- h - hardware owner of descriptor
- l - last TRB
- c - chain buffers
- s - continue on short packet
interrupt flags:
- s - interrupt on short packet
- c - interrupt on complete
A capital letter means the bit is set, while a
lowercase letter means the bit is cleared.
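A sketch of how such an encoding could be emitted,
assuming the DWC3_TRB_CTRL_* bit definitions from
core.h (the actual trace macro may format this
differently):

	static void sketch_trb_flags(u32 ctrl, char *s)
	{
		s[0] = (ctrl & DWC3_TRB_CTRL_HWO)     ? 'H' : 'h';	/* hardware owner */
		s[1] = (ctrl & DWC3_TRB_CTRL_LST)     ? 'L' : 'l';	/* last TRB */
		s[2] = (ctrl & DWC3_TRB_CTRL_CHN)     ? 'C' : 'c';	/* chain buffers */
		s[3] = (ctrl & DWC3_TRB_CTRL_CSP)     ? 'S' : 's';	/* continue on short packet */
		s[4] = (ctrl & DWC3_TRB_CTRL_ISP_IMI) ? 'S' : 's';	/* interrupt on short packet */
		s[5] = (ctrl & DWC3_TRB_CTRL_IOC)     ? 'C' : 'c';	/* interrupt on complete */
		s[6] = '\0';
	}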
Signed-off-by: Janusz Dziedzic <januszx.dziedzic@linux.intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
This will allow us to process several endpoints at a
time by making sure that we lock only shared
resources.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Allow for dwc3-pci to reach D3 and enable pm_runtime
by providing dummy PM hooks. Without them, the PCI
subsystem won't put the device into D3.
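A sketch of what such dummy hooks look like; the
callback names are illustrative:

	static int dwc3_pci_runtime_dummy(struct device *dev)
	{
		/*
		 * Nothing to save or restore, but without runtime-PM
		 * callbacks the PCI core won't move the device to D3.
		 */
		return 0;
	}

	static const struct dev_pm_ops dwc3_pci_dev_pm_ops = {
		SET_RUNTIME_PM_OPS(dwc3_pci_runtime_dummy,
				   dwc3_pci_runtime_dummy, NULL)
	};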
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
This patch implements the most basic pm_runtime
support for dwc3. Whenever the USB cable is detached,
we will allow the core to runtime suspend.
Runtime suspending will involve completely tearing
down event buffers and require a full soft-reset of
the IP.
Note that a further optimization could be
implemented once we decide to support hibernation,
which is to allow runtime_suspend with cable
connected when bus is in U3. That's subject to a
separate patch, however.
Tested-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
When we call dwc3_gadget_giveback(), we end up
releasing our controller's lock. Another thread
could get scheduled and disable the endpoint,
subsequently setting dep->endpoint.desc to NULL.
In that case, we would end up dereferencing a NULL
pointer which would result in a Kernel Oops. Let's
avoid the problem by simply returning early if we
have a NULL descriptor.
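In code, the guard amounts to something like this
(simplified; the surrounding function and its return
value are omitted):

	/* giveback drops dwc->lock around the request's ->complete() */
	dwc3_gadget_giveback(dep, req, status);

	/*
	 * Another thread may have disabled the endpoint while the lock
	 * was released, clearing dep->endpoint.desc. Bail out before
	 * dereferencing it.
	 */
	if (!dep->endpoint.desc)
		return;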
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
commit f3af36511e ("usb: dwc3: gadget: always
enable IOC on bulk/interrupt transfers") ended up
regressing Isochronous endpoints by clearing
DWC3_EP_BUSY flag too early, which resulted in
choppy audio playback over USB.
Fix that by partially reverting the original commit and
making sure that we check for isochronous endpoints.
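A hedged sketch of the kind of check being restored
(the exact call site is simplified away here):

	/*
	 * Isochronous endpoints keep streaming, so don't treat this
	 * completion as the end of the transfer for them; leave
	 * DWC3_EP_BUSY set.
	 */
	if (usb_endpoint_xfer_isoc(dep->endpoint.desc))
		return;

	dep->flags &= ~DWC3_EP_BUSY;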
Fixes: f3af36511e ("usb: dwc3: gadget: always enable IOC on bulk/interrupt transfers")
Cc: <stable@vger.kernel.org>
Signed-off-by: Konrad Leszczynski <konrad.leszczynski@intel.com>
Signed-off-by: Rafal Redzimski <rafal.f.redzimski@intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
As a micro-power optimization, let's only resume the
USB2 PHY if we're working on <=HIGHSPEED. If we're
gonna work on SUPERSPEED or SUPERSPEED+, there's no
point in resuming the USB2 PHY.
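A sketch of the gating described, assuming the
negotiated speed is available as dwc->gadget.speed:

	if (dwc->gadget.speed <= USB_SPEED_HIGH) {
		reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
		if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {
			/* only bother waking the USB2 PHY at HS and below */
			reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
			dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
		}
	}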
Fixes: 2b0f11df84 ("usb: dwc3: gadget: clear SUSPHY bit before ep cmds")
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
By holding the gadget's IRQ number in dwc->irq_gadget,
it'll be simpler to free_irq() and disable the IRQ
in case an IRQ fires while we are runtime suspended.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Now that we have re-factored dwc3_core_init() and
dwc3_core_exit() we can use them for suspend/resume
operations.
This will help us avoid some common mistakes when
patching code when we have duplicated pieces of code
doing the same thing.
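Roughly, the PM callbacks end up with this shape
(function names illustrative; locking and PHY
handling left out of the sketch):

	static int dwc3_suspend_common(struct dwc3 *dwc)
	{
		dwc3_core_exit(dwc);		/* single teardown path */
		return 0;
	}

	static int dwc3_resume_common(struct dwc3 *dwc)
	{
		return dwc3_core_init(dwc);	/* same init path as probe */
	}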
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
The idea of this patch is for dwc3_core_init() to
abstract all the details about how to initialize
dwc3 and dwc3_core_exit() to do the same for
teardown.
With this, we can simplify suspend/resume operations
by a large margin and always know that we're going
to start dwc3 from a known starting point.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
This patch is in preparation for some further
re-factoring in dwc3 initialization. No functional
changes.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
By adding a pointer to endpoint registers' base
address, we can avoid using our controller-wide
struct dwc3 pointer for everything. At some point
this will allow us to have per-endpoint locks which
will, in turn, let us queue requests to separate
endpoints in parallel.
Because of this change, our debugfs interface and IO
accessors need to be updated accordingly.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
In all call sites of dwc3_send_gadget_ep_cmd() we
already had a valid dep pointer, so instead of
passing dwc and dep->number, which would be used to
fetch the same pointer we already had, just pass dep
directly.
In other words, we're changing:
	struct dwc3_ep *dep = dwc->eps[dep->number];
to just passing struct dwc3_ep *dep as the argument.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Instead of using the burst size to configure NUMP, we
should be using the RxFIFO size. DWC3 is smart
enough to know that it shouldn't burst if the burst
size is 0.
Reported-by: John Youn <johnyoun@synopsys.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
To aid code readability, we're gonna split
__dwc3_gadget_kick_transfer() into its constituent
parts: scatter gather and linear buffers.
That way, it's easier to follow the code and focus
debug effort when one or the other fails.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Instead of returning -EINVAL when someone calls
__dwc3_gadget_wakeup() at speeds > highspeed, let's
return 0. There is no problem with the driver
calling it at superspeed, as we cleanly just return.
This avoids an annoying WARN_ONCE() always
triggering during superspeed enumeration with LPM
enabled.
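A sketch of the early return, written against the
gadget's reported speed (the driver may read the
speed from DSTS instead):

	/* link wakeup only applies to USB2 speeds; at SS/SS+ just succeed */
	if (dwc->gadget.speed >= USB_SPEED_SUPER)
		return 0;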
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
When we send an endpoint command, we want that to
complete as soon as possible, so let's remove the
unnecessary udelay(1) call.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
sg_is_last() and list_is_last() will encode the
required information for the driver to make
decisions WRT CHN and LST bits.
While at that, also replace '1' with 'true' for
consistency.
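A sketch of how the two helpers map onto the TRB
control bits (the list name used here,
dep->pending_list, is an assumption):

	/* chain TRBs while more sg entries follow */
	if (!sg_is_last(s))
		trb->ctrl |= DWC3_TRB_CTRL_CHN;

	/* mark the last TRB of the last queued request */
	if (sg_is_last(s) && list_is_last(&req->list, &dep->pending_list))
		trb->ctrl |= DWC3_TRB_CTRL_LST;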
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
As it turns out, we don't need the extra 'start_new'
argument, as it can be inferred from the DWC3_EP_BUSY
flag.
Because of that, we can simplify
__dwc3_gadget_kick_transfer() by quite a bit, even
allowing us to prepare more TRBs unconditionally.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
If we're updating transfers, we can also prepare as
many TRBs as we can fit in the ring. Let's start
doing that.
This patch 'solves' a limitation on how many TRBs we
can prepare when we're getting close to the end of
the ring. Instead of having the driver prepare only
up to the end of the ring, we check if we have space
to wrap around the ring properly.
Note that this only happens when our enqueue and
dequeue pointers are equal (which is the case for
bulk endpoints after an XferComplete event).
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Instead of trying hard to stay connected to the
host, it's best (and far easier) to just disconnect
from the host.
Anything relying on KEEP_CONNECT will just have that
ignored, but we don't have a proper hibernation
implementation yet, so there are no regressions.
In any case, hibernation is only useful for runtime
PM, not system sleep.
While at that, also remove dwc3.dcfg which has been
rendered unnecessary.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
We will be re-using this code for suspend/resume, so
instead of duplicating it, let's just re-factor
the functions so they can be re-used.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
The PLX USB2380 is a PCIe version of the NET2280 and behaves more like the
USB338x but without the USB3.0 superspeed support.
This was tested with g_ether, g_serial, g_mass_storage on a Gateworks
Ventana GW2383.
Cc: Justin DeFields <justindefields@gmail.com>
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
With a default size of 16kiB and a maximum of 32
buffers, we can transfer up to 512kiB; however, Linux
can transfer up to 1MiB in a single mass storage
block transfer to USB3 storage devices.
Because of this, 1MiB block transfers end up being
slower than 512kiB block transfers. Let's increase the
maximum number of storage buffers to a ridiculous
amount (256) so that anybody wanting to test maximum
achievable throughput can do so.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
The valid range for storage buffers is already encoded
in Kconfig. Instead of checking it again, let's
drop fsg_num_buffers_validate() altogether.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
As per commit:
b7fa30c9cc ("sched/fair: Fix post_init_entity_util_avg() serialization")
> the code generated from update_cfs_rq_load_avg():
>
> if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> sa->load_avg = max_t(long, sa->load_avg - r, 0);
> sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
> removed_load = 1;
> }
>
> turns into:
>
> ffffffff81087064: 49 8b 85 98 00 00 00 mov 0x98(%r13),%rax
> ffffffff8108706b: 48 85 c0 test %rax,%rax
> ffffffff8108706e: 74 40 je ffffffff810870b0 <update_blocked_averages+0xc0>
> ffffffff81087070: 4c 89 f8 mov %r15,%rax
> ffffffff81087073: 49 87 85 98 00 00 00 xchg %rax,0x98(%r13)
> ffffffff8108707a: 49 29 45 70 sub %rax,0x70(%r13)
> ffffffff8108707e: 4c 89 f9 mov %r15,%rcx
> ffffffff81087081: bb 01 00 00 00 mov $0x1,%ebx
> ffffffff81087086: 49 83 7d 70 00 cmpq $0x0,0x70(%r13)
> ffffffff8108708b: 49 0f 49 4d 70 cmovns 0x70(%r13),%rcx
>
> Which you'll note ends up with sa->load_avg -= r in memory at
> ffffffff8108707a.
So I _should_ have looked at other unserialized users of ->load_avg,
but alas. Luckily nikbor reported a similar /0 from task_h_load() which
instantly triggered recollection of this here problem.
Aside from the intermediate value hitting memory and causing problems,
there's another problem: the underflow detection relies on the signed
bit. This reduces the effective width of the variables; IOW it's
effectively the same as having these variables be of signed type.
This patch changes to a different means of unsigned underflow
detection to not rely on the signed bit. This allows the variables to
use the 'full' unsigned range. And it does so with explicit LOAD -
STORE to ensure any intermediate value will never be visible in
memory, allowing these unserialized loads.
Note: GCC generates crap code for this, might warrant a look later.
Note2: I say 'full' above, if we end up at U*_MAX we'll still explode;
maybe we should do clamping on add too.
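The construct described boils down to a helper along these lines (the
macro name is illustrative):

	/*
	 * Explicit load/store keeps the intermediate value out of memory,
	 * and comparing the result against the old value detects unsigned
	 * underflow without relying on the sign bit.
	 */
	#define sub_positive(_ptr, _val) do {				\
		typeof(_ptr) ptr = (_ptr);				\
		typeof(*ptr) val = (_val);				\
		typeof(*ptr) res, var = READ_ONCE(*ptr);		\
		res = var - val;					\
		if (res > var)						\
			res = 0;					\
		WRITE_ONCE(*ptr, res);					\
	} while (0)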
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Cc: bsegall@google.com
Cc: kernel@kyup.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: steve.muckle@linaro.org
Fixes: 9d89c257df ("sched/fair: Rewrite runnable load and utilization average tracking")
Link: http://lkml.kernel.org/r/20160617091948.GJ30927@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch adds basic support for the bytewerk.org candleLight interface,
an open hardware (CERN OHL) USB CAN adapter.
Signed-off-by: Hubert Denkmair <hubert@denkmair.de>
Signed-off-by: Maximilian Schneider <max@schneidersoft.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
At high bus load it could happen that "at91_poll()" is entered with all RX
message boxes filled up. If the "quota" is then exceeded as well,
"rx_next" will not be reset to the first RX mailbox and hence the
interrupts remain disabled.
Signed-off-by: Wolfgang Grandegger <wg@grandegger.com>
Tested-by: Amr Bekhit <amrbekhit@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
When testing CAN write floods on Altera's CycloneV, the first 2 bytes
are sometimes 0x00, 0x00 or corrupted instead of the values sent. Bytes
4 & 5 were also observed to be corrupted in some cases.
The D_CAN Data registers are 32 bits and changing from 16 bit writes to
32 bit writes fixes the problem.
Testing performed on Altera CycloneV (D_CAN). Requesting tests on other
C_CAN & D_CAN platforms.
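Schematically, the change is as below; the names are placeholders, and
the driver itself goes through its register-access callbacks:

	/* before: two 16-bit stores the D_CAN can latch inconsistently */
	writew(data_lo, base + reg);
	writew(data_hi, base + reg + 2);

	/* after: a single 32-bit store of the same payload */
	writel(((u32)data_hi << 16) | data_lo, base + reg);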
Reported-by: Richard Andrysek <richard.andrysek@gomtec.de>
Signed-off-by: Thor Thayer <tthayer@opensource.altera.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Yuval Mintz says:
====================
qed*: Fixes series
This series contains several small fixes to driver behavior
[the 4th patch is the only one containing a 'fatal' fix, but the error
is only theoretical for qede; it would require another, as yet
unsubmitted, protocol driver to reach it].
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The 'MODULE_FIBER' value replaced several other FIBER values
in newer management firmware images, so existing code would
fail to properly reflect its mode.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver has 2 sets of entries for handling ramrod configurations
toward firmware - a regular pre-allocated set of entries and a
possible 'unlimited' list of additional pending entries.
In most scenarios the 'unlimited' list is not used, but
when it is, the handling of the ramrod completion doesn't
properly release the entry.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Several user APIs can cause the driver to perform an inner reload.
Currently, doing this would cause the HW/FW statistics of the
adapter to reset, which isn't the expected behavior [statistics
should only reset on explicit unloads].
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Internal loopback in the driver is based on two things - first,
the willingness of the transmitter to use it [in case of VFs,
this can be forced based on VEPA/VEB], and second, another
vport's classification configuration which should match the
packet's destination.
Current code allows non-Linux VFs to configure a 'promisc'
mode on Tx, meaning all traffic sent by the VF would be looped back
internally by firmware; this isn't considered a valid mode and
as such should be prevented by the PF.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When no vlan filter is configured, firmware has a configurable
default on whether to pass only untagged packets or all packets
regardless of their tagging. The driver currently doesn't set this
field in the necessary ramrod, causing the default to always be
'receive all'.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull UDF fixes and a reiserfs fix from Jan Kara:
"A couple of udf fixes (most notably a bug in parsing UDF partitions
which led to inability to mount recent Windows installation media) and
a reiserfs fix for handling kstrdup failure"
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
reiserfs: check kstrdup failure
udf: Use correct partition reference number for metadata
udf: Use IS_ERR when loading metadata mirror file entry
udf: Don't BUG on missing metadata partition descriptor
Merge tag 'dmaengine-fix-4.7-rc4' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"Some fixes has piled up, so time to send them upstream.
These fixes include:
- at_xdmac fixes for residue and other stuff
- update MAINTAINERS for dma dt bindings
- mv_xor fix for incorrect offset"
* tag 'dmaengine-fix-4.7-rc4' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: mv_xor: Fix incorrect offset in dma_map_page()
dmaengine: at_xdmac: double FIFO flush needed to compute residue
dmaengine: at_xdmac: fix residue corruption
dmaengine: at_xdmac: align descriptors on 64 bits
MAINTAINERS: Add file patterns for dma device tree bindings