If a buffer is backed by dmabuf, wait on its reservation object's exclusive
fence before flipping.
v2: First commit
v3: Remove object_name_lock acquire
v4: Move wait ahead of mark_page_flip_active
Use crtc->primary->fb to get GEM object instead of pending_flip_obj
use_mmio_flip() return true when exclusive fence is attached
Wait only on exclusive fences, interruptible with no timeout
v5: Move wait from do_mmio_flip to mmio_flip_work_func
Style tweaks to more closely match rest of file
v6: Change back to uninterruptible wait to match __i915_wait_request due to
the inability to properly handle an interrupted wait.
Warn on error code from waiting.
v7: No change
v8: Test for !reservation_object_signaled_rcu(test_all=FALSE) instead of
obj->base.dma_buf->resv->fence_excl
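In sketch form, the v8 behaviour amounts to roughly the following (the helper and its placement are illustrative, not the actual i915 code; it uses the reservation_object API of this era):
    #include <linux/bug.h>
    #include <linux/dma-buf.h>
    #include <linux/reservation.h>
    #include <linux/sched.h>

    /* Skip the wait if the exclusive fence has already signaled, otherwise
     * block (uninterruptibly, without timeout) until it does. */
    static void example_wait_for_excl_fence(struct dma_buf *dma_buf)
    {
        struct reservation_object *resv = dma_buf->resv;
        long ret;

        /* test_all = false: only the exclusive fence is considered */
        if (reservation_object_test_signaled_rcu(resv, false))
            return;

        ret = reservation_object_wait_timeout_rcu(resv, false, false,
                                                  MAX_SCHEDULE_TIMEOUT);
        WARN_ON(ret < 0);  /* warn on an error code from waiting */
    }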
Link: https://patchwork.kernel.org/patch/7704181/
Signed-off-by: Alex Goins <agoins@nvidia.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Use the first retired request on a new context to unpin
the old context. This ensures that the hw context remains
bound until it has been written back to by the GPU.
Now that the context is pinned until later in the request/context
lifecycle, it no longer needs to be pinned from context_queue to
retire_requests.
This fixes an issue with GuC submission where the GPU might not
have finished writing back the context before it is unpinned. This
results in a GPU hang.
v2: Moved the new pin to cover GuC submission (Alex Dai)
Moved the new unpin to request_retire to fix coverage leak
v3: Added switch to default context if freeing a still pinned
context just in case the hw was actually still using it
v4: Unwrapped context unpin to allow calling without a request
v5: Only create a switch to idle context if the ring doesn't
already have a request pending on it (Alex Dai)
Rename unsaved to dirty to avoid double negatives (Dave Gordon)
Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
Split out per engine cleanup from context_free as it
was getting unwieldy
Corrected locking (Dave Gordon)
v6: Removed some bikeshedding (Mika Kuoppala)
Added explanation of the GuC hang that this fixes (Daniel Vetter)
v7: Removed extra per request pinning from ring reset code (Alex Dai)
Added forced ring unpin/clean in error case in context free (Alex Dai)
Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
Issue: VIZ-4277
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Dai <yu.dai@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Alex Dai <yu.dai@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For now, remove the spinlocks that protected the GuC's
statistics block and work queue; they are only accessed
by code that already holds the global struct_mutex, and
so are redundant (until the big struct_mutex rewrite!).
The specific problem that the spinlocks caused was that
if the work queue was full, the driver would try to
spinwait for one jiffy, but with interrupts disabled the
jiffy count would not advance, leading to a system hang.
The issue was found using test case igt/gem_close_race.
The new version will usleep() instead, still holding
the struct_mutex but without any spinlocks.
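A rough sketch of the resulting wait (struct example_guc and example_guc_wq_space() are made-up names for illustration; the real code lives in the GuC submission path):
    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct example_guc { u32 wq_size, wq_used; };

    static u32 example_guc_wq_space(struct example_guc *guc)
    {
        return guc->wq_size - guc->wq_used;   /* placeholder accounting */
    }

    /* Caller already holds dev->struct_mutex, so no spinlock is needed
     * around the work-queue bookkeeping; sleeping keeps jiffies advancing. */
    static int example_guc_wq_wait_for_space(struct example_guc *guc, u32 needed)
    {
        unsigned long timeout = jiffies + msecs_to_jiffies(10);

        while (example_guc_wq_space(guc) < needed) {
            if (time_after(jiffies, timeout))
                return -ETIMEDOUT;
            usleep_range(1000, 2000);
        }
        return 0;
    }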
v4: Reorganize commit message (Dave Gordon)
v3: Remove unnecessary whitespace churn
v2: Clean up wq_lock too
v1: Clean up host2guc lock as well
Signed-off-by: Alex Dai <yu.dai@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449104189-27591-1-git-send-email-yu.dai@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's no need to stop and restart FBC, which is quite expensive as
we have to revalidate the CRTC state. After flushing a drawing
operation we know the CRTC state hasn't changed, so a nuke
(recompress) should be fine.
v2: Make it simpler (Chris).
v3: Rewrite the patch again due to patch order changes.
v4: Rewrite commit message (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
When running Cinnamon I see way too many pairs of these messages: many
per second. Get rid of them as they're just telling us FBC is working
as expected. We already have the messages for enable/disable, so we
don't really need messages for activation/deactivation.
v2: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
Directly call intel_fbc_calculate_cfb_size() in the only place that
actually needs it, and use the proper check before removing the stolen
node. IMHO, this change makes our code easier to understand.
v2: Use drm_mm_node_allocated() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This was already on my TODO list, and was requested both by Chris and
Ville, for different reasons. The advantages are avoiding a frequent
malloc/free pair, and the locality of having the work structure
embedded in dev_priv. The maximum used memory is also smaller since
previously we could have multiple allocated intel_fbc_work structs at
the same time, and now we'll always have a single one - the one
embedded in dev_priv. Of course, we're now using a little more memory
in the cases where there's nothing scheduled.
The biggest challenge here is to keep everything synchronized the way
it was before.
Currently, when we try to activate FBC, we allocate a new
intel_fbc_work structure. Then later when we conclude we must delay
the FBC activation a little more, we allocate a new intel_fbc_work
struct, and then adjust dev_priv->fbc.fbc_work to point to the new
struct. So when the old work runs - at intel_fbc_work_fn() - it will
check that dev_priv->fbc.fbc_work points to something else, so it does
nothing. Everything is also protected by fbc.lock.
Just cancelling the old delayed work doesn't work because we might
just cancel it after the work function already started to run, but
while it is still waiting to grab fbc.lock. That's why we use the
"dev_priv->fbc.fbc_work == work" check described in the paragraph
above.
Now that we have a single work struct we have to introduce a new
way to synchronize everything: we're making the work function a
normal work instead of a delayed work, and it will be responsible for
sleeping the appropriate amount of time itself. This way, after it
wakes up it can grab the lock, ask "were we delayed or cancelled?" and
then go back to sleep, enable FBC or give up.
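A simplified sketch of that scheme (struct and function names are illustrative; the real code operates on dev_priv->fbc under fbc.lock):
    #include <linux/delay.h>
    #include <linux/jiffies.h>
    #include <linux/kernel.h>
    #include <linux/mutex.h>
    #include <linux/workqueue.h>

    struct example_fbc {
        struct mutex lock;
        struct work_struct work;
        unsigned long activate_after;   /* jiffies value to activate at */
        bool work_scheduled;            /* cleared when cancelled */
    };

    static void example_fbc_activate(struct example_fbc *fbc) { /* touch HW */ }

    static void example_fbc_work_fn(struct work_struct *w)
    {
        struct example_fbc *fbc = container_of(w, struct example_fbc, work);
        unsigned long deadline, remaining;

        for (;;) {
            mutex_lock(&fbc->lock);
            if (!fbc->work_scheduled) {            /* cancelled: give up */
                mutex_unlock(&fbc->lock);
                return;
            }
            deadline = fbc->activate_after;
            if (time_after_eq(jiffies, deadline)) {
                example_fbc_activate(fbc);         /* delay elapsed */
                fbc->work_scheduled = false;
                mutex_unlock(&fbc->lock);
                return;
            }
            mutex_unlock(&fbc->lock);

            /* we were (re)delayed: sleep outside the lock and re-check */
            remaining = deadline - jiffies;
            if ((long)remaining > 0)
                msleep(jiffies_to_msecs(remaining));
        }
    }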
v2:
- Spelling fixes.
- Rebase after changing the patch order.
- Fix ms/jiffies confusion.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This moves the pre-gen4 check from update() to enable(). The HAS_DDI
in the original code is not needed since only gen 2/3 have the plane
swapping code.
v2: Rebase.
v3: Extract fbc_on_plane_a_only() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
One of the problems with the current code is that it frees the CFB and
releases its drm_mm node as soon as we flip FBC's enable bit. This is
bad because after we disable FBC the hardware may still use the CFB
for the rest of the frame, so in theory we should only release the
drm_mm node one frame after we disable FBC. Otherwise, a stolen memory
allocation done right after an FBC disable may result in either
corrupted memory for the new owner of that memory region or corrupted
screen/underruns in case the new owner changes it while the hardware
is still reading it. This case is not exactly easy to reproduce since
we currently don't do a lot of stolen memory allocations, but I see
patches on the mailing list trying to expose stolen memory to user
space, so races will be possible.
I thought about three different approaches to solve this, and they all
have downsides.
The first approach would be to simply use multiple drm_mm nodes and
free the unused ones only after a frame has passed. The problem
with this approach is that since stolen memory is rather small,
there's a risk we just won't be able to allocate a new CFB from stolen
if the previous one was not freed yet. This could happen in case we
quickly disable FBC from pipe A and decide to enable it on pipe B, or
just if we change pipe A's fb stride while FBC is enabled.
The second approach would be similar to the first one, but maintaining
a single drm_mm node and keeping track of when it can be reused. This
would remove the disadvantage of not having enough space for two
nodes, but would create the new problem where we may not be able to
enable FBC at the point intel_fbc_update() is called, so we would have
to add more code to retry updating FBC after the time has passed. And
that can quickly get too complex since we can get invalidate, flush,
disable and other calls in the middle of the wait.
Both solutions above - and also the current code - have the problem
that we unnecessarily free+realloc FBC during invalidate+flush
operations even if the CFB size doesn't change.
The third option would be to move the allocation/deallocation to
enable/disable. This makes sure that the pipe is always disabled when
we allocate/deallocate the CFB, so there's no risk that the FBC
hardware may read or write to the memory right after it is freed from
drm_mm. The downside is that it is possible for user space to change
the buffer stride without triggering a disable/enable - only
deactivate/activate -, so we'll have to handle this case somehow - see
igt's kms_frontbuffer_tracking test, fbc-stridechange subtest. It
could be possible to implement a way to free+alloc the CFB during said
stride change, but it would involve a lot of book-keeping - exactly as
mentioned above - just for one case, so for now I'll keep it simple and
just deactivate FBC. Besides, we may not even need to disable FBC
since we do CFB over-allocation.
Note from Chris: "Starting a fullscreen client that covers a single
monitor in a multi-monitor setup will trigger a change in stride on
one of the CRTCs (the monitors will be flipped independently).". It
shouldn't be a huge problem if we lose FBC on multi-monitor setups
since these setups already have problems reaching deep PC states
anyway.
v2: Rebase after changing the patch order.
v3:
- Remove references to the stride change case being "uncommon" and
paste Chris' example.
- Rebase after a change in a previous patch.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
The goal is to call FBC enable/disable only once per modeset, while
activate/deactivate/update will be called multiple times.
The enable() function will be responsible for deciding if a CRTC will
have FBC on it and then it will "lock" FBC on this CRTC: it won't be
possible to change FBC's CRTC until disable(). With this, all checks
and resource acquisition that only need to be done once per modeset
can be moved from update() to enable(). And then the update(),
activate() and deactivate() code will also get simpler since they
won't need to worry about the CRTC being changed.
The disable() function will do the reverse operation of enable(). One
of its features is that it should only be called while the pipe is
already off. This guarantees that FBC is stopped and nothing is
using the CFB.
With this, the activate() and deactivate() functions just start and
temporarily stop FBC. They are the ones touching the hardware enable
bit, so HW state reflects dev_priv->crtc.active.
The last function remaining is update(). A lot of times I thought
about renaming update() to activate() or try_to_activate() since it's
called when we want to activate FBC. The thing is that update() may
not only decide to activate FBC, but also deactivate it or keep it in the
same state, so I'll leave this name for now.
Moving code to enable() and disable() will also help in case we decide
to move FBC to pipe_config or something else later.
The current patch only puts the very basic code on enable() and
disable(). The next commits will take care of moving more stuff from
update() to the new functions.
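A bare-bones sketch of the resulting split (types and names are illustrative only; the real functions also handle the CFB and the hardware registers):
    #include <linux/bug.h>
    #include <linux/mutex.h>

    struct example_crtc;

    struct example_fbc {
        struct mutex lock;
        struct example_crtc *crtc;  /* non-NULL between enable() and disable() */
        bool active;                /* mirrors the hardware enable bit */
    };

    /* Once per modeset: pick the CRTC and "lock" FBC onto it */
    static void example_fbc_enable(struct example_fbc *fbc,
                                   struct example_crtc *crtc)
    {
        mutex_lock(&fbc->lock);
        fbc->crtc = crtc;           /* one-time checks + CFB allocation go here */
        mutex_unlock(&fbc->lock);
    }

    /* Once per modeset, with the pipe already off: undo enable() */
    static void example_fbc_disable(struct example_fbc *fbc)
    {
        mutex_lock(&fbc->lock);
        WARN_ON(fbc->active);       /* must already be deactivated */
        fbc->crtc = NULL;           /* CFB would be freed here */
        mutex_unlock(&fbc->lock);
    }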
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
v4: Rebase again after upstream changes.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
The long term goal is to have enable/disable as the higher level
functions and activate/deactivate as the lower level functions, just
like we do for PSR and for the CRTC. This way, we'll run enable and
disable once per modeset, while update, activate and deactivate will
be run many times. With this, we can move the checks and code that
need to run only once per modeset to enable(), making the code simpler
and possibly a little faster.
This patch is just the first step on the conversion: it starts by
converting the current low level functions from enable/disable to
activate/deactivate. This patch by itself has no benefits other than
making review and rebase easier. Please see the next patches for more
details on the conversion.
v2:
- Rebase.
- Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
There's no need to reevaluate the status of every single crtc when a
single crtc changes its state.
With this, we're cutting the case where due to a change in pipe B,
intel_fbc_update() is called, then intel_fbc_find_crtc() concludes FBC
should be enabled on pipe A, then it completely rechecks the state of
pipe A only to conclude FBC should remain enabled on pipe A. If any
change on pipe A triggers a need to recompute whether FBC is valid on
pipe A, then at some point someone is going to call
intel_fbc_update(PIPE_A).
The addition of intel_fbc_deactivate() is necessary so we keep track
of the previously selected CRTC when we do invalidate/flush. We're
also going to continue the enable/disable/activate/deactivate concept
in the next patches.
v2: Rebase.
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This thing where we need to get the crtc either from the work
structure or the fbc structure itself is confusing and unnecessary.
Set fbc.crtc right when scheduling the enable work so we can always
use it.
The problem is not what gets passed and how to retrieve it. The
problem is that when we're in the other parts of the code we always
have to keep in mind that if FBC is already enabled we have to get the
CRTC from place A, if FBC is scheduled we have to get the CRTC from
place B, and if it's disabled there's no CRTC. Having a single place
to retrieve the CRTC from allows us to treat the "is enabled" and "is
scheduled" cases as the same case, reducing the mistake surface. I
guess I should add this to the commit message.
Besides the immediate advantages, this is also going to make one of
the next commits much simpler. And even later, when we introduce
enable/disable + activate/deactivate, this will be even simpler as
we'll set the CRTC at enable time. So all the
activate/deactivate/update code can just look at the single CRTC
variable regardless of the current state.
v2: Improve commit message (Chris).
v3: Rebase after changing the patch order.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
In function find_compression_threshold() we try to over-allocate CFB
space in order to reduce reallocations and fragmentation, and we're
not considering that at the CFB size check. Consider it.
There is also a longer-term plan to kill
dev_priv->fbc.uncompressed_size, but this will come later.
v2: Use drm_mm_node_allocated() (Chris).
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/
This can happen when we run out of encoders for a multi-crtc modeset,
or also when userspace is silly and tries to clone multiple connectors
that need the same encoder on the same crtc.
Reported-and-Tested-and-Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449136154-11588-1-git-send-email-daniel.vetter@ffwll.ch
Proxy entries can have a null net-device pointer.
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Fixes: 84920c1420 ("net: Allow ipv6 proxies and arp proxies be shown with iproute2")
Signed-off-by: David S. Miller <davem@davemloft.net>
Dmitry Vyukov reported that the user could trigger a kernel warning by
using a large len value for getsockopt SCTP_GET_LOCAL_ADDRS, as that
value directly affects the value used as a kmalloc() parameter.
This patch thus switches the allocation flags of all user-controllable
kmalloc sizes to GFP_USER, to put some more restrictions on them, and also
disables the warning, as it is not necessary.
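A tiny sketch of the kind of change described (illustrative only, not the exact sctp code):
    #include <linux/slab.h>

    /* 'len' comes straight from the user via getsockopt(), so don't treat it
     * as a trusted kernel size: GFP_USER puts some more restrictions on the
     * allocation and __GFP_NOWARN suppresses the allocation-failure warning. */
    static void *example_alloc_user_sized(size_t len)
    {
        return kmalloc(len, GFP_USER | __GFP_NOWARN);
    }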
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
They don't need to be any bigger than that and with this we start a new
bitfield for tracking association runtime stuff, like zero window
situation.
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch addresses multiple problems :
UDP/RAW sendmsg() needs to get a stable struct ipv6_txoptions
while the socket is not locked: other threads can change np->opt
concurrently. Dmitry posted a syzkaller
(http://github.com/google/syzkaller) program demonstrating the
use-after-free.
Starting with TCP/DCCP lockless listeners, tcp_v6_syn_recv_sock()
and dccp_v6_request_recv_sock() also need to use RCU protection
to dereference np->opt once (before calling ipv6_dup_options()).
This patch adds full RCU protection to np->opt.
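The reader side of that pattern, as a hedged sketch (types and helpers here are illustrative, not the actual ipv6 structures):
    #include <linux/rcupdate.h>

    struct example_txoptions;

    struct example_inet6_sock {
        struct example_txoptions __rcu *opt;
    };

    static void example_use_opt(struct example_txoptions *opt) { }

    /* sendmsg() path: dereference np->opt once under RCU so a concurrent
     * setsockopt() swapping the options cannot free them under us. */
    static void example_sendmsg(struct example_inet6_sock *np)
    {
        struct example_txoptions *opt;

        rcu_read_lock();
        opt = rcu_dereference(np->opt);
        if (opt)
            example_use_opt(opt);   /* use or copy it before unlocking */
        rcu_read_unlock();
    }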
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
For a large map->value_size, user space can trigger memory allocation warnings like:
WARNING: CPU: 2 PID: 11122 at mm/page_alloc.c:2989
__alloc_pages_nodemask+0x695/0x14e0()
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff82743b56>] dump_stack+0x68/0x92 lib/dump_stack.c:50
[<ffffffff81244ec9>] warn_slowpath_common+0xd9/0x140 kernel/panic.c:460
[<ffffffff812450f9>] warn_slowpath_null+0x29/0x30 kernel/panic.c:493
[< inline >] __alloc_pages_slowpath mm/page_alloc.c:2989
[<ffffffff81554e95>] __alloc_pages_nodemask+0x695/0x14e0 mm/page_alloc.c:3235
[<ffffffff816188fe>] alloc_pages_current+0xee/0x340 mm/mempolicy.c:2055
[< inline >] alloc_pages include/linux/gfp.h:451
[<ffffffff81550706>] alloc_kmem_pages+0x16/0xf0 mm/page_alloc.c:3414
[<ffffffff815a1c89>] kmalloc_order+0x19/0x60 mm/slab_common.c:1007
[<ffffffff815a1cef>] kmalloc_order_trace+0x1f/0xa0 mm/slab_common.c:1018
[< inline >] kmalloc_large include/linux/slab.h:390
[<ffffffff81627784>] __kmalloc+0x234/0x250 mm/slub.c:3525
[< inline >] kmalloc include/linux/slab.h:463
[< inline >] map_update_elem kernel/bpf/syscall.c:288
[< inline >] SYSC_bpf kernel/bpf/syscall.c:744
To avoid a kmalloc that can never succeed (order >= MAX_ORDER), check that
elem->value_size and the computed elem_size are within limits for both hash-
and array-type maps.
Also add __GFP_NOWARN to the kmalloc(value_size | elem_size) calls to avoid OOM warnings.
Note that kmalloc(key_size) is highly unlikely to trigger OOM, since key_size <= 512,
so keep those kmallocs as-is.
Large value_size can cause integer overflows in elem_size and map.pages
formulas, so check for that as well.
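A condensed sketch of those checks (constants and names are illustrative; the real code checks both the value size and the derived element size against map-type-specific limits):
    #include <linux/err.h>
    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    static void *example_alloc_map_elem(u32 key_size, u32 value_size)
    {
        u64 elem_size = (u64)key_size + value_size;  /* 64-bit math: no overflow */

        /* refuse sizes kmalloc() can never satisfy instead of warning later */
        if (elem_size > KMALLOC_MAX_SIZE)
            return ERR_PTR(-E2BIG);

        return kmalloc(elem_size, GFP_USER | __GFP_NOWARN);
    }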
Fixes: aaac3ba95e ("bpf: charge user for creation of BPF maps and programs")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas says:
====================
Marvell Armada XP/370/38X Neta fixes
I'm sending v4 with a corrected commit log for the last patch, in order to
avoid possible conflicts between the branches, as suggested by Gregory
Clement.
Best regards,
Marcin Wojtas
Changes from v4:
* Correct commit log of patch 6/6
Changes from v2:
* Style fixes in patch updating mbus protection
* Remove redundant stable notifications except for patch 4/6
Changes from v1:
* update MBUS windows access protection register once, after whole loop
* add fixing value of MVNETA_RXQ_INTR_ENABLE_ALL_MASK
* add fixing error path for skb_build()
* add possibility of setting custom TX IP checksum limit in DT property
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The Ethernet controller found in the Armada 38x SoC family supports
TCP/IP checksumming with frame sizes larger than 1600 bytes, but
only on port 0.
This commit enables it by setting 'tx-csum-limit' to 9800B in the
'ethernet@70000' node.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the Armada 38x SoC can support IP checksum for jumbo frames only on
a single port, this feature should be enabled per-port,
rather than for the whole SoC.
This patch enables setting a custom TX IP checksum limit by adding a new
optional property to the mvneta device tree node. If it is not used, by
default 1600B is set for "marvell,armada-370-neta" and 9800B for other
compatible strings, which ensures backward compatibility. The binding
documentation is updated accordingly.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the actual RX processing, the same error path is used for both a
descriptor ring refill failure and an skb build failure. This is not
correct, because after a successful refill the ring has already been
updated with the newly allocated buffer. Then, if build_skb() failed,
the existing code left the original buffer unmapped.
This patch fixes the above situation by swapping the skb build error check
with the DMA unmap of the original buffer.
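A simplified sketch of the corrected ordering (helper names are placeholders, not the actual mvneta functions; error accounting and freeing of the raw buffer are omitted):
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/skbuff.h>

    static int example_refill(void) { return 0; }                 /* placeholder */
    static void example_deliver(struct sk_buff *skb) { dev_kfree_skb(skb); }

    static int example_rx_one(struct device *dev, void *data, dma_addr_t phys,
                              unsigned int size, unsigned int frag_size)
    {
        struct sk_buff *skb;

        if (example_refill() < 0)
            return -ENOMEM;         /* ring untouched, keep the old buffer */

        /* refill succeeded: the old buffer is ours now, so unmap it first... */
        dma_unmap_single(dev, phys, size, DMA_FROM_DEVICE);

        /* ...then a failed skb build no longer leaves it mapped */
        skb = build_skb(data, frag_size);
        if (!skb)
            return -ENOMEM;

        example_deliver(skb);
        return 0;
    }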
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e328941 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
A value originally defined in the driver was inappropriate. Even though
the ingress was somehow working, writing MVNETA_RXQ_INTR_ENABLE_ALL_MASK
to MVNETA_INTR_ENABLE didn't have any effect, because bits [31:16]
are reserved and read-only.
This commit updates MVNETA_RXQ_INTR_ENABLE_ALL_MASK to be compliant with
the controller's documentation.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
The MVNETA_RXQ_HW_BUF_ALLOC bit, which controls enabling hardware buffer
allocation, was mistakenly set as BIT(1). This commit fixes the assignment.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit adds the missing configuration of MBUS windows access protection
in the mvneta_conf_mbus_windows() function - a dedicated variable for that
purpose has remained unused since the initial mvneta support in v3.8. Because
of that, the register contents were inherited from the bootloader.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
There's one fix for the core here, we weren't reinitialising the actual
transferred length in messages when they get reused which meant that
we'd just keep adding to the length if a message is reused. This has
limited impact since it's only used in error handling cases but will
really mess anything that tries to use it up when it triggers.
As ever there's a small collection of driver specific fixes too.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJWXaUPAAoJECTWi3JdVIfQLPAH/1A6xgPcS2HnBu43ezzQu9JY
gd22gYN1+klyKhOzWv86zG0vbT/sVo2YCTb8nbnGfMCc5ghwlxF1fl2mCVXRTr2F
7dP2n43TmTNsuXQjO4tv2iJTkCXXuhoEr3DTrOu3+ywm+HNn1gUPgOl0i7zZpuIJ
AcQ0uuh33k5nQW3elj/gFcdjlVdHIq5oNpP4JBJeOn+ijsd/6DzVyW+TBiGnImWX
50XCuM+gUvj762y77T3rgORkWwkmJSQIjV+TNKXD/hIDvVWQK9OvtyMpGBOTe7nH
emEiy7BJnVIdpknUUVf0ZqjNpprxpLwU+ryQhVC9orP0dL64GlsbPkFOtIgxkDs=
=NlLa
-----END PGP SIGNATURE-----
Merge tag 'spi-fix-v4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"There's one fix for the core here, we weren't reinitialising the
actual transferred length in messages when they get reused which meant
that we'd just keep adding to the length if a message is reused. This
has limited impact since it's only used in error handling cases but
will really mess anything that tries to use it up when it triggers.
As ever there's a small collection of driver specific fixes too"
* tag 'spi-fix-v4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: bugfix: spi_message.transfer_length does not get reset
spi: pl022: handle EPROBE_DEFER for dma
spi: bcm63xx: use correct format string for printing a resource
spi: mediatek: single device does not require cs_gpios
spi: Add missing kerneldoc description for parameter
For cpufreq drivers which use the setpolicy interface, the policy is reset
to the default after an offline->online cycle. This can be reproduced by
setting the default policy of intel_pstate or longrun to ondemand and then
changing it to "performance". After offline and online, setpolicy will be
called with policy=ondemand.
For drivers using governors this condition is handled by storing
last_governor during offline and restoring it during online. The same should
be done for drivers using the setpolicy interface: store last_policy during
offline and restore it during online.
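In sketch form (the fields here are illustrative; the real change touches struct cpufreq_policy and the online/offline paths):
    struct example_policy {
        unsigned int policy;        /* e.g. performance/powersave selector */
        unsigned int last_policy;   /* remembered across offline/online */
    };

    static void example_cpu_offline(struct example_policy *p)
    {
        p->last_policy = p->policy;             /* save before going down */
    }

    static void example_cpu_online(struct example_policy *p, unsigned int def)
    {
        /* restore what the user had set instead of the driver default */
        p->policy = p->last_policy ? p->last_policy : def;
    }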
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This function is like drm_modeset_lock_all(), but it takes the lock
acquisition context as a parameter rather than storing it in the DRM
device's mode_config structure.
Implement drm_modeset_{,un}lock_all() in terms of the new function for
better code reuse, and add a note to the kerneldoc that new code should
use the new functions.
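Typical usage would look roughly like this (a sketch; error handling other than the deadlock backoff is omitted):
    #include <linux/errno.h>
    #include <drm/drm_modeset_lock.h>

    static int example_lock_everything(struct drm_device *dev)
    {
        struct drm_modeset_acquire_ctx ctx;
        int ret;

        drm_modeset_acquire_init(&ctx, 0);
    retry:
        ret = drm_modeset_lock_all_ctx(dev, &ctx);
        if (ret == -EDEADLK) {
            drm_modeset_backoff(&ctx);
            goto retry;
        }

        /* ... operate on the locked modeset state ... */

        drm_modeset_drop_locks(&ctx);   /* the ctx-based "unlock all" */
        drm_modeset_acquire_fini(&ctx);
        return ret;
    }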
v2: improve kerneldoc
v4: rename drm_modeset_lock_all_crtcs() to drm_modeset_lock_all_ctx()
and take mode_config's .connection_mutex instead of .mutex lock to
avoid lock inversion (Daniel Vetter), use drm_modeset_drop_locks()
which is now the equivalent of drm_modeset_unlock_all_ctx()
v5: do not take the dev->mode_config.connection_mutex in
drm_atomic_legacy_backoff() since drm_modeset_lock_all_ctx()
already keeps it, enhance kerneldoc for drm_modeset_lock_all_ctx()
(Daniel Vetter)
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1449075005-13937-1-git-send-email-thierry.reding@gmail.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the last change here, I neglected to update the cookie in one code
path: when a mgmt-tx has no real cookie sent to userspace as it doesn't
wait for a response, but is off-channel. The original code used the SKB
pointer as the cookie and always assigned the cookie to the TX SKB in
ieee80211_start_roc_work(), but my change turned this around and made
the code rely on a valid cookie being passed in.
Unfortunately, the off-channel no-wait TX path wasn't assigning one at
all, resulting in an uninitialized stack value being used. This wasn't
handed back to userspace as a cookie (since in the no-wait case there
isn't a cookie), but it was tested for non-zero to distinguish between
mgmt-tx and off-channel.
Fix this by assigning a dummy non-zero cookie unconditionally, and get
rid of a misleading comment and some dead code while at it. I'll clean
up the ACK SKB handling separately later.
Fixes: 3b79af973c ("mac80211: stop using pointers as userspace cookies")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
DFS channels should not be actively scanned, as we can't be sure
whether we are allowed to or not.
If the current channel is in the DFS band, an active scan might be
performed after CSA, but we have no guarantee about other channels;
therefore it is safer to prevent active scanning altogether.
Cc: stable@vger.kernel.org
Signed-off-by: Antonio Quartulli <antonio@open-mesh.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Interfaces are initialized (setup) on addition,
and torn down on removal.
However, the p2p device is torn down when stopped,
resulting in the next p2p start operation being done
on an uninitialized interface.
Solve it by calling ieee80211_teardown_sdata() only
on interface removal (for the non-netdev case).
Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
[squashed in fix to call teardown after unregister]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Sunil Goutham says:
====================
thunderx: miscellaneous fixes
This patch series contains fixes for various issues observed
with BGX and NIC drivers.
Changes from v1:
- Fixed comment style in the first patch of the series
- Removed 'Increase transmit queue length' patch from the series,
will recheck if it's a driver or system issue and resubmit.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Enable or disable a BGX LMAC's RX/TX based on the corresponding VF's
status. Otherwise, when multiple LMACs' physical links are up,
packets from all LMACs whose corresponding VF is not yet
initialized will get forwarded to VF0. This is due to VNIC's default
configuration, where CPI, RSSI etc. point to VF0/QSET0/RQ0.
This patch prevents multiple copies of packets on VF0.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Call netif_carrier_on() only if the interface's link is up. Switching this on
by default upon IFF_UP causes issues with Ethernet channel bonding
in LACP mode: the initial NETDEV_CHANGE notification was being skipped.
Also fixed some issues with link/speed/duplex reporting via ethtool.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Properly set the CQ timer threshold and also set it to 2us.
With the previous incorrect settings it was 0.5us, which is too low.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During VNIC or BGX driver teardown, wait for any already scheduled delayed
work to finish before destroying it.
Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@caviumnetworks.com>
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@caviumnetworks.com>
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As soon as we leave the spinlock after the job has been added to the job
queue, we can no longer rely on the job's data to be available.
I have seen a null-pointer dereference due to sched == NULL in
amd_sched_wakeup via amd_sched_entity_push_job and
amd_sched_ib_submit_kernel_helper. Since the latter initializes
sched_job->sched with the address of the ring scheduler, which is
guaranteed to be non-NULL, this race appears to be a likely culprit.
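In sketch form, the safe pattern looks like this (types and names are illustrative, not the actual amd scheduler code):
    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct example_sched;

    struct example_job {
        struct list_head node;
        struct example_sched *sched;
    };

    struct example_entity {
        spinlock_t queue_lock;
        struct list_head queue;
    };

    static void example_sched_wakeup(struct example_sched *sched) { }

    static void example_push_job(struct example_entity *entity,
                                 struct example_job *job)
    {
        /* copy what we need while the job is still exclusively ours */
        struct example_sched *sched = job->sched;

        spin_lock(&entity->queue_lock);
        list_add_tail(&job->node, &entity->queue);
        spin_unlock(&entity->queue_lock);

        /* the consumer may already have run and freed 'job' by now,
         * so only touch the local 'sched' copy */
        example_sched_wakeup(sched);
    }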
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Bugzilla: https://bugs.freedesktop.org/attachment.cgi?bugid=93079
Reviewed-by: Christian König <christian.koenig@amd.com>
Add the missing error check for when the operation fails.
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
The SCSI sd driver probes SCSI devices asynchronously. The sd_remove()
function, called indirectly by device_del(), waits until asynchronous
probing has finished. Since the block layer queue must only be cleaned
up after probing has finished, device_del() has to be called before
blk_cleanup_queue(). Hence revert commit bf2cf3baa2.
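The ordering constraint, in sketch form (a simplification of the teardown path, not the literal scsi code):
    #include <linux/blkdev.h>
    #include <linux/device.h>

    static void example_remove_scsi_device(struct device *dev,
                                           struct request_queue *q)
    {
        /* device_del() ends up in sd_remove(), which waits for the
         * asynchronous probe to finish... */
        device_del(dev);

        /* ...so only now is it safe to tear down the block queue */
        blk_cleanup_queue(q);
    }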
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
Pull HID fixes from Jiri Kosina:
- regression fix for hid-lg driver from Benjamin Tissoires
- quirk for Logitech G710+ from Jimmy Berry
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
HID: lg: restrict filtering out of first interface to G29 only
HID: usbhid: add Logitech G710+ keyboard quirk NOGET
- Drop a redundant if-clause from Kconfig
- Fix a missing of_node_put() memory leak in the
Freescale i.MX driver
- Fix 64bit compilation of the Qualcomm SSBI driver.
- Fix a logic inversion in the Mediatek driver.
- Fix a compilation error for the odd one out in the
Super-H instance of the SH PFC driver.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJWXv4gAAoJEEEQszewGV1zl0AQAMljInkraH4CGdP7i3ynPYjn
BIDzbm6Hbgn89KTodYvwmufTvDsIcmMY8eIsm+LfaavHRhc3/ddznsIB1ow6MzAr
uwDtUSGvVw/lBnbsXAQw41at1fOihTZfeIgrXL1uhBL1TX7gKFthRbKp73hiCUnk
RkrwLu/vt1YbTFRbLW6PymV/9zNyV5Cm9iimfwlTRft3wmzs1gYD5w5s9b7grtNm
lCL1h+HBawuv+RSSqgHohc3zXqAkI+PfU1WZWpseIEZt5QfZ/31ogFvG/9bA0Ifq
WAc0cHDF7sPnv8jHNeJ4O0g2Kg8vyqeyfkDP2qREv2QmmNv0FF0o/HEGV9Efc6lL
hPQeOxcew7OKhJ+vTy/SA/wIlrZmbqk3Mw3j8yxoezuQAuQ5qGJqeLkWhMWrOsXr
UixxMBFhMW2BuNs9JScTm5+mUAf8t+nRY8dqEKdRlLReOy1FJ0Yd7ZJh+vBpMqmf
R7uh2DUsNrzwUfcF4XjGcb3wuEOECwW76CcOznCSPrG7XB3+nQG78Ia63IC4CfPs
9+CzGQWBVDZziSgKDsaNzBSyE4Ipfn2dxdBLRDuUU/lwg368Xp8Ac2vzyiMdOXi1
FIB/00U8dEBC7Nn2q/cumO+pVbYVjdzu51NsBR5QQHez7e3d+DlTumcOftEe7iTp
t6BXUesJY0rUmDLZtjf4
=f61M
-----END PGP SIGNATURE-----
Merge tag 'pinctrl-v4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pincontrol fixes from Linus Walleij:
"These are some v4.4 pin control fixes:
- Drop a redundant if-clause from Kconfig
- Fix a missing of_node_put() memory leak in the Freescale i.MX
driver
- Fix 64bit compilation of the Qualcomm SSBI driver.
- Fix a logic inversion in the Mediatek driver.
- Fix a compilation error for the odd one out in the Super-H instance
of the SH PFC driver"
* tag 'pinctrl-v4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: sh-pfc: sh7734: Add missing cfg macro parameter to fix build
pinctrl: mediatek: Add get_direction support.
pinctrl: fix qcom ssbi drivers for 64-bit compilation
pinctrl: imx1-core: add missing of_node_put
pinctrl: remove redundant if conditional from Kconfig
- Fix a bunch of possible NULL references found by Coccinelle
jockeys.
- Stop creating Tegra's debugfs on everything and its dog.
This is an ARM multiplatform kernel issue.
- Fix an oops in gpiolib for NULL names on named GPIOs.
- Fix a complex OMAP1 bug in the OMAP driver.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJWXvnbAAoJEEEQszewGV1z4yAQAMu8FON3frlLkXdgAs3CSl1v
sBbtcI9T0Mda9cp78rUAo2diP5EnI13wRiQDCsf/9LI85//cE4fVXDIES6AEgrvf
CV6VAI5XONKhN2o0DDRqCiIYNLG22GDH1B4Z0w77q0pr/aUavGJex9Jb9Cp36yUR
qERPgEdtbTqdI3RnzatBTwNB4j23NMb5sa7A8X3H6Fe+e3ieEuQiYmBdQqhoF8zV
f3l/EaU8TDS7Zm2CGGlB79jHmHqTpJ4/3qmMbaR0pQfO1MftRRS4xuS8ca73Qjt8
tc2uL20Z724H+2nGc2UZfmDkAL50KyWKRHPGYMVkJAlxEqeWqy4+9OQZC17YnedA
JemVFfdpLBhObCXHhrwwL9uif0ViPW+IB/RfCJ0h8ta0w5VsxHAeOD4l1v28vALX
kp4bAQExXU2pLDPlerCvXHDurJpESJDihFloII75brww+3xsaLiodhNLGiDlfctS
OD1Eskm+3Lk8w7pDrEdsMK209F8T4Vfvu6/8qoyQmFf4fHgWm4y7+vPn5fNWRB9H
qZPmveCqi8EF75lIQF5/+OliqMlFMKFGe+HrBNP9kmn3LBuI0u2+vjIHuOU8pr6m
quwKgjzaWWwltyXwptl2CZFPz45cvAKiykVWKjIWuvC3leeoGdgV/OPJ8uZca42b
F+8dRp/ppHkhN84pZl3d
=aggl
-----END PGP SIGNATURE-----
Merge tag 'gpio-v4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio
Pull GPIO fixes from Linus Walleij:
"Some GPIO fixes for the v4.4 series:
- Fix a bunch of possible NULL references found by Coccinelle
jockeys.
- Stop creating Tegra's debugfs on everything and its dog. This is
an ARM multiplatform kernel issue.
- Fix an oops in gpiolib for NULL names on named GPIOs.
- Fix a complex OMAP1 bug in the OMAP driver"
* tag 'gpio-v4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
gpio: omap: drop omap1 mpuio specific irq_mask/unmask callbacks
gpiolib: fix oops, if gpio name is NULL
gpio-tegra: Do not create the debugfs entry by default
gpio: palmas: fix a possible NULL dereference
gpio: syscon: fix a possible NULL dereference
gpio: 74xx: fix a possible NULL dereference
After 614732eaa1, no refcount is maintained for the vport-vxlan module.
This allows userspace to remove that module while vport-vxlan
devices still exist, which leads to a later oops.
v1 -> v2:
- move the vport 'owner' initialization into ovs_vport_ops_register()
and make that function a macro
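A sketch of what the macro form amounts to (names are illustrative; the point is that THIS_MODULE expands in the vport module doing the registration):
    #include <linux/module.h>

    struct example_vport_ops {
        struct module *owner;
        /* ... type and callback fields ... */
    };

    static int __example_vport_ops_register(struct example_vport_ops *ops)
    {
        return 0;   /* placeholder for the real list insertion */
    }

    /* A macro (not a function) so THIS_MODULE refers to the caller's module,
     * letting the core grab a reference per vport and pin e.g. vport-vxlan. */
    #define example_vport_ops_register(ops)        \
        ({                                         \
            (ops)->owner = THIS_MODULE;            \
            __example_vport_ops_register(ops);     \
        })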
Fixes: 614732eaa1 ("openvswitch: Use regular VXLAN net_device device")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are platforms that don't need the full GMBUS power domain (BXT)
while others do (PCH, VLV/CHV). To optimize this we would need to add
a new power domain, but it's not clear how much we would benefit given
the short time we hold the reference. So for now let's keep things
simple.
This fixes a regression introduced in
commit 237ed86c69
Author: Sonika Jindal <sonika.jindal@intel.com>
Date: Tue Sep 15 09:44:20 2015 +0530
drm/i915: Check live status before reading edid
v2:
- fix commit message, PCH won't take any redundant power resource after
this change (Ville)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[fix commit message in v2 (Imre)]
[Cherry-picked from drm-intel-next-queued 29bb94bb (Imre)]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448643329-18675-6-git-send-email-imre.deak@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
MISSING_CASE() would have been useful to track down a recent problem in
intel_display_port_aux_power_domain(), so add it there and to a few related
helpers. This was also suggested by Ville in his review of the latest
DMC/DC changes, we forgot to address that.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
[Cherry-picked from drm-intel-next-queued b9fec167 (Imre)]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448643329-18675-5-git-send-email-imre.deak@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Due to the current sharing of the DDI encoder between DP and HDMI
connectors we can run the DP detection after the HDMI detection has
already set the shared encoder's type. For now solve this keeping the
current behavior and running the detection in this case too. For a proper
solution Ville suggested to split the encoder into an HDMI and DP one, that
can be done as a follow-up.
This issue triggers the WARN in intel_display_port_aux_power_domain() and
was introduced in:
commit 25f78f58e5
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Mon Nov 16 15:01:04 2015 +0100
drm/i915: Clean up AUX power domain handling
CC: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
[Cherry-picked from drm-intel-next-queued 651174a4 (Imre)]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1448643329-18675-4-git-send-email-imre.deak@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>