Surprisingly, kbuild can't cope with tristates in the
<module>-$(CONFIG_FOO) pattern. This patch hacks up a solution.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Michal Marek <mmarek@suse.com>
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
exynos_drm_gem_mmap_buffer() is not used outside this file, so make it static.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
fimd_dp_clock_enable() was always setting the clock to enabled;
fix it to actually use the value that is set in 'val'.
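As an illustration, a minimal sketch of the kind of fix described; the
register and bit names (DP_MIE_CLKCON, DP_MIE_CLK_DP_ENABLE,
DP_MIE_CLK_DISABLE) and the context structure are assumptions, not
verbatim driver code:

    static void fimd_dp_clock_enable(struct fimd_context *ctx, bool enable)
    {
            /* Before the fix the enable value was written unconditionally;
             * now the computed 'val' is actually used. */
            u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;

            writel(val, ctx->regs + DP_MIE_CLKCON);
    }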
Reported-by: Emilio López <emilio.lopez@collabora.co.uk>
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
This patch removes unnecessary pm suspend/resume functions.
All kms sub-drivers will be controlled by the top-level Exynos drm driver
and by connector dpms, so these sub-drivers shouldn't have their own
pm interfaces.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
When disabling/enabling a crtc the primary area must be updated
independently of which crtc has been disabled/enabled.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1264735
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
A very tiny temporal window exists in the residue calculation where:
- upon entering residue calculation, the transfer is ongoing
- when reading the current transfer pointer, it just changed to
the "finisher/linker" descriptor
In this case, the residue returned is the whole transfer length instead
of 0. Fix it.
This appears almost exclusively in one extreme case, where the driver is
used by older clients which inquire about the residue in interrupt context,
such as the smsc91x ethernet driver, in a tight loop:
  interrupt_handler()
      dmaengine_submit()
      do {
          dmaengine_tx_status()
      } while (residue > 0 || status != DMA_ERROR)
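A hedged sketch of the check implied by the fix; the types and fields below
are stand-ins for illustration only, not the actual pxa_dma internals:

    /* Illustrative sketch only: minimal stand-in types, not pxa_dma code. */
    struct desc_sketch {
            unsigned int finisher_index;    /* index of the finisher/linker desc */
            unsigned int bytes_left[8];     /* residue if stopped at desc i */
    };

    static unsigned int residue_sketch(const struct desc_sketch *d,
                                       unsigned int current_index)
    {
            /* The hardware pointer already reached the finisher/linker
             * descriptor: the transfer is complete, so the residue is 0,
             * not the whole transfer length. */
            if (current_index == d->finisher_index)
                    return 0;

            return d->bytes_left[current_index];
    }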
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A very small number of devices don't use the flow control offered by
requestor lines. In these specific cases, the pxa dma driver should be
aware of that and not try to use a requestor line.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The valid pchan range is 0 ~ d->dma_requests - 1.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Reviewed-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When putting a descriptor back on the free descs list, some fields are
not set to 0; this can cause bugs if someone uses the descriptor without
keeping that in mind.
Descriptors are not put back one by one, so it is easier to clean
descriptors when we request them.
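A minimal sketch of that approach (cleaning a descriptor when it is taken
from the free list rather than when it is put back); the structure and
field names are illustrative, not the actual at_xdmac code:

    static struct sketch_desc *sketch_desc_get(struct sketch_chan *chan)
    {
            struct sketch_desc *desc;

            desc = list_first_entry_or_null(&chan->free_descs_list,
                                            struct sketch_desc, desc_node);
            if (desc) {
                    list_del(&desc->desc_node);
                    /* Clear state left over from the previous use so a
                     * stale field can't confuse the next user. */
                    desc->active_xfer = false;
                    desc->xfer_size = 0;
                    memset(&desc->lld, 0, sizeof(desc->lld));
            }
            return desc;
    }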
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Cc: stable@vger.kernel.org #4.2
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The addressing mode we were using was not only incrementing the address at
each microblock, but also at each data boundary, which was severely slowing
the transfer, without any benefit since we were not using the data stride.
Switch to the microblock increment only in order to get back to an
acceptable performance level.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: 6007ccb577 ("dmaengine: xdmac: Add interleaved transfer support")
Cc: stable@vger.kernel.org #4.2
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dmitry Vyukov reported the following using trinity and the memory
error detector AddressSanitizer
(https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel).
[ 124.575597] ERROR: AddressSanitizer: heap-buffer-overflow on
address ffff88002e280000
[ 124.576801] ffff88002e280000 is located 131938492886538 bytes to
the left of 28857600-byte region [ffffffff81282e0a, ffffffff82e0830a)
[ 124.578633] Accessed by thread T10915:
[ 124.579295] inlined in describe_heap_address
./arch/x86/mm/asan/report.c:164
[ 124.579295] #0 ffffffff810dd277 in asan_report_error
./arch/x86/mm/asan/report.c:278
[ 124.580137] #1 ffffffff810dc6a0 in asan_check_region
./arch/x86/mm/asan/asan.c:37
[ 124.581050] #2 ffffffff810dd423 in __tsan_read8 ??:0
[ 124.581893] #3 ffffffff8107c093 in get_wchan
./arch/x86/kernel/process_64.c:444
The address checks in the 64bit implementation of get_wchan() are
wrong in several ways:
- The lower bound of the stack is not the start of the stack
page. It's the start of the stack page plus sizeof(struct
thread_info).
- The upper bound must be:
top_of_stack - TOP_OF_KERNEL_STACK_PADDING - 2 * sizeof(unsigned long).
The 2 * sizeof(unsigned long) is required because the stack pointer
points at the frame pointer. The layout on the stack is: ... IP FP
... IP FP. So we need to make sure that both IP and FP are within the
bounds.
Fix the bound checks and get rid of the mix of numeric constants, u64
and unsigned long. Making everything unsigned long allows us to use the
same function for 32-bit as well.
Use READ_ONCE() when accessing the stack. This does not prevent a
concurrent wakeup of the task and the stack changing, but at least it
avoids TOCTOU.
Also check task state at the end of the loop. Again that does not
prevent concurrent changes, but it avoids walking for nothing.
Add proper comments while at it.
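A hedged sketch of the resulting bound computation (simplified, not
verbatim mainline code):

    unsigned long start, bottom, top, sp;

    start  = (unsigned long)task_stack_page(p);
    bottom = start + sizeof(struct thread_info);
    top    = start + THREAD_SIZE
                   - TOP_OF_KERNEL_STACK_PADDING
                   - 2 * sizeof(unsigned long);

    sp = READ_ONCE(p->thread.sp);
    if (sp < bottom || sp > top)
            return 0;
    /* sp points at the saved frame pointer; the saved IP sits right
     * above it, and both are guaranteed to be within [bottom, top]. */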
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Based-on-patch-from: Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: kasan-dev <kasan-dev@googlegroups.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20150930083302.694788319@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
pm_runtime_enable is called in probe to enable runtime PM
for the wm8962 codec, but pm_runtime_disable is called neither in the
remove callback nor in the error path if probe fails after runtime
PM has been enabled; this causes an unbalanced pm_runtime_enable.
This patch adds pm_runtime_disable in the remove callback and the error
path to balance pm_runtime_enable.
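A minimal sketch of the balanced pattern, assuming a generic probe/remove
pair (the helper name is a placeholder, not the exact wm8962 code):

    static int codec_probe_sketch(struct device *dev)
    {
            int ret;

            pm_runtime_enable(dev);

            ret = register_codec_sketch(dev);  /* placeholder for the rest of probe */
            if (ret) {
                    pm_runtime_disable(dev);   /* error path: undo the enable */
                    return ret;
            }
            return 0;
    }

    static int codec_remove_sketch(struct device *dev)
    {
            pm_runtime_disable(dev);           /* balance probe's pm_runtime_enable */
            return 0;
    }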
Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Pull RCU fixes from Ingo Molnar:
"Two RCU fixes:
- work around bug with recent GCC versions.
- fix false positive lockdep splat"
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
rcu: Suppress lockdep false positive for rcp->exp_funnel_mutex
rcu: Change _wait_rcu_gp() to work around GCC bug 67055
As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
having initialized the IPC object state. Yes, we initialize the IPC
object in a locked state, but with all the lockless RCU lookup work,
that IPC object lock no longer means that the state cannot be seen.
We already did this for the IPC semaphore code (see commit e8577d1f03:
"ipc/sem.c: fully initialize sem_array before making it visible") but we
clearly forgot about msg and shm.
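The ordering principle, sketched in hedged form (field names and error
handling are illustrative; the real msg/shm code differs in detail):

    /* Fully initialize the object first ... */
    msq->q_stime = msq->q_rtime = 0;
    INIT_LIST_HEAD(&msq->q_messages);
    INIT_LIST_HEAD(&msq->q_receivers);
    INIT_LIST_HEAD(&msq->q_senders);

    /* ... and only then publish it where lockless RCU lookups can see it. */
    retval = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);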
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It was added for completeness, but we don't have any users
for it yet. Daniel noted that it may be racy. Remove it.
Change-Id: I5f5546f8911a4f294008a62dc86a73f3face38d1
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds a compatible string to support R-Car H3.
Since the HS-USB controller of R-Car H3 has almost the same specification
as R-Car Gen2 (both have 16 pipes and usb-dmac), this patch
sets the "type" of renesas_usbhs_driver_param to USBHS_TYPE_RCAR_GEN2.
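A hedged sketch of how such an entry typically looks in the driver's
of_device_id table (the exact compatible string is an assumption):

    static const struct of_device_id usbhs_of_match_sketch[] = {
            {
                    .compatible = "renesas,usbhs-r8a7795",     /* R-Car H3 (assumed string) */
                    .data = (void *)USBHS_TYPE_RCAR_GEN2,      /* reuse the Gen2 parameters */
            },
            { /* sentinel */ },
    };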
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
This patch fixes the following warning in a 64-bit architecture environment:
./drivers/usb/renesas_usbhs/common.c:496:25: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
dparam->type = of_id ? (u32)of_id->data : 0;
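The usual fix for this class of warning is to cast through a pointer-sized
integer type first, e.g. (hedged sketch, not necessarily the exact patch):

    /* uintptr_t is pointer-sized, so no -Wpointer-to-int-cast on 64-bit;
     * the subsequent truncation to the u32 'type' field is intentional. */
    dparam->type = of_id ? (uintptr_t)of_id->data : 0;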
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
If dma_pool_alloc() fails we jump to fail and release all the
bd_tables which have been added to the chain, but we missed freeing the
bd_table which was just allocated and not yet added to the chain of
bd_tables.
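The general shape of the fix, sketched with placeholder labels (the bdc
driver's actual error path differs in detail):

    bd_table = kzalloc(sizeof(*bd_table), GFP_ATOMIC);
    if (!bd_table)
            goto fail;

    bd_table->start_bd = dma_pool_alloc(bd_pool, GFP_ATOMIC, &dma);
    if (!bd_table->start_bd) {
            kfree(bd_table);        /* not yet chained, so 'fail' won't free it */
            goto fail;              /* 'fail' releases only the chained bd_tables */
    }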
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Felipe Balbi <balbi@ti.com>
The CONFIG_MIPS_MT symbol can be selected by CONFIG_MIPS_VPE_LOADER in
addition to CONFIG_MIPS_MT_SMP. We only want MT code in the CPS SMP boot
vector if we're using MT for SMP. Thus switch the config symbol we ifdef
against to CONFIG_MIPS_MT_SMP.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10867/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The MT-specific code in mips_cps_boot_vpes can safely be omitted from
kernels which don't support MT, with the default VPE==0 case being used
as it would be after the has_mt (Config3.MT) check failed at runtime.
Discarding the code entirely will save us a few bytes & allow cleaner
handling of MT ASE instructions by later patches.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10866/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The has_mt macro ended with a branch, leaving its callers with a delay
slot that would be executed if Config3.MT is not set. However it would
not be executed if Config3 (or earlier Config registers) don't exist
which makes it somewhat inconsistent at best. Fill the delay slot in the
macro & fix the mips_cps_boot_vpes caller appropriately.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.16+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10865/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
v2: Don't forget to actually check the cstate->active value when
tallying up the number of active CRTCs. (Ander)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We already ensure that pstate->visible = false when crtc->active = false
during runtime programming; make sure we follow the same logic when
reading out initial hardware state.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Calculate pipe watermarks during atomic calculation phase, based on the
contents of the atomic transaction's state structure. We still program
the watermarks at the same time we did before, but the computation now
happens much earlier.
While this patch isn't too exciting by itself, it paves the way for
future patches. The eventual goal (which will be realized in future
patches in this series) is to calculate multiple sets of watermark
values up front, and then program them at different times (pre- vs
post-vblank) on the platforms that need a two-step watermark update.
While we're at it, s/intel_compute_pipe_wm/ilk_compute_pipe_wm/ since
this function only applies to ILK-style watermarks and we have a
completely different function for SKL-style watermarks.
Note that the original code had a memcmp() in ilk_update_wm() to avoid
calling ilk_program_watermarks() if the watermarks hadn't changed. This
memcmp vanishes here, which means we may do some unnecessary result
generation and merging in cases where watermarks didn't change, but the
lower-level function ilk_write_wm_values already makes sure that we
don't actually try to program the watermark registers again.
v2: Squash a few commits from the original series together; no longer
leave pre-calculated wm's in a separate temporary structure since
it's easier to follow the logic if we just cut over to using the
pre-calculated values directly.
v3:
- Pass intel_crtc instead of drm_crtc to .compute_pipe_wm() entrypoint
and use intel_atomic_get_crtc_state() to avoid need for extra
casting. (Ander)
- Drop unused intel_check_crtc() function prototype. (Ander)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A future patch will calculate these during the atomic 'check' phase
rather than at WM programming time, so let's store the watermark
values we're planning to use in the CRTC state; the values actually
active on the hardware remain in intel_crtc.
While we're at it, do some minor restructuring to keep ILK and SKL
values in a union.
v2: Don't move cxsr_allowed to state (Maarten)
v3: Only calculate watermarks in state. Still keep active watermarks in
intel_crtc itself. (Ville)
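A rough sketch of the layout this describes; the struct and field names
approximate the i915 code rather than quote it:

    /* Per-CRTC watermark state carried in the atomic CRTC state; the
     * values actually live on the hardware stay in intel_crtc. */
    struct intel_crtc_wm_state {
            union {
                    struct intel_pipe_wm ilk;   /* ILK-style watermarks */
                    struct skl_pipe_wm skl;     /* SKL-style watermarks */
            };
    };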
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Split ilk_update_wm() into two parts; one doing the programming
and the other the calculations.
v2: Fix typo in commit message
v3 (by Matt): Heavily rebased for current codebase.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The only platform that still has an update_sprite_wm entrypoint is SKL;
on SKL, intel_update_sprite_watermarks just updates intel_plane->wm and
then performs a regular watermark update. However intel_plane->wm is
only used to update a couple fields in intel_wm_config, and those fields
are never used by the SKL code, so on SKL an update_sprite_wm is
effectively identical to an update_wm call. Since we're already
ensuring that the regular intel_update_wm is called any time we'd try to
call intel_update_sprite_watermarks, the whole call is redundant and can
be dropped.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Determine whether we need to apply this workaround at atomic check time
and just set a flag that will be used by the main watermark update
routine.
Moving this workaround into the atomic framework reduces
ilk_update_sprite_wm() to just a standard watermark update, so drop it
completely and just ensure that ilk_update_wm() is called whenever a
sprite plane is updated in a way that would affect watermarks.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just pull the info out of the state structures rather than staging
it in an additional set of structures. To make this more
straightforward, we change the signature of several internal WM
functions to take the crtc state as a parameter.
v2:
- Don't forget to skip cursor planes on a loop in the DDB allocation
function to match original behavior. (Ander)
- Change a use of intel_crtc->active to cstate->active. They should
be identical, but it's better to be consistent. (Ander)
- Rework more function signatures to pass states rather than crtc for
consistency. (Ander)
v3:
- Add missing "+ 1" to skl_wm_plane_id()'s 'overlay' case. (Maarten)
- Packed formats should pass '0' to drm_format_plane_cpp(), not 1.
(Maarten)
- Drop unwanted WARN_ON() for disabled planes when calculating data
rate for SKL. (Maarten)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A bunch of SKL watermark-related structures have the cursor plane as a
separate entry from the rest of the planes. Since a previous patch
updated I915_MAX_PLANES such that those plane arrays now have a slot for
the cursor, update the code to use the new slot in the existing plane
arrays and kill off the cursor-specific structures.
There shouldn't be any functional change here; this is just shuffling
around how the data is stored in some of the data structures. The whole
patch is generated with Coccinelle via the following semantic patch:
@@ struct skl_pipe_wm_parameters WMP; @@
- WMP.cursor
+ WMP.plane[PLANE_CURSOR]
@@ struct skl_pipe_wm_parameters *WMP; @@
- WMP->cursor
+ WMP->plane[PLANE_CURSOR]
@@ @@
struct skl_pipe_wm_parameters {
...
- struct intel_plane_wm_parameters cursor;
...
};
@@
struct skl_ddb_allocation DDB;
expression E;
@@
- DDB.cursor[E]
+ DDB.plane[E][PLANE_CURSOR]
@@
struct skl_ddb_allocation *DDB;
expression E;
@@
- DDB->cursor[E]
+ DDB->plane[E][PLANE_CURSOR]
@@ @@
struct skl_ddb_allocation {
...
- struct skl_ddb_entry cursor[I915_MAX_PIPES];
...
};
@@
struct skl_wm_values WMV;
expression E1, E2;
@@
(
- WMV.cursor[E1][E2]
+ WMV.plane[E1][PLANE_CURSOR][E2]
|
- WMV.cursor_trans[E1]
+ WMV.plane_trans[E1][PLANE_CURSOR]
)
@@
struct skl_wm_values *WMV;
expression E1, E2;
@@
(
- WMV->cursor[E1][E2]
+ WMV->plane[E1][PLANE_CURSOR][E2]
|
- WMV->cursor_trans[E1]
+ WMV->plane_trans[E1][PLANE_CURSOR]
)
@@ @@
struct skl_wm_values {
...
- uint32_t cursor[I915_MAX_PIPES][8];
...
- uint32_t cursor_trans[I915_MAX_PIPES];
...
};
@@ struct skl_wm_level WML; @@
(
- WML.cursor_en
+ WML.plane_en[PLANE_CURSOR]
|
- WML.cursor_res_b
+ WML.plane_res_b[PLANE_CURSOR]
|
- WML.cursor_res_l
+ WML.plane_res_l[PLANE_CURSOR]
)
@@ struct skl_wm_level *WML; @@
(
- WML->cursor_en
+ WML->plane_en[PLANE_CURSOR]
|
- WML->cursor_res_b
+ WML->plane_res_b[PLANE_CURSOR]
|
- WML->cursor_res_l
+ WML->plane_res_l[PLANE_CURSOR]
)
@@ @@
struct skl_wm_level {
...
- bool cursor_en;
...
- uint16_t cursor_res_b;
- uint8_t cursor_res_l;
...
};
v2: Use a PLANE_CURSOR enum entry rather than making the code reference
I915_MAX_PLANES or I915_MAX_PLANES+1, which was confusing. (Ander)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Let the compiler figure out what I915_MAX_PLANES is from 'enum plane' so
that we don't need a separate #define.
While we're at it, add the cursor plane to the enum. This will cause
I915_MAX_PLANES to now include the cursor plane in its count (it didn't
previously). This change is safe since we currently only use this
value in array declarations (never in the actual code logic); we just
wind up allocating slightly more memory than we need to. A followup
patch will cause various parts of the code to start using the extra
array element where appropriate.
(This patch probably should have been squashed with the followup patch,
but I couldn't figure out how to get Coccinelle to modify enum
declarations...)
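The idea, sketched (the enumerator names follow the rest of this series;
treat the exact list as an approximation):

    /* The compiler now derives I915_MAX_PLANES from the enum itself,
     * and the count includes the new cursor slot. */
    enum plane {
            PLANE_A = 0,
            PLANE_B,
            PLANE_C,
            PLANE_CURSOR,
            I915_MAX_PLANES,
    };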
Suggested-by: Ander Conselvan De Oliveira <conselvan2@gmail.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just pull the info out of the CRTC state structure rather than staging
it in an additional structure.
Note that we use cstate->active rather than intel_crtc->active which may
appear to be a change in behavior. However since we're no longer trying
to recalculate watermarks during the "pipe off" stage of a modeset,
intel_crtc->active and cstate->active should always be identical when
watermarks are calculated (at least for ILK-style platforms).
v2: Clarify reasoning for cstate->active and add a WARN_ON to the code
to assert that it really is always identical to intel_crtc->active
as expected.
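The v2 assertion might look roughly like this (hedged; the exact expression
in the driver may differ):

    /* cstate->base.active and intel_crtc->active are expected to agree
     * whenever ILK watermarks are computed. */
    WARN_ON(cstate->base.active != intel_crtc->active);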
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just pull the info out of the plane state structure rather than staging
it in an additional structure.
v2: Add 'visible' condition to sprites_scaled so that we don't limit the
WM level when the sprite isn't enabled. (Ville)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by(v1): Ander Conselvan de Oliveira <conselvan2@gmail.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In commit
commit e4ca061275
Author: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
Date: Wed Jul 8 15:31:52 2015 +0200
drm/i915: Don't forget to mark crtc as inactive after disable
we added extra watermark updates to all of the .crtc_disable()
entrypoints to avoid problems with system resume on SKL. Those
disable entrypoints are currently called in just two places in the
driver: intel_atomic_commit (i.e., during a modeset) and
intel_crtc_disable_noatomic (which is called during hardware readout).
It seems that this extra watermark recalculation should only be
important in the latter case (which happens during a resume operation);
the former case should always have appropriate watermark programming
happening at other points in the modeset sequence.
Let's move the watermark update out of the .crtc_disable() entrypoints
and place it directly in intel_crtc_disable_noatomic() so that it only
happens on S3 resume and not during a regular modeset (since the
existing watermark handling should properly update watermarks during
normal atomic commits).
Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The RC6 residency time unit is 833.33ns on BXT according to the
specification, so update the calculation accordingly. Use the same approach
as on CHV/VLV and divide by the corresponding frequency, as I think this is
the more natural unit for what the HW does internally.
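For reference, 833.33ns per count corresponds to a 1.2 MHz reference clock,
so a hedged sketch of the conversion (not the exact i915 code) is:

    /* 1 count = 833.33 ns => 1.2 MHz, i.e. 1200 counts per millisecond. */
    u64 rc6_residency_ms = div_u64(raw_rc6_count, 1200);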
v2:
- add missing IS_BROXTON check (Ville)
Testcase: igt/pm_rc6_residency
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
GuC expects two bits for Render and Media domain separately when
driver sends data via host2guc SAMPLE_FORCEWAKE. Bit 0 is for
Render and bit 1 is for Media domain.
v2: Keep in sync with the code for WaRsDoubleRc6WrlWithCoarsePowerGating
v1: Add parameters definition to avoid magic value
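The definitions implied by the description might look like the following
(bit positions come from the text above; the macro names are assumptions):

    #define GUC_FORCEWAKE_RENDER    (1 << 0)    /* Render domain */
    #define GUC_FORCEWAKE_MEDIA     (1 << 1)    /* Media domain */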
Signed-off-by: Alex Dai <yu.dai@intel.com>
Reviewed-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If there is a DMA zone (usually 24bit = 16MB I believe), but no DMA32
zone, as is the case for some 32-bit kernels, then massage_gfp_flags()
will cause DMA memory allocated for devices with a 32..63-bit
coherent_dma_mask to fall back to using __GFP_DMA, even though there may
only be 32 bits of physical address available anyway.
Correct that case to compare against a mask the size of phys_addr_t
instead of always using a 64-bit mask.
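A hedged sketch of the comparison described (simplified from
massage_gfp_flags(), not a verbatim quote):

    /* Only fall back to __GFP_DMA when the device really cannot address
     * all physically possible memory: compare against a mask as wide as
     * phys_addr_t instead of a fixed 64-bit mask. */
    if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
            dma_flag = __GFP_DMA;
    else
            dma_flag = 0;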
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Fixes: a2e715a86c ("MIPS: DMA: Fix computation of DMA flags from device's coherent_dma_mask.")
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 2.6.36+
Patchwork: https://patchwork.linux-mips.org/patch/9610/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The calculation for the SMT scaling factor for a hardware thread
which has been partially idle needs to disregard the cycles spent
by the other threads of the core while the thread is idle.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This was only used for the ums+gem combo, so ripe for removal now that
we only have kms code left.
v2: Drop fence_reg_start since it's now unused, noticed by Ville.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drm_kms_helper_poll_enable() is called from a context in
intel_hpd_irq_storm_disable() where the mode_config mutex is
already locked.
When this function was converted to lock this mutex in
commit 8c4ccc4ab6 ("drm/probe-helper: Grab mode_config.mutex
in poll_init/enable"), a deadlock occurred.
Call the newly implemented non-locking version of this function.
Changes since v1:
- use function name suffix '_locked' for the function that
is to be called from a locked context.
Signed-off-by: Egbert Eich <eich@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
drm_kms_helper_poll_enable() was converted to lock the mode_config
mutex in commit 8c4ccc4ab6
("drm/probe-helper: Grab mode_config.mutex in poll_init/enable").
This disregarded the cases where this function is called from a context
where this mutex is already locked.
Add a non-locking version as well.
Changes since v1:
- use function name suffix '_locked' for the function that
is to be called from a locked context.
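The resulting split typically looks like this (a hedged sketch of the
pattern, not a quote of the helper):

    void drm_kms_helper_poll_enable_locked(struct drm_device *dev)
    {
            WARN_ON(!mutex_is_locked(&dev->mode_config.mutex));
            /* ... (re)schedule the output poll work here ... */
    }

    void drm_kms_helper_poll_enable(struct drm_device *dev)
    {
            mutex_lock(&dev->mode_config.mutex);
            drm_kms_helper_poll_enable_locked(dev);
            mutex_unlock(&dev->mode_config.mutex);
    }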
Signed-off-by: Egbert Eich <eich@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
When we get a CRC error, mmc_retune is started and issues CMD19/CMD21
to tune. Assume there are 10 clock phases to try: phases 0 to 6 are OK
and phases 7 to 9 are NG. Since we try them from 0 to 9, the last
CMD19/CMD21 gets a CRC error, host->need_retune is set and mmc_retune
is called again, resulting in an endless loop of mmc_retune.
Signed-off-by: Chaotian Jing <chaotian.jing@mediatek.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Fixes: bd11e8bd03 ("mmc: core: Flag re-tuning is needed on CRC errors")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
When we're out of command buffer space, we turn on the command buffer
processed irq without re-checking for finished command buffers afterwards.
This might lead to a missed irq and the command submission process waiting
forever for space.
Fix this by rerunning the command buffer submission handler whenever we're
out of command space. This ensures both that we don't needlessly turn on
the irq, and that if we decide to turn on the irq, we recheck for finished
command buffers before going to sleep.
Reported-and-tested-by: Bryan Li <ldexin@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>