This patch rewrites destroy_queue_nocpsch(), as the logic currently
implemented in the function is completely flawed.
This function is used only in non-HWS mode.
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
The MQDs for CI and VI are different. Therefore, the MQD manager module needs
to be H/W specific.
This patch splits the current MQD manager into three files:
- kfd_mqd_manager.c, which contains common functions and initializes the
specific mqd manager module according to the H/W
- kfd_mqd_manager_cik.c, which contains Kaveri specific functions. This is
basically the old kfd_mqd_manager.c
- kfd_mqd_manager_vi.c, which will contain VI specific functions. Currently it
is not implemented except for returning NULL on initialization.
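A minimal sketch of the VI placeholder (the init hook name and signature are
illustrative assumptions, not necessarily the exact symbols in the patch):

  /* kfd_mqd_manager_vi.c -- illustrative stub */
  struct mqd_manager *mqd_manager_init_vi(enum KFD_MQD_TYPE type,
                                          struct kfd_dev *dev)
  {
          /* VI-specific MQD handling is not implemented yet */
          return NULL;
  }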
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds a new property to the kfd_device_info structure. That structure
holds information that is H/W specific.
The new property is called asic_family and its purpose is to distinguish
between different asic families in amdkfd operations, mainly in QCM (queue
control & management).
This patch also adds a new enum, to select different ASICs. We set the current
kfd_device_info instance as Kaveri and create a new instance which describes
the new AMD APU, codenamed 'Carrizo'.
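A rough sketch of the new property and the two instances (the enum and field
names here are illustrative assumptions):

  /* kfd_priv.h -- illustrative sketch */
  enum kfd_asic_family {
          CHIP_KAVERI = 0,
          CHIP_CARRIZO,
  };

  struct kfd_device_info {
          unsigned int asic_family; /* new: selects per-ASIC behaviour, mainly in QCM */
          /* ... other H/W specific fields ... */
  };

  /* kfd_device.c -- one instance per supported ASIC */
  static const struct kfd_device_info kaveri_device_info = {
          .asic_family = CHIP_KAVERI,
          /* ... */
  };

  static const struct kfd_device_info carrizo_device_info = {
          .asic_family = CHIP_CARRIZO,
          /* ... */
  };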
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
As the MQD types are common across all AMD GPUs/APUs, let's remove the CIK part
from the name.
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds new fields to the queue_properties structure. The new fields
are relevant only for queues running on AMD GPU VI architecture.
The eop_ring_buffer_address and eop_ring_buffer_size describe an
end-of-pipe queue which is assigned to the MQD. In CI, the EOP queue was per
pipeline and in VI it is per queue.
The ctx_save_restore_area_address and ctx_save_restore_area_size describe a
memory area that is designated to allow the CP to do context save/restore in
mid-wave state.
This patch also modifies the set_queue_properties_from_user() (called from
kfd_ioctl_create_queue()) to check and copy those new parameters.
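A sketch of the new fields and of the copy in set_queue_properties_from_user()
(the ioctl argument member names are assumptions):

  /* kfd_priv.h -- new fields, only used by VI queues */
  struct queue_properties {
          /* ... existing fields ... */
          uint64_t eop_ring_buffer_address;
          uint32_t eop_ring_buffer_size;
          uint64_t ctx_save_restore_area_address;
          uint32_t ctx_save_restore_area_size;
  };

  /* kfd_chardev.c -- inside set_queue_properties_from_user(), after the
   * existing sanity checks */
  q_properties->eop_ring_buffer_address = args->eop_buffer_address;
  q_properties->eop_ring_buffer_size = args->eop_buffer_size;
  q_properties->ctx_save_restore_area_address = args->ctx_save_restore_address;
  q_properties->ctx_save_restore_area_size = args->ctx_save_restore_size;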
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Because amdkfd will need to work both with radeon and amdgpu, don't include
header files that are in radeon's folder.
Instead, use the common amd include folder and move amdkfd specific defines to
amdkfd header files.
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This patch creates a new file, cik_structs.h, and puts the cik_mqd and
cik_sdma_rlc_registers structures in that file.
The new file is placed in a common include folder under the drm/amd folder, so
it will be shared among all amd drm drivers.
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This patch removes a call to the kfd-->kgd interface function that does H/W
initialization. That function is moved into radeon to be part of the common
H/W initialization sequence. The interface function will be deleted.
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This patch moves to radeon the initialization of compute vmid.
That initialization was done in the kfd-->kgd interface, but doing it in radeon
as part of radeon's H/W initialization routines is more appropriate.
In addition, this simplifies the kfd-->kgd interface.
The patch removes the function from the interface file and from the interface
declaration file.
The function initializes memory apertures to fixed base/limit addresses and
non-cached memory types.
Signed-off-by: Ben Goz <ben.goz@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Make sure the cursor gets fully clipped when enabling it on a disabled
crtc via setplane. This will prevent the lower level code from
attempting to enable the cursor in hardware.
Cc: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
During suspend we turn off the crtcs, but leave the staged config in
place so that we can restore the display(s) to their previous state on
resume.
During resume when we attempt to apply the force pipe A quirk we use the
load detect mechanism. That doesn't check whether there was an already
staged configuration for the crtc since that's not even possible during
normal runtime load detection. But during resume it is possible, and if
we just blindly go and overwrite the staged crtc configuration for the
load detection we can no longer restore the display to the correct
state.
Even worse, we don't even clear all the staged connector->encoder->crtc
links so we may end up using a cloned setup for the load detection, and
after we're done we just clear the links related to the VGA output
leaving the links for the other outputs in place. This will eventually
result in calling intel_set_mode() with mode==NULL but with valid
connector->encoder->crtc links which will result in dereferencing the
NULL mode since the code thinks it will have to do a modeset.
To avoid these problems don't use any crtc with new_enabled==true for
load detection.
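A minimal sketch of the idea (the field and helper names follow the i915 code
of that era and should be treated as assumptions):

  /* intel_get_load_detect_pipe(): when hunting for a spare crtc, skip any
   * crtc that is enabled or already has a staged (new_enabled) config, so
   * load detection cannot clobber the state we want to restore on resume. */
  list_for_each_entry(possible_crtc, &dev->mode_config.crtc_list, head) {
          if (possible_crtc->enabled ||
              to_intel_crtc(possible_crtc)->new_enabled)
                  continue;
          crtc = possible_crtc;
          break;
  }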
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org (for 3.16)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
intel_enable_pipe_a() gets called with all the modeset locks already
held (by drm_modeset_lock_all()), so trying to grab the same
locks using another drm_modeset_acquire_ctx is going to fail miserably.
Move most of the drm_modeset_acquire_ctx handling (init/drop/fini)
out from intel_{get,release}_load_detect_pipe() into the callers
(intel_{crt,tv}_detect()). Only the actual locking and backoff
handling is left in intel_get_load_detect_pipe(). And in
intel_enable_pipe_a() we just share the mode_config.acquire_ctx from
drm_modeset_lock_all() which is already holding all the relevant locks.
It's perfectly legal to lock the same ww_mutex multiple times using the
same ww_acquire_ctx. drm_modeset_lock() will convert the returned
-EALREADY into 0, so the caller doesn't need to do anything special.
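In code the idea amounts to roughly this (a sketch):

  /* intel_enable_pipe_a() runs with all modeset locks already held, so
   * reuse the acquire context that drm_modeset_lock_all() installed. */
  struct drm_modeset_acquire_ctx *ctx = dev->mode_config.acquire_ctx;

  /* relocking the same ww_mutex with the same ww_acquire_ctx yields
   * -EALREADY, which drm_modeset_lock() converts to 0 */
  ret = drm_modeset_lock(&crtc->mutex, ctx);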
Fixes a hang on resume on my 830.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Pull drm fixes (mostly nouveau) from Dave Airlie:
"One doc buidling fixes for a file that moved, along with a bunch of
nouveau fixes, one a build problem on ARM"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/doc: Refer to proper source file
drm/nouveau/platform: fix compilation error
drm/nouveau/gk20a: add LTC device
drm/nouveau: warn if we fail to re-pin fb on resume
drm/nouveau/nvif: fix dac load detect method definition
drm/gf100-/gr: fix -ENOSPC detection when allocating zbc table entries
drm/nouveau/nvif: return null pointers on failure, in addition to ret != 0
drm/nouveau/ltc: fix tag base address getting truncated if above 4GiB
drm/nvc0-/fb/ram: fix use of non-existent ram if partitions aren't uniform
drm/nouveau/bar: behave better if ioremap failed
drm/nouveau/kms: nouveau_fbcon_accel_fini can be static
drm/nouveau: kill unused variable warning if !__OS_HAS_AGP
drm/nouveau/nvif: fix a number of notify thinkos
In the Makefile, radeon_uvd.o is added to radeon-y twice.
As it belongs to the UVD block, which is marked with a comment, the duplicate
entry in the block of includes labelled "KMS driver" is deleted.
Signed-off-by: Andreas Ruprecht <rupran@einserver.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Compare the clock in the limits table to the requested evclk rather
than just taking the first value. Improves vce performance in certain
cases.
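Roughly, the selection looks like this (the dpm table and field names are
taken from the radeon dpm code but should be treated as assumptions here):

  struct radeon_vce_clock_voltage_dependency_table *table =
          &rdev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table;
  unsigned i;

  /* pick the first limits entry whose clock covers the requested evclk,
   * instead of unconditionally using entry 0 */
  for (i = 0; i < table->count; i++) {
          if (table->entries[i].evclk >= evclk)
                  break;
  }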
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Properly set the thermal min and max temp on CI.
Otherwise, we end up setting the thermal ranges
to 0 on resume and end up in the lowest power state.
Signed-off-by: Oleg Chernovskiy <algonkvel@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
Bonaire and Kaveri though.
[0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
using the PFP v2'.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We can easily return -ENOMEM here if kzalloc() fails.
v2: agd5f: drop the vm mutex
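The fix is the standard allocation error path, sketched here:

  ptr = kzalloc(size, GFP_KERNEL);
  if (!ptr)
          return -ENOMEM;  /* previously this failure was not propagated */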
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a module parameter to enable bapm on APUs. It's disabled
by default on certain APUs due to stability issues. This
option makes it easier to test and to enable it on systems that
are stable.
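A sketch of what such a parameter typically looks like in radeon (the
parameter name and value convention are assumptions):

  /* radeon_drv.c -- illustrative sketch */
  int radeon_bapm = -1; /* -1 = per-ASIC default, 0 = force off, 1 = force on */
  MODULE_PARM_DESC(bapm, "BAPM support (1 = enable, 0 = disable, -1 = auto)");
  module_param_named(bapm, radeon_bapm, int, 0444);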
bug:
https://bugzilla.kernel.org/show_bug.cgi?id=81021
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
A couple of thinkos from the -next merge, some random fixes from a
Coverity scan, a fix for (at least) GK106 accidentally using
non-existent vram on some board configurations, and better behaviour
of the instmem allocations if vmalloc space runs out.
* 'linux-3.17' of git://anongit.freedesktop.org/git/nouveau/linux-2.6:
drm/nouveau/platform: fix compilation error
drm/nouveau/gk20a: add LTC device
drm/nouveau: warn if we fail to re-pin fb on resume
drm/nouveau/nvif: fix dac load detect method definition
drm/gf100-/gr: fix -ENOSPC detection when allocating zbc table entries
drm/nouveau/nvif: return null pointers on failure, in addition to ret != 0
drm/nouveau/ltc: fix tag base address getting truncated if above 4GiB
drm/nvc0-/fb/ram: fix use of non-existent ram if partitions aren't uniform
drm/nouveau/bar: behave better if ioremap failed
drm/nouveau/kms: nouveau_fbcon_accel_fini can be static
drm/nouveau: kill unused variable warning if !__OS_HAS_AGP
drm/nouveau/nvif: fix a number of notify thinkos
nouveau_platform.c was still using the old nouveau_dev() macro,
triggering a compilation error. Fix this.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
LTC device is now required for PGRAPH to work, add it.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reported by Coverity. The intention is that the return value is
checked, but let's be more paranoid and make it extremely obvious
if something forgets to.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We treat other plane updates in the same fashion. Spotted because
Rodrigo kept reporting a bug in the PSR code where the frontbuffer was
eternally stuck with a dirty cursor bit set.
The psr testcase should have caught this, but that i-g-t is kaputt.
Rodrigo is signed up to fix that.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by-and-Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we receive a storm of requests for the same context (see gem_storedw_loop_*)
we might end up iterating over too many elements at interrupt time, looking for
contexts to squash together. Instead, share the burden by giving more
intelligence to the queue function. At most, the interrupt will iterate over
three elements.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the current Execlists feeding mechanism, full preemption is not
supported yet: only lite-restores are allowed (that is, the GPU
simply samples a new tail pointer for the context currently in
execution).
But we have identified a scenario in which a full preemption occurs:
1) We submit two contexts for execution (A & B).
2) The GPU finishes with the first one (A), switches to the second one
(B) and informs us.
3) We submit B again (hoping to cause a lite restore) together with C,
but in the time we spend writing to the ELSP, the GPU finishes B.
4) The GPU starts executing B again (since we told it so).
5) We receive a B finished interrupt and, mistakenly, we submit C (again)
and D, causing a full preemption of B.
The race is avoided by keeping track of how many times a context has been
submitted to the hardware and by better discriminating the received context
switch interrupts: in the example, when we have submitted B twice, we won't
submit C and D as soon as we receive the notification that B is completed
because we were expecting to get a LITE_RESTORE and we didn't, so we know a
second completion will be received shortly.
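A sketch of the counting logic in the interrupt handler (structure and field
names are assumptions):

  /* On a context-complete event for the head request, only retire it once
   * every submission made for it has been accounted for; a lite restore
   * produces an extra completion event. */
  if (head_req && head_req->ctx_id == completed_ctx_id) {
          WARN(head_req->elsp_submitted == 0,
               "completion event for a request never submitted\n");
          if (--head_req->elsp_submitted == 0) {
                  list_del(&head_req->execlist_link);
                  submit_contexts++;  /* safe to feed the next pair now */
          }
  }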
Without this explicit checking, somehow, the batch buffer execution order
gets messed with. This can be verified with the IGT test I sent together with
the series. I don't know the exact mechanism by which the pre-emption messes
with the execution order but, since other people are working on the Scheduler
+ Preemption on Execlists, I didn't try to fix it. In this series, only Lite
Restores are supported (other kinds of preemption WARN).
v2: elsp_submitted belongs in the new intel_ctx_submit_request. Several
rebase changes.
v3: Clarify how the race is avoided, as requested by Daniel.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Align function parameters ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Handle all context status events in the context status buffer on every
context switch interrupt. We only remove work from the execlist queue
after a context status buffer reports that it has completed and we only
attempt to schedule new contexts on interrupt when a previously submitted
context completes (unless no contexts are queued, which means the GPU is
free).
We cannot call intel_runtime_pm_get() in an interrupt (or with a spinlock
grabbed, FWIW), because it might sleep, which is not a nice thing to do.
Instead, do the runtime_pm get/put together with the create/destroy request,
and handle the forcewake get/put directly.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v2: Unreferencing the context when we are freeing the request might free
the backing bo, which requires the struct_mutex to be grabbed, so defer
unreferencing and freeing to a bottom half.
v3:
- Ack the interrupt immediately, before trying to handle it (fix for
missing interrupts by Bob Beckett <robert.beckett@intel.com>).
- Update the Context Status Buffer Read Pointer, just in case (spotted
by Damien Lespiau).
v4: New namespace and multiple rebase changes.
v5: Squash with "drm/i915/bdw: Do not call intel_runtime_pm_get() in an
interrupt", as suggested by Daniel.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Context switch (and execlist submission) should happen only when
other contexts are not active, otherwise pre-emption occurs.
To assure this, we place context switch requests in a queue and those
requests are later consumed when the right context switch interrupt is
received (still TODO).
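A sketch of the queueing side (structure and function names are assumptions):

  /* intel_lrc.c -- illustrative only */
  static void execlists_context_queue(struct intel_engine_cs *ring,
                                      struct intel_context *to, u32 tail)
  {
          struct intel_ctx_submit_request *req;
          unsigned long flags;

          req = kzalloc(sizeof(*req), GFP_KERNEL);
          if (WARN_ON(!req))
                  return;
          req->ctx = to;
          req->tail = tail;

          spin_lock_irqsave(&ring->execlist_lock, flags);
          list_add_tail(&req->execlist_link, &ring->execlist_queue);
          /* if nothing is in flight the request can be submitted right away;
           * otherwise the context switch interrupt consumes it later */
          spin_unlock_irqrestore(&ring->execlist_lock, flags);
  }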
v2: Use a spinlock, do not remove the requests on unqueue (wait for
context switch completion).
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v3: Several rebases and code changes. Use unique ID.
v4:
- Move the queue/lock init to the late ring initialization.
- Damien's kmalloc review comments: check return, use sizeof(*req),
do not cast.
v5:
- Do not reuse drm_i915_gem_request. Instead, create our own.
- New namespace.
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2-v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch + wash-up s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Each logical ring context has the tail pointer in the context object,
so update it before submission.
v2: New namespace.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A context switch occurs by submitting a context descriptor to the
ExecList Submission Port. Given that we can now initialize a context,
it's possible to begin implementing the context switch by creating the
descriptor and submitting it to ELSP (actually two, since the ELSP
has two ports).
The context object must be mapped in the GGTT, which means it must exist
in the 0-4GB graphics VA range.
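A sketch of the descriptor that gets written to the ELSP (bit names and the
exact layout are assumptions for illustration):

  /* The descriptor packs the 4K-aligned GGTT address of the context (the
   * LRCA) plus a valid bit; because the context lives in the 0-4GB GGTT
   * range, LRCA[31:12] can also serve as a unique, non-zero context ID. */
  static u64 execlists_ctx_descriptor(struct drm_i915_gem_object *ctx_obj)
  {
          u64 lrca = i915_gem_obj_ggtt_offset(ctx_obj);
          u64 desc = CTX_VALID | lrca;

          desc |= (u64)(lrca >> 12) << 32;
          return desc;
  }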
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
v2: This code has changed quite a lot in various rebases. Of particular
importance is that now we use the globally unique Submission ID to send
to the hardware. Also, context pages are now pinned unconditionally to
GGTT, so there is no need to bind them.
v3: Use LRCA[31:12] as hwCtxId[19:0]. This guarantees that the HW context
ID we submit to the ELSP is globally unique and != 0 (Bspec requirements
of the software use-only bits of the Context ID in the Context Descriptor
Format) without the hassle of the previous submission Id construction.
Also, re-add the ELSP posting read (it was dropped somewhere during the
rebases).
v4:
- Squash with "drm/i915/bdw: Add forcewake lock around ELSP writes" (BSPEC
says: "SW must set Force Wakeup bit to prevent GT from entering C6 while
ELSP writes are in progress") as noted by Thomas Daniel
(thomas.daniel@intel.com).
- Rename functions and use an execlists/intel_execlists_ namespace.
- The BUG_ON only checked that the LRCA was <32 bits, but it didn't make
sure that it was properly aligned. Spotted by Alistair Mcaulay
<alistair.mcaulay@intel.com>.
v5:
- Improved source code comments as suggested by Chris Wilson.
- No need to abstract submit_ctx away, as pointed by Brad Volkin.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch. Sigh.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On a previous iteration of this patch, I created an Execlists
version of __i915_add_request and abstracted it away as a
vfunc. Daniel Vetter wondered then why that was needed:
"with the clean split in command submission I expect every
function to know whether it'll submit to an lrc (everything in
intel_lrc.c) or whether it'll submit to a legacy ring (existing
code), so I don't see a need for an add_request vfunc."
The honest, hairy truth is that this patch is the glue keeping
the whole logical ring puzzle together:
- i915_add_request is used by intel_ring_idle, which in turn is
used by i915_gpu_idle, which in turn is used in several places
inside the eviction and gtt codes.
- Also, it is used by i915_gem_check_olr, which is littered all
over i915_gem.c
- ...
If I were to duplicate all the code that directly or indirectly
uses __i915_add_request, I'd end up creating a separate driver.
To show the differences between the existing legacy version and
the new Execlists one, this time I have special-cased
__i915_add_request instead of adding an add_request vfunc. I
hope this helps to untangle this Gordian knot.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Adjust to ringbuf->FIXME_lrc_ctx per the discussion with
Thomas Daniel.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Static analysers find it 'suspicious' that we're trying to allocate memory for
elements of size sizeof(struct drm_fb_helper_connector) when the array is
defined as struct drm_fb_helper_connector **.
Use sizeof(struct drm_fb_helper_connector *) instead.
Note that the structure is currently defined as:
struct drm_fb_helper_connector {
struct drm_connector *connector;
};
so this was still doing the right thing, but it may not in the future if
additional fields are added.
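The fixed allocation, sketched with illustrative surrounding names:

  struct drm_fb_helper_connector **connectors;

  /* size by the element type of the array (a pointer), not by the struct;
   * sizeof(*connectors) would express the same thing more robustly */
  connectors = kcalloc(count, sizeof(struct drm_fb_helper_connector *),
                       GFP_KERNEL);
  if (!connectors)
          return -ENOMEM;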
Cc: Todd Previte <tprevite@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Static analysis will be unhappy if a function can theoretically return
0 and we're trying to divide by that value.
Mark that case, which cannot occur, as a BUG() instead.
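A generic sketch of the pattern (the function name is a placeholder, not the
actual i915 code):

  divisor = some_lookup(param);  /* static analysis thinks this could be 0 */
  if (divisor == 0)
          BUG();                 /* cannot happen for any valid input */
  result = value / divisor;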
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bunch of small leftovers spotted by looking at the make htmldocs output.
I've left out dp mst, there's too much amiss there.
v2: Also add the missing parameter docbook in the dp mst code - Dave
Airlie correctly pointed out that we don't actually want kerneldoc for
the missing structure members in header files.
Cc: Dave Airlie <airlied@gmail.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The execlist patches have a bit of a convoluted and long history and due
to that still have the actual submission misplaced, deeply buried in
the low-level ringbuffer handling code. This design goes back to the
legacy ringbuffer code with its tricky lazy request and simple work
submission using ring tail writes. For that reason they need a
ring->ctx backpointer.
The goal is to unbury that code and move it up into a level where the
full execlist context is available so that we can ditch this
backpointer. Until that's done make it really obvious that there's
work still to be done.
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Acked-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current error state harks back to the era of just a single VM. For
full-ppgtt, we capture every bo on every VM. It behoves us to then print
every bo for every VM, which we currently fail to do and so miss vital
information in the error state.
v2: Use the vma address rather than -1!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV, after S0i3, the following issue is observed in i915_drm_thaw during
ring initialization.
[ 335.604039] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607340] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607345] [drm:init_ring_common] ERROR failed to set render ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 337.610645] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613952] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613956] [drm:init_ring_common] ERROR failed to set bsd ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 339.617256] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 339.617258] -----------[ cut here ]-----------
[ 339.617267] WARNING: CPU: 0 PID: 6 at drivers/gpu/drm/i915/intel_ringbuffer.c:1666 intel_cleanup_ring+0xe6/0xf0()
[ 339.617396] --[ end trace 5ef5ed1a3c92e2a6 ]--
[ 339.617428] [drm:__i915_drm_thaw] ERROR failed to re-initialize GPU, declaring wedged!
This is happening because wake is not enabled and the Gunit registers are not
restored. To fix this, the system suspend/resume paths need to perform the same
save/restore and additional platform-specific setup in suspend_complete and
resume_prepare. suspend_complete is shared unconditionally by VLV, HSW and BDW.
resume_prepare for HSW and BDW contains the pc8 disabling, which is needed
during thaw_early, so it is also shared unconditionally. VLV and SNB have
runtime-resume-specific sequences.
Cc: Imre Deak <imre.deak@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Goel, Akash <akash.goel@intel.com>
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>