AT_VECTOR_SIZE_ARCH should be defined with the maximum number of
NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined
for arm64 at all even though ARCH_DLINFO will contain one NEW_AUX_ENT
for the VDSO address.
This shouldn't be a problem as AT_VECTOR_SIZE_BASE includes space for
AT_BASE_PLATFORM which arm64 doesn't use, but let's define it now and add
the comment above ARCH_DLINFO as found in several other architectures to
remind future modifiers of ARCH_DLINFO to keep AT_VECTOR_SIZE_ARCH up to
date.
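A minimal sketch of the intended arm64 definitions (the comment wording is
modelled on other architectures; treat the exact form as illustrative):

    /* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
    #define ARCH_DLINFO                                         \
    do {                                                        \
        NEW_AUX_ENT(AT_SYSINFO_EHDR,                            \
                    (elf_addr_t)current->mm->context.vdso);     \
    } while (0)

    #define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */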
Fixes: f668cd1673 ("arm64: ELF definitions")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
The linker routines that we rely on to produce a relocatable PIE binary
treat it as a shared ELF object in some ways, i.e., it emits symbol based
R_AARCH64_ABS64 relocations into the final binary since doing so would be
appropriate when linking a shared library that is subject to symbol
preemption. (This means that an executable can override certain symbols
that are exported by a shared library it is linked with, and that the
shared library *must* update all its internal references as well, and point
them to the version provided by the executable.)
Symbol preemption does not occur for OS hosted PIE executables, let alone
for vmlinux, and so we would prefer to get rid of these symbol based
relocations. This would allow us to simplify the relocation routines, and
to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
that these are tiny, and are placed in the .init segment, but they clutter
up the vmlinux binary.)
Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
references to symbols defined in the linker script, all other relocatable
quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
simply list the offsets to all 64-bit values in the binary that need to be
fixed up based on the offset between the link time and run time addresses.
Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared
libraries to allow them to ignore symbol preemption, and unconditionally
bind all internal symbol references to its own definitions. So set it for
our PIE binary as well, and get rid of the associated sections and the
relocation code that processes them.
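A hedged sketch of the resulting link flags (the exact Makefile placement is
illustrative):

    ifeq ($(CONFIG_RELOCATABLE), y)
    LDFLAGS_vmlinux += -pie -Bsymbolic
    endif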
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with __dynsym_offset linker script entry]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Due to the untyped KIMAGE_VADDR constant, the linker may not notice
that the __rela_offset and __dynsym_offset expressions are absolute
values (i.e., are not subject to relocation). This does not matter for
KASLR, but it does confuse kallsyms in relative mode, since it uses
the lowest non-absolute symbol address as the anchor point, and expects
all other symbol addresses to be within 4 GB of it.
Fix this by qualifying these expressions as ABSOLUTE() explicitly.
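A minimal sketch of the qualified expressions in the linker script:

    __rela_offset   = ABSOLUTE(ADDR(.rela) - KIMAGE_VADDR);
    __dynsym_offset = ABSOLUTE(ADDR(.dynsym) - KIMAGE_VADDR);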
Fixes: 0cd3defe0a ("arm64: kernel: perform relocation processing from ID map")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The workqueue "workq" provides support for sd/mmc async request, which
makes next request do dma_map_sg() while previous request transferring
data.
The workqueue has a single work item (&host->work) and hence doesn't require
ordering. Also, it is not being used on a memory reclaim path. Hence,
the singlethreaded workqueue has been replaced with the use of system_wq.
System workqueues have been able to handle high levels of concurrency
for a long time now and hence it's not required to have a singlethreaded
workqueue just to gain concurrency. Unlike a dedicated per-cpu workqueue
created with create_singlethread_workqueue(), system_wq allows multiple
work items to overlap executions even on the same CPU; however, a
per-cpu workqueue doesn't have any CPU locality or global ordering
guarantee unless the target CPU is explicitly specified and thus the
increase of local concurrency shouldn't make any difference.
The work item has been flushed in rtsx_pci_sdmmc_drv_remove() to ensure that
there are no pending tasks while disconnecting the driver.
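A hedged sketch of the conversion pattern (the workqueue name and host
structure fields are illustrative):

    /* before: a dedicated singlethreaded workqueue */
    host->workq = create_singlethread_workqueue("rtsx_pci_sdmmc_workq");
    queue_work(host->workq, &host->work);

    /* after: queue on the shared system workqueue instead */
    schedule_work(&host->work);

    /* in rtsx_pci_sdmmc_drv_remove(): make sure nothing is still pending */
    flush_work(&host->work);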
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The rtsx_pci driver is using a fixed 3s timeout for R1B responses, which
in some cases isn't sufficient. For example, erase/discard requests may
require longer timeouts.
Instead of always using a fixed timeout, let's use the per request
calculated busy timeout from the mmc core.
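A minimal sketch of the idea (the timeout variable is illustrative; the mmc
core supplies the per-request value in cmd->busy_timeout):

    /* prefer the busy timeout calculated per request by the mmc core */
    if (cmd->busy_timeout)
        timeout_ms = cmd->busy_timeout;
    else
        timeout_ms = 3000;  /* previous fixed 3s fallback */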
Cc: Micky Ching <micky_ching@realsil.com.cn>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Mauro Santos <registo.mailling@gmail.com>
Due to previous changes this define no longer has a purpose. Instead, move
the sdhci-pltfm drivers over to use the exported struct sdhci_pltfm_pmops.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Move the system PM callbacks within #ifdef CONFIG_PM_SLEEP to avoid
them being built when not used. This also allows us to use the
SET_SYSTEM_SLEEP_PM_OPS macro which simplifies the code.
Within this context it also makes sense to move the declaration of
struct sdhci_pltfm_pmops outside the #ifdef CONFIG_PM, as the
SET_SYSTEM_SLEEP_PM_OPS deals with this. This further simplifies the code.
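A hedged sketch of the resulting structure (function bodies elided):

    #ifdef CONFIG_PM_SLEEP
    static int sdhci_pltfm_suspend(struct device *dev)
    {
        /* ... */
    }

    static int sdhci_pltfm_resume(struct device *dev)
    {
        /* ... */
    }
    #endif

    /* SET_SYSTEM_SLEEP_PM_OPS expands to nothing when !CONFIG_PM_SLEEP,
     * so the declaration can live outside the #ifdef */
    const struct dev_pm_ops sdhci_pltfm_pmops = {
        SET_SYSTEM_SLEEP_PM_OPS(sdhci_pltfm_suspend, sdhci_pltfm_resume)
    };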
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
To prepare for making sdhci_pltfm_suspend|resume() static functions, move
sdhci-esdhc-imx over to using sdhci_suspend|resume_host().
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
The system PM callbacks aren't used unless CONFIG_PM_SLEEP is set, so
they trigger compiler warnings about unused functions. Avoid this by
changing
from CONFIG_PM to CONFIG_PM_SLEEP.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Fixes: b70d0b3b5b29 ("mmc: sdhci-esdhc-imx: add esdhc specific suspend resume callback")
Cc: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
The MIPS Coherence Manager (CM) can propagate address-based ("hit")
cache operations to other cores in the coherent system, alleviating
software of the need to use SMP calls; however, indexed cache operations
are not propagated by hardware since doing so makes no sense for
separate caches.
Update r4k_op_needs_ipi() to report that only hit cache operations are
globalized by the CM, requiring indexed cache operations to be
globalized by software via an SMP call.
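A minimal sketch of the updated check, assuming the R4K_HIT/R4K_INDEX type
flags described further down in this log:

    static bool r4k_op_needs_ipi(unsigned int type)
    {
        /* The CM propagates ("globalizes") hit cache ops to other cores */
        if (type == R4K_HIT && mips_cm_present())
            return false;

        /* indexed cache ops must still be done per CPU in software */
        return true;
    }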
r4k_on_each_cpu() previously had a special case for CONFIG_MIPS_MT_SMP,
intended to avoid the SMP calls when the only other CPUs in the system
were other VPEs in the same core, and hence sharing the same caches.
This was changed by commit cccf34e941 ("MIPS: c-r4k: Fix cache
flushing for MT cores") to apparently handle multi-core multi-VPE
systems, but it focused mainly on hit cache ops, so the SMP calls were
still disabled entirely for CM systems.
This doesn't normally cause problems, but tests can be written to hit
these corner cases by using multiple threads, or changing task
affinities to force the process to migrate cores. For example the
failure of mprotect RW->RX to globally sync icaches (via
flush_cache_range) can be detected by modifying and mprotecting a code
page on one core, and migrating to a different core to execute from it.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13807/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Avoid SMP calls for flushing small icache ranges. On non-CM platforms,
and on CM platforms too once we make r4k_on_each_cpu() take the cache op
type into account, it will be called on multiple CPUs due to the
possibility that local_r4k_flush_icache_range_ipi() could do
non-globalized indexed cache ops. This roughly copies the range size
check out into r4k_flush_icache_range(), which can disallow indexed
cache ops and allow r4k_on_each_cpu() to skip the SMP call.
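A hedged sketch of the hoisted check (the threshold logic mirrors the local
handler; exact details are illustrative):

    static void r4k_flush_icache_range(unsigned long start, unsigned long end)
    {
        struct flush_icache_range_args args;

        args.start = start;
        args.end = end;
        args.type = R4K_HIT | R4K_INDEX;

        /* small ranges are flushed by address ("hit") ops only */
        if (end - start < icache_size)
            args.type &= ~R4K_INDEX;

        r4k_on_each_cpu(args.type, local_r4k_flush_icache_range_ipi, &args);
    }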
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13805/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Allow the permitted cache op types used by
local_r4k_flush_icache_range_ipi() to be overridden by the SMP caller.
This will allow SMP calls to be avoided under certain circumstances,
falling back to a single CPU performing globalized hit cache ops only.
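A minimal sketch of how the permitted types might be threaded through the
SMP call arguments:

    struct flush_icache_range_args {
        unsigned long   start;
        unsigned long   end;
        unsigned int    type;   /* combination of R4K_HIT and R4K_INDEX */
    };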
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13803/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Split the operation of r4k_flush_kernel_vmap_range() into separate
SMP callbacks for the indexed cache flush and hit cache flush cases,
since the logic to determine which to use can be determined by the
initiating CPU prior to doing any SMP calls.
This will help when we change r4k_on_each_cpu() to distinguish indexed
and hit cache ops in a later patch, preventing globalized hit cache ops
being performed redundantly on multiple CPUs.
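A hedged sketch of the split (the _index callback name is illustrative):

    static void r4k_flush_kernel_vmap_range(unsigned long vaddr, int size)
    {
        struct flush_kernel_vmap_range_args args;

        args.vaddr = vaddr;
        args.size = size;

        /* the initiating CPU decides which callback everyone runs */
        if (size >= dcache_size)
            r4k_on_each_cpu(R4K_INDEX,
                            local_r4k_flush_kernel_vmap_range_index, &args);
        else
            r4k_on_each_cpu(R4K_HIT,
                            local_r4k_flush_kernel_vmap_range, &args);
    }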
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13806/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When performing SMP calls to foreign cores, exclude sibling CPUs from
the provided map, as we already handle the local core on the current
CPU. This prevents an SMP call from, for example, core 0 VPE 1 to VPE 0
on the same core.
In the process the cpu_foreign_map cpumask is turned into an array of
cpumasks, so that each CPU has its own version of it which excludes
sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache
management SMP calls are not needed when all CPUs are siblings (i.e.
there are no foreign CPUs according to the new cpu_foreign_map[]
semantics which exclude siblings).
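A minimal sketch of the new per-CPU semantics (declaration as in the MIPS
SMP code; the call site is illustrative):

    /* representative VPEs of foreign cores, excluding this CPU's siblings */
    extern cpumask_t cpu_foreign_map[NR_CPUS];

    /* callers now index the map by the current CPU */
    smp_call_function_many(&cpu_foreign_map[smp_processor_id()],
                           func, args, 1);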
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Jayachandran C. <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13801/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Several cache operations are optimised to return early from the SMP call
handler if the memory map in question has no valid ASID on the current
CPU, or any online CPU in the case of MIPS_MT_SMP. The idea is that if a
memory map has never been used on a CPU it shouldn't have cache lines in
need of flushing.
However this doesn't cover all cases when ASIDs for other CPUs need to
be checked:
- Offline VPEs may have recently been online and brought lines into the
(shared) cache, so they should also be checked, rather than only
online CPUs.
- SMP systems with a Coherence Manager (CM), but with MT disabled still
have globalized hit cache ops, but don't use SMP calls, so all present
CPUs should be taken into account.
- R6 systems have a different multithreading implementation, so
MIPS_MT_SMP won't be set, but as above may still have a CM which
globalizes hit cache ops.
Additionally for non-globalized cache operations where an SMP call to a
single VPE in each foreign core is used, it is not necessary to check
every CPU in the system, only sibling CPUs sharing the same first level
cache.
Fix this by making has_valid_asid() take a cache op type argument like
r4k_on_each_cpu(), so it can determine whether r4k_on_each_cpu() will
have done SMP calls to other cores. It can then determine which set of
CPUs to check the ASIDs of based on that, excluding foreign CPUs if an
SMP call will have been performed.
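A hedged sketch of the updated helper (the CONFIG_SMP guard around
cpu_sibling_map is elided):

    static inline bool has_valid_asid(const struct mm_struct *mm,
                                      unsigned int type)
    {
        unsigned int i;
        const cpumask_t *mask = cpu_present_mask;

        /*
         * If r4k_on_each_cpu() does SMP calls, they go to a single VPE
         * in each foreign core, so only sibling CPUs need checking.
         */
        if (r4k_op_needs_ipi(type))
            mask = &cpu_sibling_map[smp_processor_id()];

        for_each_cpu(i, mask)
            if (cpu_context(i, mm))
                return true;

        return false;
    }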
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The r4k_on_each_cpu() function calls the specified cache flush helper on
other CPUs if deemed necessary due to the cache ops not being
globalized by hardware. However this really depends on the cache op
addressing type, as the MIPS Coherence Manager (CM) if present will
globalize "hit" cache ops (addressed by virtual address), but not
"index" cache ops (addressed by cache index). This results in index
cache ops only being performed on a single CPU when CM is present.
Most (but not all) of the functions called by r4k_on_each_cpu() perform
cache operations exclusively with a single cache op type, so add a type
argument and modify the callers to pass in some combination of R4K_HIT
(global kernel virtual addressing or user virtual addressing
conditional upon matching active_mm) and R4K_INDEX (index into cache).
This will allow r4k_on_each_cpu() to later distinguish these cases and
decide whether to perform an SMP call based on it.
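A minimal sketch of the type flags, and of the call wrapper as it looks once
the whole series has landed:

    #define R4K_HIT     BIT(0)  /* op addressed by virtual address */
    #define R4K_INDEX   BIT(1)  /* op addressed by cache index */

    static inline void r4k_on_each_cpu(unsigned int type,
                                       void (*func)(void *), void *args)
    {
        preempt_disable();
        if (r4k_op_needs_ipi(type))
            smp_call_function_many(&cpu_foreign_map[smp_processor_id()],
                                   func, args, 1);
        func(args);
        preempt_enable();
    }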
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13798/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Avoid the dcache and scache flush in local_r4k_flush_cache_sigtramp() if
the icache fills straight from the dcache.
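A minimal sketch of the guard, assuming the existing cpu_has_ic_fills_f_dc
feature flag:

    if (!cpu_has_ic_fills_f_dc) {
        /* writeback only needed when the icache fetches from memory */
        if (dc_lsize)
            protected_writeback_dcache_line(addr & ~(dc_lsize - 1));
        if (!cpu_icache_snoops_remote_store && scache_size)
            protected_writeback_scache_line(addr & ~(sc_lsize - 1));
    }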
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13802/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix r4k_flush_cache_sigtramp() and local_r4k_flush_cache_sigtramp() to
flush the delay slot emulation trampoline cacheline through a kmap
rather than directly when the active_mm doesn't match that of the task
initiating the flush, a bit like local_r4k_flush_cache_page() does.
This would fix a corner case on SMP systems without hardware globalized
hit cache ops, where a migration to another CPU after the flush, where
that CPU did not have the same mm active at the time of the flush, could
result in stale icache content being executed instead of the trampoline,
e.g. from a previous delay slot emulation with a similar stack pointer.
This case was artificially triggered by replacing the icache flush with
a full indexed flush (not globalized on CM systems) and forcing the SMP
call to take place, with a test program that alternated two FPU delay
slots with a parent process repeatedly changing scheduler affinity.
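A hedged sketch of the idea, mirroring what local_r4k_flush_cache_page()
does (the variable handling is illustrative):

    /* if the mm isn't active here, flush via a temporary kernel mapping */
    if (mm != current->active_mm) {
        vaddr = kmap_coherent(page, addr);
        addr = (unsigned long)vaddr | (addr & ~PAGE_MASK);
    }
    /* ... protected/addressed cache ops on addr ... */
    if (vaddr)
        kunmap_coherent();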
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13797/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The protected_writeback_scache_line() function is used by
local_r4k_flush_cache_sigtramp() to flush an FPU delay slot emulation
trampoline on the userland stack from the caches so it is visible to
subsequent instruction fetches.
Commit de8974e3f7 ("MIPS: asm: r4kcache: Add EVA cache flushing
functions") updated some protected_ cache flush functions to use EVA
CACHEE instructions via protected_cachee_op(), and commit 83fd43449b
("MIPS: r4kcache: Add EVA case for protected_writeback_dcache_line") did
the same thing for protected_writeback_dcache_line(), but
protected_writeback_scache_line() never got updated. Let's fix that now
to flush the right user address from the secondary cache rather than
some arbitrary kernel unmapped address.
This issue was spotted through code inspection, and it seems unlikely to
be possible to hit this in practice. It theoretically affects EVA kernels
on EVA capable cores with an L2 cache, where the icache fetches straight
from RAM (cpu_icache_snoops_remote_store == 0), running a hard float
userland with FPU disabled (nofpu). That both Malta and Boston platforms
override cpu_icache_snoops_remote_store to 1 suggests that all MIPS
cores fetch instructions into icache straight from L2 rather than RAM.
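A minimal sketch of the fix, following the pattern already used for the
dcache variant:

    static inline void protected_writeback_scache_line(unsigned long addr)
    {
    #ifdef CONFIG_EVA
        protected_cachee_op(Hit_Writeback_Inv_SD, addr);
    #else
        protected_cache_op(Hit_Writeback_Inv_SD, addr);
    #endif
    }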
Fixes: de8974e3f7 ("MIPS: asm: r4kcache: Add EVA cache flushing functions")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13800/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit cccf34e941 ("MIPS: c-r4k: Fix cache flushing for MT cores")
added the cpu_foreign_map cpumask containing a single VPE from each
online core, and recalculated it when secondary CPUs are brought up.
stop_this_cpu() was also updated to recalculate cpu_foreign_map, but
with an additional hack before marking the CPU as offline to copy
cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier.
This appears to have been intended to prevent cache management IPIs
being missed when the VPE representing the core in cpu_foreign_map is
taken offline while other VPEs remain online. Unfortunately there is
nothing in this hack to prevent r4k_on_each_cpu() from reading the old
cpu_foreign_map, and smp_call_function_many() from reading the new
cpu_online_mask with the core's representative VPE marked offline. It
then wouldn't send an IPI to any online VPEs of that core.
stop_this_cpu() is only actually called in panic and system shutdown /
halt / reboot situations, in which case all CPUs are going down and we
don't really need to care about cache management, so drop this hack.
Note that the __cpu_disable() case for CPU hotplug is handled in the
previous commit, and no synchronisation is needed there due to the use
of stop_machine() which prevents hotplug from taking place while any CPU
has disabled preemption (as r4k_on_each_cpu() does).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13796/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated.
This could result in cache management SMP calls being sent to offline
CPUs instead of online siblings in the same core.
Add a call to calculate_cpu_foreign_map() in the various MIPS cpu
disable callbacks after set_cpu_online(). All cases are updated for
consistency and to keep cpu_foreign_map strictly up to date, not just
those which may support hardware multithreading.
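A hedged sketch of the pattern added to each callback (the function name is
illustrative):

    static int platform_cpu_disable(void)
    {
        unsigned int cpu = smp_processor_id();

        /* ... platform specific teardown ... */
        set_cpu_online(cpu, false);
        calculate_cpu_foreign_map();
        /* ... */
        return 0;
    }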
Fixes: cccf34e941 ("MIPS: c-r4k: Fix cache flushing for MT cores")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Hongliang Tao <taohl@lemote.com>
Cc: Hua Yan <yanh@lemote.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13799/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The SMP flush_tlb_*() functions may clear the memory map's ASIDs for
other CPUs if the mm has only a single user (the current CPU) in order
to avoid SMP calls. However this makes it appear to has_valid_asid(),
which is used by various cache flush functions, as if the CPUs have
never run in the mm, and therefore can't have cached any of its memory.
For flush_tlb_mm() this doesn't sound unreasonable.
flush_tlb_range() corresponds to flush_cache_range() which does do full
indexed cache flushes, but only on the icache if the specified mapping
is executable, otherwise it doesn't guarantee that there are no cache
contents left for the mm.
flush_tlb_page() corresponds to flush_cache_page(), which will perform
address based cache ops on the specified page only, and also only
touches the icache if the page is executable. It does not guarantee that
there are no cache contents left for the mm.
For example, this affects flush_cache_range() which uses the
has_valid_asid() optimisation. It is required to flush the icache when
mappings are made executable (e.g. using mprotect) so they are
immediately usable. If some code is changed to non-executable in order
to be modified then it will not be flushed from the icache during that
time, but the ASID on other CPUs may still be cleared for TLB flushing.
When the code is changed back to executable, flush_cache_range() will
assume the code hasn't run on those other CPUs due to the zero ASID, and
won't invalidate the icache on them.
This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the
above two flush_tlb_*() functions when the corresponding cache flushes
are likely to be incomplete (non-executable range flush, or any page
flush). This ASID appears valid to has_valid_asid(), but still triggers
ASID regeneration due to the upper ASID version bits being 0, which is
less than the minimum ASID version of 1 and so always treated as stale.
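A minimal sketch of the change in the SMP flush_tlb_range()/flush_tlb_page()
paths (simplified to the constant 1):

    for_each_online_cpu(cpu) {
        /*
         * 1 still looks like a used ASID to has_valid_asid(), but its
         * zero upper version bits force regeneration on next use.
         */
        if (cpu != smp_processor_id() && cpu_context(cpu, mm))
            cpu_context(cpu, mm) = 1;
    }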
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13795/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
version 6:
rebased patch on top of the rcar-du changes for zpos
version 4:
fix null pointer issue while setting zpos in plane reset function
This patch replaces zpos property handling custom code in rcar DRM
driver with calls to generic DRM code.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Remove the private zpos property and use the new generic one instead.
The zpos range is now fixed per plane type and normalized before
being used in the mixer.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: vincent.abriou@st.com
Cc: fabien.dessenne@st.com
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
version 8:
- move drm_blend.o from drm-y to drm_kms_helper-y to avoid
EXPORT_SYMBOL(drm_atomic_helper_normalize_zpos)
- remove dead function declarations in drm_crtc.h
version 7:
- remove useless EXPORT_SYMBOL()
- better z-order wording in Documentation
version 6:
- add zpos in gpu documentation file
- merge Ville's patch about zpos initial value and API improvement.
I have split Ville's patch between zpos core and drivers
version 5:
- remove zpos range check and come back to the 0 to N-1
normalization algorithm
version 4:
- make sure that the normalized zpos value stays
in the defined property range, and warn the user if not
This patch adds support for a generic plane zpos property with
well-defined semantics:
- added zpos properties to plane and plane state structures
- added helpers for normalizing zpos properties of a given set of planes
- well-defined semantics: planes are sorted by zpos values, and then by
plane id if zpos values are equal
Normalized zpos values are calculated automatically when the generic
mutable zpos property has been initialized. Drivers can simply use
plane_state->normalized_zpos in their atomic_check and/or plane_update
callbacks without any additional calls to DRM core.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Compared to Marek's original patch, the zpos property is now specific to
each plane and no longer in the core.
The normalize function takes the per-plane defined range into account
before setting normalized_zpos.
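A hedged sketch of typical driver usage (the range is illustrative, and
mixer_set_layer_priority() is a hypothetical driver helper):

    /* expose a mutable zpos property with a per-plane range */
    drm_plane_create_zpos_property(plane, 0, 0, num_planes - 1);

    /* in the atomic update path, consume the normalized value */
    mixer_set_layer_priority(ctx, plane->state->normalized_zpos);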
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: vincent.abriou@st.com
Cc: fabien.dessenne@st.com
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
do_sparc64_fault() calculates both the base and huge page RSS sizes and
uses this information in calls to tsb_grow(). The calculation for base
page TSB size is not correct if the task uses hugetlb pages. hugetlb
pages are not accounted for in RSS, therefore the call to get_mm_rss(mm)
does not include hugetlb pages. However, the number of pages based on
huge_pte_count (which does include hugetlb pages) is subtracted from
this value. This will result in an artificially small and often negative
RSS calculation. The base TSB size is then often set to max_tsb_size
as the passed RSS is unsigned, so a negative value looks really big.
THP pages are also accounted for in huge_pte_count, and THP pages are
accounted for in RSS so the calculation in do_sparc64_fault() is correct
if a task only uses THP pages.
A single huge_pte_count is not sufficient for TSB sizing if both hugetlb
and THP pages can be used. Instead of a single counter, use two: one
for hugetlb and one for THP.
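A hedged sketch of the split counters and the corrected sizing (the fault
path arithmetic is condensed):

    typedef struct {
        /* ... */
        unsigned long   hugetlb_pte_count;  /* hugetlb pages only */
        unsigned long   thp_pte_count;      /* THP pages only */
    } mm_context_t;

    /* in do_sparc64_fault(): only THP pages are part of get_mm_rss() */
    mm_rss = get_mm_rss(mm);
    mm_rss -= mm->context.thp_pte_count * (HPAGE_SIZE / PAGE_SIZE);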
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Restore the correct behaviour (as in check msg.reply) when aux
->transfer() returns 0. It got removed in
commit 82922da391 ("drm/dp_helper: Retry aux transactions on all errors")
Now I can actually dump the "entire" DPCD on a Dell UP2314Q with
ddrescue. It has some offsets in the DPCD that can't be read
for some reason, all you get is defers. Previously ddrescue would
just give up at the first unreadable offset on account of
read() returning 0 meaning EOF. Here's the ddrescue log
for the interested:
0x00000000 0x00001400 +
0x00001400 0x00000030 -
0x00001430 0x000001D0 +
0x00001600 0x00000030 -
0x00001630 0x0001F9D0 +
0x00021000 0x00000001 -
0x00021001 0x000DEFFF +
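A hedged, heavily condensed sketch of the kind of check being restored in
the aux retry loop (not the exact diff):

    ret = aux->transfer(aux, &msg);
    if (ret < 0)
        continue;       /* transient error: retry */

    switch (msg.reply & DP_AUX_NATIVE_REPLY_MASK) {
    case DP_AUX_NATIVE_REPLY_ACK:
        return ret;     /* may legitimately be 0 bytes */
    case DP_AUX_NATIVE_REPLY_NACK:
        return -EIO;
    case DP_AUX_NATIVE_REPLY_DEFER:
        break;          /* sink asked us to retry later */
    }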
Cc: Lyude <cpaul@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Fixes: 82922da391 ("drm/dp_helper: Retry aux transactions on all errors")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Merge tag 'trace-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
"This is mostly clean ups and small fixes. Some of the more visible
changes are:
- The function pid code uses the event pid filtering logic
- [ku]probe events have access to current->comm
- trace_printk now has sample code
- PCI devices now trace physical addresses
- stack tracing has fewer unnecessary functions traced"
* tag 'trace-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
printk, tracing: Avoiding unneeded blank lines
tracing: Use __get_str() when manipulating strings
tracing, RAS: Cleanup on __get_str() usage
tracing: Use outer () on __get_str() definition
ftrace: Reduce size of function graph entries
tracing: Have HIST_TRIGGERS select TRACING
tracing: Using for_each_set_bit() to simplify trace_pid_write()
ftrace: Move toplevel init out of ftrace_init_tracefs()
tracing/function_graph: Fix filters for function_graph threshold
tracing: Skip more functions when doing stack tracing of events
tracing: Expose CPU physical addresses (resource values) for PCI devices
tracing: Show the preempt count of when the event was called
tracing: Add trace_printk sample code
tracing: Choose static tp_printk buffer by explicit nesting count
tracing: expose current->comm to [ku]probe events
ftrace: Have set_ftrace_pid use the bitmap like events do
tracing: Move pid_list write processing into its own function
tracing: Move the pid_list seq_file functions to be global
tracing: Move filtered_pid helper functions into trace.c
tracing: Make the pid filtering helper functions global
Merge tag 'vfio-v4.8-rc1' of git://github.com/awilliam/linux-vfio
Pull VFIO updates from Alex Williamson:
- Enable no-iommu mode for platform devices (Peng Fan)
- Sub-page mmap for exclusive pages (Yongji Xie)
- Use-after-free fix (Ilya Lesokhin)
- Support for ACPI-based platform devices (Sinan Kaya)
* tag 'vfio-v4.8-rc1' of git://github.com/awilliam/linux-vfio:
vfio: platform: check reset call return code during release
vfio: platform: check reset call return code during open
vfio, platform: make reset driver a requirement by default
vfio: platform: call _RST method when using ACPI
vfio: platform: add extra debug info argument to call reset
vfio: platform: add support for ACPI probe
vfio: platform: determine reset capability
vfio: platform: move reset call to a common function
vfio: platform: rename reset function
vfio: fix possible use after free of vfio group
vfio-pci: Allow to mmap sub-page MMIO BARs if the mmio page is exclusive
vfio: platform: support No-IOMMU mode
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md
Pull MD updates from Shaohua Li:
- A bunch of patches from Neil Brown to fix RCU usage
- Two performance improvement patches from Tomasz Majchrzak
- Alexey Obitotskiy fixes a module refcount issue
- Arnd Bergmann fixes time granularity
- Cong Wang fixes a list corruption issue
- Guoqing Jiang fixes a deadlock in md-cluster
- A null pointer dereference fix from me
- Song Liu fixes misuse of raid6 rmw
- Other trivial/cleanup fixes from Guoqing Jiang and Xiao Ni
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md: (28 commits)
MD: fix null pointer deference
raid10: improve random reads performance
md: add missing sysfs_notify on array_state update
Fix kernel module refcount handling
md: use seconds granularity for error logging
md: reduce the number of synchronize_rcu() calls when multiple devices fail.
md: be extra careful not to take a reference to a Faulty device.
md/multipath: add rcu protection to rdev access in multipath_status.
md/raid5: add rcu protection to rdev accesses in raid5_status.
md/raid5: add rcu protection to rdev accesses in want_replace
md/raid5: add rcu protection to rdev accesses in handle_failed_sync.
md/raid1: add rcu protection to rdev in fix_read_error
md/raid1: small code cleanup in end_sync_write
md/raid1: small cleanup in raid1_end_read/write_request
md/raid10: simplify print_conf a little.
md/raid10: minor code improvement in fix_read_error()
md/raid10: add rcu protection to rdev access during reshape.
md/raid10: add rcu protection to rdev access in raid10_sync_request.
md/raid10: add rcu protection in raid10_status.
md/raid10: fix refounct imbalance when resyncing an array with a replacement device.
...
Merge tag 'libnvdimm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
- Replace pcommit with ADR / directed-flushing.
The pcommit instruction, which has not shipped on any product, is
deprecated. Instead, the requirement is that platforms implement
either ADR, or provide one or more flush addresses per nvdimm.
ADR (Asynchronous DRAM Refresh) flushes data in posted write buffers
to the memory controller on a power-fail event.
Flush addresses are defined in ACPI 6.x as an NVDIMM Firmware
Interface Table (NFIT) sub-structure: "Flush Hint Address Structure".
A flush hint is an mmio address that, when written and fenced, assures
that all previous posted writes targeting a given dimm have been
flushed to media (a minimal sketch follows this list).
- On-demand ARS (address range scrub).
Linux uses the results of the ACPI ARS commands to track bad blocks
in pmem devices. When latent errors are detected we re-scrub the
media to refresh the bad block list, userspace can also request a
re-scrub at any time.
- Support for the Microsoft DSM (device specific method) command
format.
- Support for EDK2/OVMF virtual disk device memory ranges.
- Various fixes and cleanups across the subsystem.
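A hedged sketch of a flush-hint write, shaped like libnvdimm's
nvdimm_flush() but simplified to a single hint address:

    wmb();                  /* order prior stores to the pmem range */
    writeq(1, flush_hint);  /* the value is irrelevant; the dimm's
                             * posted-write queues are flushed by the
                             * mmio write itself */
    wmb();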
* tag 'libnvdimm-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm: (41 commits)
libnvdimm-btt: Delete an unnecessary check before the function call "__nd_device_register"
nfit: do an ARS scrub on hitting a latent media error
nfit: move to nfit/ sub-directory
nfit, libnvdimm: allow an ARS scrub to be triggered on demand
libnvdimm: register nvdimm_bus devices with an nd_bus driver
pmem: clarify a debug print in pmem_clear_poison
x86/insn: remove pcommit
Revert "KVM: x86: add pcommit support"
nfit, tools/testing/nvdimm/: unify shutdown paths
libnvdimm: move ->module to struct nvdimm_bus_descriptor
nfit: cleanup acpi_nfit_init calling convention
nfit: fix _FIT evaluation memory leak + use after free
tools/testing/nvdimm: add manufacturing_{date|location} dimm properties
tools/testing/nvdimm: add virtual ramdisk range
acpi, nfit: treat virtual ramdisk SPA as pmem region
pmem: kill __pmem address space
pmem: kill wmb_pmem()
libnvdimm, pmem: use nvdimm_flush() for namespace I/O writes
fs/dax: remove wmb_pmem()
libnvdimm, pmem: flush posted-write queues on shutdown
...
Merge tag 'pinctrl-v4.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pin control updates from Linus Walleij:
"This is the bulk of pin control changes for the v4.8 kernel cycle.
Nothing stands out as especially exciting: new drivers, new subdrivers,
lots of cleanups and incremental features.
Business as usual.
New drivers:
- New driver for Oxnas pin control and GPIO. This ARM-based chipset
is used in a few storage (NAS) type devices.
- New driver for the MAX77620/MAX20024 pin controller portions.
- New driver for the Intel Merrifield pin controller.
New subdrivers:
- New subdriver for the Qualcomm MDM9615
- New subdriver for the STM32F746 MCU
- New subdriver for the Broadcom NSP SoC.
Cleanups:
- Demodularization of bool compiled-in drivers.
Apart from this there is just regular incremental improvements to a
lot of drivers, especially Uniphier and PFC"
* tag 'pinctrl-v4.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl: (131 commits)
pinctrl: fix pincontrol definition for marvell
pinctrl: xway: fix typo
Revert "pinctrl: amd: make it explicitly non-modular"
pinctrl: iproc: Add NSP and Stingray GPIO support
pinctrl: Update iProc GPIO DT bindings
pinctrl: bcm: add OF dependencies
pinctrl: ns2: remove redundant dev_err call in ns2_pinmux_probe()
pinctrl: Add STM32F746 MCU support
pinctrl: intel: Protect set wake flow by spin lock
pinctrl: nsp: remove redundant dev_err call in nsp_pinmux_probe()
pinctrl: uniphier: add Ethernet pin-mux settings
sh-pfc: Use PTR_ERR_OR_ZERO() to simplify the code
pinctrl: ns2: fix return value check in ns2_pinmux_probe()
pinctrl: qcom: update DT bindings with ebi2 groups
pinctrl: qcom: establish proper EBI2 pin groups
pinctrl: imx21: Remove the MODULE_DEVICE_TABLE() macro
Documentation: dt: Add new compatible to STM32 pinctrl driver bindings
includes: dt-bindings: Add STM32F746 pinctrl DT bindings
pinctrl: sunxi: fix nand0 function name for sun8i
pinctrl: uniphier: remove pointless pin-mux settings for PH1-LD11
...
Merge more updates from Andrew Morton:
"The rest of MM"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (101 commits)
mm, compaction: simplify contended compaction handling
mm, compaction: introduce direct compaction priority
mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocations
mm, page_alloc: make THP-specific decisions more generic
mm, page_alloc: restructure direct compaction handling in slowpath
mm, page_alloc: don't retry initial attempt in slowpath
mm, page_alloc: set alloc_flags only once in slowpath
lib/stackdepot.c: use __GFP_NOWARN for stack allocations
mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB
mm, kasan: account for object redzone in SLUB's nearest_obj()
mm: fix use-after-free if memory allocation failed in vma_adjust()
zsmalloc: Delete an unnecessary check before the function call "iput"
mm/memblock.c: fix index adjustment error in __next_mem_range_rev()
mem-hotplug: alloc new page from a nearest neighbor node when mem-offline
mm: optimize copy_page_to/from_iter_iovec
mm: add cond_resched() to generic_swapfile_activate()
Revert "mm, mempool: only set __GFP_NOMEMALLOC if there are free elements"
mm, compaction: don't isolate PageWriteback pages in MIGRATE_SYNC_LIGHT mode
mm: hwpoison: remove incorrect comments
make __section_nr() more efficient
...
The unused bytes of the features array should be zeroed, but the start index was one
byte too early. This caused the device features byte to be overwritten by 0.
The compliance test for the CEC_S_LOG_ADDRS ioctl didn't catch this because it tested
byte continuation with the second device features byte being 0 :-(
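A minimal sketch of the off-by-one (index names are illustrative):

    /* zero the unused tail of the features array */
    memset(features + num_features, 0, sizeof(features) - num_features);
    /* the bug: starting at num_features - 1 wiped the last valid byte */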
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
staging media drivers tend to have a build time dependency on the
media support. In particular, the newly added pulse8 cec driver can
only be a loadable module if MEDIA_SUPPORT=m, but its build dependency
is on a 'bool' symbol (MEDIA_CEC), so a randconfig build can fail
with pulse8_cec built-in:
drivers/staging/built-in.o: In function `pulse8_disconnect':
dgnc_utils.c:(.text+0x114): undefined reference to `cec_unregister_adapter'
drivers/staging/built-in.o: In function `pulse8_irq_work_handler':
dgnc_utils.c:(.text+0x1bc): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x1d8): undefined reference to `cec_received_msg'
dgnc_utils.c:(.text+0x1f4): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x218): undefined reference to `cec_transmit_done'
dgnc_utils.c:(.text+0x23c): undefined reference to `cec_transmit_done'
drivers/staging/built-in.o: In function `pulse8_connect':
dgnc_utils.c:(.text+0x844): undefined reference to `cec_allocate_adapter'
dgnc_utils.c:(.text+0x8a4): undefined reference to `cec_delete_adapter'
dgnc_utils.c:(.text+0xa10): undefined reference to `cec_register_adapter'
Originally, MEDIA_CEC itself was a tristate symbol, which would have
prevented this, but since 5bb2399a4f ("[media] cec: fix Kconfig
dependency problems"), it doesn't work like that any more.
This encloses all of the staging media drivers in a CONFIG_MEDIA_SUPPORT
dependency in Kconfig, which solves the problem by enforcing that none
of the drivers can be built-in if the media core is a module.
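A hedged sketch of one possible shape of the change (the exact Kconfig
layout is illustrative):

    # drivers/staging/media/Kconfig
    if MEDIA_SUPPORT

    menuconfig STAGING_MEDIA
        bool "Media staging drivers"

    # ... existing driver entries ...

    endif

Every tristate symbol inside the if/endif block inherits a dependency on
MEDIA_SUPPORT, so the drivers are capped at =m whenever the media core is
built as a module.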
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
vivid shouldn't process the CEC_MSG_SET_STREAM_PATH message: this will confuse
userspace follower code because it isn't aware of the state change of becoming
an active source.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Access to the interrupt page registers has been broken since at least
commit 3999e5d01d ("[media] adv7180: Do implicit register paging").
That commit forgot to add the interrupt page number to the register
defines.
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
Tested-by: Tim Harvey <tharvey@gateworks.com>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
When allocating memory to hold the device dma parameters in
vb2_dma_contig_set_max_seg_size(), the requested size is by mistake only
the size of a pointer. Request the correct size instead.
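A minimal sketch of the fix:

    /* before: allocates only sizeof(struct device_dma_parameters *) */
    dev->dma_parms = kzalloc(sizeof(dev->dma_parms), GFP_KERNEL);

    /* after: allocate the full structure */
    dev->dma_parms = kzalloc(sizeof(*dev->dma_parms), GFP_KERNEL);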
Fixes: 3f03396918 ("media: vb2-dma-contig: add helper for setting dma max seg size")
Signed-off-by: Vincent Stehlé <vincent.stehle@laposte.net>
Cc: Sylwester Nawrocki <s.nawrocki@samsung.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
The xfer_func, ycbcr_enc and quantization fields should also be copied from
output to capture format.
Since this driver serves as example code it is important that this is handled
correctly.
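A hedged sketch of the copy (queue data names are illustrative of a mem2mem
driver):

    /* propagate colorimetry from the OUTPUT format to the CAPTURE format */
    q_data_cap->xfer_func    = q_data_out->xfer_func;
    q_data_cap->ycbcr_enc    = q_data_out->ycbcr_enc;
    q_data_cap->quantization = q_data_out->quantization;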
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
The adv7511 will automatically fill in the VIC code in the AVI InfoFrame
based on the timings of the incoming pixelport signals.
However, for this to work correctly the driver needs to program the fps
value into a register. After doing this the proper VIC code is filled in.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Now that all media documentation was converted to Sphinx, we
should get rid of the old DocBook one, as we don't want people
to submit patches against the old stuff.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Acked-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Async compaction detects contention either due to failing trylock on
zone->lock or lru_lock, or by need_resched(). Since 1f9efdef4f ("mm,
compaction: khugepaged should not give up due to need_resched()") the
code got quite complicated to distinguish these two up to the
__alloc_pages_slowpath() level, so different decisions could be taken
for khugepaged allocations.
After the recent changes, khugepaged allocations don't check for
contended compaction anymore, so we again don't need to distinguish lock
and sched contention, and simplify the current convoluted code a lot.
However, I believe it's also possible to simplify even more and
completely remove the check for contended compaction after the initial
async compaction for costly orders, which was originally aimed at THP
page fault allocations. There are several reasons why this can be done
now:
- with the new defaults, THP page faults no longer do reclaim/compaction at
all, unless the system admin has overridden the default, or the application
has indicated via madvise that it can benefit from THPs. In both cases, it
means that the potential extra latency is expected and worth the benefits.
- even if reclaim/compaction proceeds after this patch where it previously
wouldn't, the second compaction attempt is still async and will detect the
contention and back off, if the contention persists
- there are still heuristics like deferred compaction and pageblock skip bits
in place that prevent excessive THP page fault latencies
Link: http://lkml.kernel.org/r/20160721073614.24395-9-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In the context of direct compaction, for some types of allocations we
would like the compaction to either succeed or definitely fail while
trying as hard as possible. Current async/sync_light migration mode is
insufficient, as there are heuristics such as caching scanner positions,
marking pageblocks as unsuitable or deferring compaction for a zone. At
least the final compaction attempt should be able to override these
heuristics.
To communicate how hard compaction should try, we replace migration mode
with a new enum compact_priority and change the relevant function
signatures. In compact_zone_order() where struct compact_control is
constructed, the priority is mapped to suitable control flags. This
patch itself has no functional change, as the current priority levels
are mapped back to the same migration modes as before. Expanding them
will be done next.
Note that !CONFIG_COMPACTION variant of try_to_compact_pages() is
removed, as the only caller exists under CONFIG_COMPACTION.
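A hedged sketch of the new enum and its (for now) 1:1 mapping back to
migration modes:

    enum compact_priority {
        COMPACT_PRIO_SYNC_LIGHT,
        MIGRATE_SYNC_LIGHT = COMPACT_PRIO_SYNC_LIGHT,
        DEF_COMPACT_PRIORITY = COMPACT_PRIO_SYNC_LIGHT,
        COMPACT_PRIO_ASYNC,
        INIT_COMPACT_PRIORITY = COMPACT_PRIO_ASYNC
    };

    /* in compact_zone_order(): map the priority to control flags */
    cc.mode = (enum migrate_mode)prio;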
Link: http://lkml.kernel.org/r/20160721073614.24395-8-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
After the previous patch, we can distinguish costly allocations that
should be really lightweight, such as THP page faults, with
__GFP_NORETRY. This means we don't need to recognize khugepaged
allocations via PF_KTHREAD anymore. We can also change THP page faults
in areas where madvise(MADV_HUGEPAGE) was used to try as hard as
khugepaged, as the process has indicated that it benefits from THPs and
is willing to pay some initial latency costs.
We can also make the flags handling less cryptic by distinguishing
GFP_TRANSHUGE_LIGHT (no reclaim at all, default mode in page fault) from
GFP_TRANSHUGE (only direct reclaim, khugepaged default). Adding
__GFP_NORETRY or __GFP_KSWAPD_RECLAIM is done where needed.
The patch effectively changes the current GFP_TRANSHUGE users as
follows:
* get_huge_zero_page() - the zero page lifetime should be relatively
long and it's shared by multiple users, so it's worth spending some
effort on it. We use GFP_TRANSHUGE, and __GFP_NORETRY is not added.
This also restores direct reclaim to this allocation, which was
unintentionally removed by commit e4a49efe4e7e ("mm: thp: set THP defrag
by default to madvise and add a stall-free defrag option")
* alloc_hugepage_khugepaged_gfpmask() - this is khugepaged, so latency
is not an issue. So if khugepaged "defrag" is enabled (the default), do
reclaim via GFP_TRANSHUGE without __GFP_NORETRY. We can remove the
PF_KTHREAD check from page alloc.
As a side-effect, khugepaged will now no longer check if the initial
compaction was deferred or contended. This is OK, as khugepaged sleep
times between collapse attempts are long enough to prevent noticeable
disruption, so we should allow it to spend some effort.
* migrate_misplaced_transhuge_page() - already was masking out
__GFP_RECLAIM, so just convert to GFP_TRANSHUGE_LIGHT which is
equivalent.
* alloc_hugepage_direct_gfpmask() - vma's with VM_HUGEPAGE (via madvise)
are now allocating without __GFP_NORETRY. Other vma's keep using
__GFP_NORETRY if direct reclaim/compaction is at all allowed (by default
it's allowed only for madvised vma's). The rest is conversion to
GFP_TRANSHUGE(_LIGHT).
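A minimal sketch of the two resulting flag combinations:

    #define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
                                  __GFP_NOMEMALLOC | __GFP_NOWARN) & \
                                 ~__GFP_RECLAIM)
    #define GFP_TRANSHUGE       (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)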
[mhocko@suse.com: suggested GFP_TRANSHUGE_LIGHT]
Link: http://lkml.kernel.org/r/20160721073614.24395-7-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since THP allocations during page faults can be costly, extra decisions
are employed for them to avoid excessive reclaim and compaction, if the
initial compaction doesn't look promising. The detection has never been
perfect as there is no gfp flag specific to THP allocations. At this
moment it checks the whole combination of flags that makes up
GFP_TRANSHUGE, and hopes that no other users of such combination exist,
or would mind being treated the same way. Extra care is also taken to
separate allocations from khugepaged, where latency doesn't matter that
much.
It is however possible to distinguish these allocations in a simpler and
more reliable way. The key observation is that after the initial
compaction followed by the first iteration of "standard"
reclaim/compaction, both __GFP_NORETRY allocations and costly
allocations without __GFP_REPEAT are declared as failures:
    /* Do not loop if specifically requested */
    if (gfp_mask & __GFP_NORETRY)
        goto nopage;

    /*
     * Do not retry costly high order allocations unless they are
     * __GFP_REPEAT
     */
    if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
        goto nopage;
This means we can further distinguish allocations that are costly order
*and* additionally include the __GFP_NORETRY flag. As it happens,
GFP_TRANSHUGE allocations do already fall into this category. This will
also allow other costly allocations with similar high-order benefit vs
latency considerations to use this semantic. Furthermore, we can
distinguish THP allocations that should try a bit harder (such as from
khugepageed) by removing __GFP_NORETRY, as will be done in the next
patch.
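A minimal sketch of the resulting flag-based test replacing the
GFP_TRANSHUGE mask comparison:

    /* lightweight, latency-sensitive costly allocation, e.g. THP fault? */
    if (order > PAGE_ALLOC_COSTLY_ORDER && (gfp_mask & __GFP_NORETRY)) {
        /* ... only light compaction, give up early ... */
    }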
Link: http://lkml.kernel.org/r/20160721073614.24395-6-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The retry loop in __alloc_pages_slowpath is supposed to keep trying
reclaim and compaction (and OOM), until either the allocation succeeds,
or returns with failure. Success here is more probable when reclaim
precedes compaction, as certain watermarks have to be met for compaction
to even try, and more free pages increase the probability of compaction
success. On the other hand, starting with light async compaction (if
the watermarks allow it), can be more efficient, especially for smaller
orders, if there's enough free memory which is just fragmented.
Thus, the current code starts with compaction before reclaim, and to
make sure that the last reclaim is always followed by a final
compaction, there's another direct compaction call at the end of the
loop. This makes the code hard to follow and adds some duplicated
handling of migration_mode decisions. It's also somewhat inefficient
that even if reclaim or compaction decides not to retry, the final
compaction is still attempted. Some gfp flag combinations also shortcut
these retry decisions by "goto noretry;", making it even harder to
follow.
This patch attempts to restructure the code with only minimal functional
changes. The call to the first compaction and THP-specific checks are
now placed above the retry loop, and the "noretry" direct compaction is
removed.
The initial compaction is additionally restricted only to costly orders,
as we can expect smaller orders to be held back by watermarks, and only
larger orders to suffer primarily from fragmentation. This better
matches the checks in reclaim's shrink_zones().
There are two other smaller functional changes. One is that the upgrade
from async migration to light sync migration will always occur after the
initial compaction. This is how it has been until recent patch "mm,
oom: protect !costly allocations some more", which introduced upgrading
the mode based on COMPACT_COMPLETE result, but kept the final compaction
always upgraded, which made it even more special. It's better to return
to the simpler handling for now, as migration modes will be further
modified later in the series.
The second change is that once both reclaim and compaction declare it's
not worth to retry the reclaim/compact loop, there is no final
compaction attempt. As argued above, this is intentional. If that
final compaction were to succeed, it would be due to a wrong retry
decision, or simply a race with somebody else freeing memory for us.
The main outcome of this patch should be simpler code. Logically, the
initial compaction without reclaim is the exceptional case to the
reclaim/compaction scheme, but prior to the patch, it was the last loop
iteration that was exceptional. Now the code matches the logic better.
The change also enables the following patches.
Link: http://lkml.kernel.org/r/20160721073614.24395-5-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>