add arch.c in mach-bcmring
add related header files in mach-bcmring
Signed-off-by: Leo Chen <leochen@broadcom.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide more useful introduction for w90x900
Signed-off-by: Wan ZongShun <mcuos.com@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The semaphore is used as a mutex, so make it a mutex.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Wan ZongShun <mcuos.com@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add clocksource/clockevent support for w90p910 platform.
Signed-off-by: Wan ZongShun <mcuos.com@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The u300_init_check_chip() function was not properly tagged with
the __init macro, which caused an init section mismatch warning on
compilation.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds support for TIF_RESTORE_SIGMASK to ARM's
signal handling, which allows hooking up the pselect6, ppoll,
and epoll_pwait syscalls on ARM.
Tested here with eabi userspace and a test program with a
deliberate race between a child's exit and the parent's
sigprocmask/select sequence. Using sys_pselect6() instead
of sigprocmask/select reliably prevents the race.
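Purely as an illustration (this is not the test program referred to
above), a minimal userspace sketch of that race and of how pselect()
closes it:

  /*
   * Hypothetical illustration: a SIGCHLD arriving between sigprocmask()
   * and select() is lost until something else wakes select() up, while
   * pselect() swaps the mask and sleeps in a single atomic step.
   */
  #include <signal.h>
  #include <stdio.h>
  #include <sys/select.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static volatile sig_atomic_t child_exited;

  static void on_sigchld(int sig)
  {
          (void)sig;
          child_exited = 1;
  }

  int main(void)
  {
          sigset_t blocked, orig;
          struct sigaction sa = { .sa_handler = on_sigchld };

          sigemptyset(&sa.sa_mask);
          sigaction(SIGCHLD, &sa, NULL);

          sigemptyset(&blocked);
          sigaddset(&blocked, SIGCHLD);
          sigprocmask(SIG_BLOCK, &blocked, &orig);

          if (fork() == 0)
                  _exit(0);               /* child exits immediately */

          /*
           * Racy variant, for comparison:
           *   sigprocmask(SIG_SETMASK, &orig, NULL);
           *   select(0, NULL, NULL, NULL, NULL);
           * The SIGCHLD can be delivered after the mask is restored but
           * before select() sleeps, so the wakeup is lost.
           *
           * pselect() installs &orig and blocks atomically, so the
           * pending SIGCHLD always interrupts it with EINTR.
           */
          if (pselect(0, NULL, NULL, NULL, NULL, &orig) < 0 && child_exited) {
                  int status;

                  wait(&status);
                  printf("child reaped, no race\n");
          }
          return 0;
  }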
Other architectures' support for TIF_RESTORE_SIGMASK has evolved
over time:
In 2.6.16:
- add TIF_RESTORE_SIGMASK which parallels TIF_SIGPENDING
- test both when checking for pending signal [changed later]
- reimplement sys_sigsuspend() to use current->saved_sigmask,
TIF_RESTORE_SIGMASK [changed later], and -ERESTARTNOHAND;
ditto for sys_rt_sigsuspend(), but drop private code and
use common code via __ARCH_WANT_SYS_RT_SIGSUSPEND;
- there are now no "extra" calls to do_signal(), so its oldset
parameter is always &current->blocked and need not be passed;
also its return value is changed to void
- change handle_signal() to return 0/-errno
- change do_signal() to honor TIF_RESTORE_SIGMASK:
+ get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
is set
+ if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
+ if no signal was delivered and TIF_RESTORE_SIGMASK is set then
clear it and restore the sigmask
- hook up sys_pselect6() and sys_ppoll()
In 2.6.19:
- hook up sys_epoll_pwait()
In 2.6.26:
- allow archs to override how TIF_RESTORE_SIGMASK is implemented;
default set_restore_sigmask() sets both TIF_RESTORE_SIGMASK and
TIF_SIGPENDING; archs now need only test TIF_SIGPENDING
when checking for pending signal work; some archs now implement
TIF_RESTORE_SIGMASK as a secondary/non-atomic thread flag bit
- call set_restore_sigmask() in sys_sigsuspend() instead of setting
TIF_RESTORE_SIGMASK
In 2.6.29-rc:
- kill sys_pselect7() which no arch wanted
So for 2.6.31-rc6/ARM this patch does the following:
- Add TIF_RESTORE_SIGMASK. Use the generic set_restore_sigmask()
which sets both TIF_SIGPENDING and TIF_RESTORE_SIGMASK, so
TIF_RESTORE_SIGMASK need not claim one of the scarce low thread
flags, and existing TIF_SIGPENDING and _TIF_WORK_MASK tests need
not be extended for TIF_RESTORE_SIGMASK.
- sys_sigsuspend() is reimplemented to use current->saved_sigmask
and set_restore_sigmask(), making it identical to most other archs.
- The private code for sys_rt_sigsuspend() is removed; instead,
generic code supplies it via __ARCH_WANT_SYS_RT_SIGSUSPEND.
- sys_sigsuspend() and sys_rt_sigsuspend() no longer need a pt_regs
parameter, so their assembly code wrappers are removed.
- handle_signal() is changed to return 0 on success or -errno.
- The oldset parameter to do_signal() is now redundant and removed,
and the return value is now also redundant and changed to void.
- do_signal() is changed to honor TIF_RESTORE_SIGMASK (see the sketch
after this list):
+ get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
is set
+ if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
+ if no signal was delivered and TIF_RESTORE_SIGMASK is set then
clear it and restore the sigmask
- Hook up sys_pselect6, sys_ppoll, and sys_epoll_pwait.
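A condensed sketch of the resulting do_signal() flow (signatures
simplified and syscall restart handling omitted):

  static void do_signal(struct pt_regs *regs, int syscall)
  {
          struct k_sigaction ka;
          siginfo_t info;
          sigset_t *oldset;
          int signr;

          /* Use the mask saved by sigsuspend/rt_sigsuspend/pselect6/ppoll
           * if one is pending, otherwise the normal blocked mask. */
          if (test_thread_flag(TIF_RESTORE_SIGMASK))
                  oldset = &current->saved_sigmask;
          else
                  oldset = &current->blocked;

          signr = get_signal_to_deliver(&info, &ka, regs, NULL);
          if (signr > 0) {
                  if (handle_signal(signr, &ka, &info, oldset, regs, syscall) == 0) {
                          /* The saved sigmask is now stored in the signal
                           * frame and will be restored by sigreturn, so the
                           * flag can simply be cleared. */
                          if (test_thread_flag(TIF_RESTORE_SIGMASK))
                                  clear_thread_flag(TIF_RESTORE_SIGMASK);
                  }
                  return;
          }

          /* No signal was delivered: put the saved sigmask back. */
          if (test_thread_flag(TIF_RESTORE_SIGMASK)) {
                  clear_thread_flag(TIF_RESTORE_SIGMASK);
                  sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL);
          }
  }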
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, highmem is selectable, and you can request an increased
vmalloc area. However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.
The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6 for details on
this).
We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.
Currently, memory nodes which are completely located in high memory
are not supported. This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.
[ A similar patch was meant to be merged before commit 5f0fbf9eca
and be available in Linux v2.6.30, however some git rebase screw-up
of mine dropped the first commit of the series, and that goofage
escaped testing somehow as well. -- Nico ]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
Conflicts:
kernel/perf_counter.c
Merge reason: update to latest upstream (-rc6) and resolve
the conflict with urgent fixes.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The SGI UV Broadcast Assist Unit is used to send TLB shootdown
messages to remote nodes of the system. The header of the
message must contain the subnode id of the block in the
receiving hub that handles such messages. It should always be
0x10, the id of the "LB" block.
It had previously been documented as a "must be zero" field.
Signed-off-by: Cliff Wickman <cpw@sgi.com>
Acked-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <E1Mc1x7-0005Ce-6t@eag09.americas.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add the new function read_boot_clock to get the exact time the system
was started. For architectures without support for an exact boot
time, a new weak function is added that returns 0. Use the exact boot
time to initialize wall_to_monotonic, or fall back to xtime if
read_boot_clock returned 0.
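A sketch of the weak default, assuming the timespec-based convention of
the read_persistent_clock change described in the following entry (a
zeroed result meaning "no exact boot time available, fall back to
xtime"):

  /*
   * Weak default for architectures without exact boot time support:
   * report zero so that timekeeping_init() falls back to xtime when
   * setting up wall_to_monotonic.
   */
  void __attribute__((weak)) read_boot_clock(struct timespec *ts)
  {
          ts->tv_sec = 0;
          ts->tv_nsec = 0;
  }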
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134811.296703241@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The persistent clock on some architectures (e.g. s390) has better
granularity than seconds. To reduce the delta between the host clock
and the guest clock in a virtualized system, change the
read_persistent_clock function to return a struct timespec.
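Roughly, the interface change looks like this (a sketch; whether the
timespec is returned by value or filled in through a pointer is an
implementation detail, the point is the sub-second resolution):

  /* Before: whole seconds only. */
  unsigned long read_persistent_clock(void);

  /* After: architectures such as s390 can report sub-second resolution. */
  void read_persistent_clock(struct timespec *ts);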
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134811.013873340@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The clocksource structure has two multipliers, the unmodified multiplier
clock->mult_orig and the NTP-corrected multiplier clock->mult. The NTP
multiplier is misplaced in struct clocksource; it is private
information of the timekeeping code. Add a mult field to struct
timekeeper to contain the NTP-corrected value, keep the unmodified
multiplier in clock->mult, and remove clock->mult_orig.
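Schematically, the end state looks like this (a sketch, not the
literal diff):

  struct clocksource {
          /* ... */
          u32 mult;       /* unmodified calibration value; mult_orig is gone */
          /* ... */
  };

  struct timekeeper {
          struct clocksource *clock;
          u32 mult;       /* NTP-corrected multiplier, private to timekeeping.c */
          /* ... */
  };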
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134810.149047645@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Add struct timekeeper to keep the internal values timekeeping.c needs
with regard to the currently selected clock source. This moves the
timekeeping intervals, xtime_nsec and the ntp error value from struct
clocksource to struct timekeeper. The raw_time is removed from the
clocksource as well; it gets treated like xtime, as a global variable.
Eventually xtime and raw_time should be moved to struct timekeeper.
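A sketch of the new structure, limited to the fields mentioned above
(names and types illustrative):

  /* Internal to timekeeping.c: state for the currently selected clocksource. */
  struct timekeeper {
          struct clocksource *clock;      /* current clocksource */
          cycle_t cycle_interval;         /* clock cycles per NTP interval */
          u64 xtime_interval;             /* shifted nanoseconds per NTP interval */
          u64 xtime_nsec;                 /* shifted nanosecond remainder of xtime */
          s64 ntp_error;                  /* accumulated NTP error (shifted) */
          int ntp_error_shift;            /* shift between clock and error units */
  };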
[ tglx: minor cleanup ]
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134809.613209842@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
If a non-high-resolution clocksource is first set as the override clock
and then registered, it becomes active even if the system is in one-shot
mode. Move the override check from sysfs_override_clocksource to the
clocksource selection. That fixes the bug and simplifies the code. The
check in clocksource_register for double registration of the same
clocksource is removed without replacement.
To find the initial clocksource a new weak function in jiffies.c is
defined that returns the jiffies clocksource. The architecture code
can then override the weak function with a more suitable clocksource,
e.g. the TOD clock on s390.
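The weak default amounts to something like this (a sketch; the function
name is assumed here, not taken from the patch):

  /* Weak default in jiffies.c: the jiffies clocksource is always available.
   * Architectures (e.g. s390 with its TOD clock) can override this. */
  struct clocksource * __init __weak clocksource_default_clock(void)
  {
          return &clocksource_jiffies;
  }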
[ tglx: Folded in a fix from John Stultz ]
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134808.388024160@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
change_clocksource resets the cycle_last value to zero and then sets it to
a value read from the clocksource. The reset to zero is required only
for the TSC clocksource to make the read_tsc function work after a
resume. The reason is that the TSC read function uses cycle_last to
detect backwards-going TSCs. In the resume case cycle_last contains
the TSC value from the last update before the suspend. On resume the
TSC starts counting from 0 again and would trip over the cycle_last
comparison.
This is subtle and surprising. Move the reset to a resume function in
the tsc code.
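The gist of the move, as a sketch (assuming the clocksource's resume
callback is the hook used; see arch/x86/kernel/tsc.c for the real
layout):

  static struct clocksource clocksource_tsc;

  /* Reset cycle_last on resume so the backwards-TSC safeguard in
   * read_tsc() does not trip over the pre-suspend value. */
  static void resume_tsc(void)
  {
          clocksource_tsc.cycle_last = 0;
  }

  static struct clocksource clocksource_tsc = {
          .name   = "tsc",
          /* ... */
          .resume = resume_tsc,
  };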
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: John Stultz <johnstul@us.ibm.com>
Cc: Daniel Walker <dwalker@fifo99.com>
LKML-Reference: <20090814134808.142191175@de.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This is superfluous, as the default CPU type and family are already
established by the initial cpuinfo definition. Given that we are still
able to probe for the CPU family even if we are not able to detect the
subtype, it's preferable to let the probing code fill out what it can and
leave the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch simply adds LCDC hwblk_id data for the kfr2r09 board.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the SuperH Mobile sleep assembly code with
support for DBSC memory controller found in the sh7724 processor.
Without this fix the memory hooked up to the sh7724 processor
will never enter self-refresh mode before suspending to RAM. The
effect of this is that the memory contents will most likely be
lost upon resume, which may or may not be what you want.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch updates the Solution Engine 7724 board code to use
in-SoC KEYSC resources for the keyboard platform device. Using
the in-SoC key scan controller fixes a crash-during-resume issue.
Without this patch the KEYSC hardware block located in the
board-specific FPGA is used together with an external IRQ which is
routed through the FPGA and handled by some board-specific demux
code. This board-specific FPGA interrupt code does not implement
desc->set_wake(), so the enable_irq_wake() call in the sh_keysc
driver will fail at suspend-to-RAM time and the disable_irq_wake()
will bomb out when resuming.
Changing the platform data to use the in-SoC KEYSC hardware makes
the se7724 board support code less special, which is a good thing.
Also, the board-specific KEYSC pin setup code already selects
in-SoC pin functions, which makes the current FPGA platform device
data look like a typo.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
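As a sketch of the pattern (symbol names here are illustrative
assumptions, not necessarily the ones used in the tree): the common
code exports function pointers preloaded with a no-op, and each
family's init hook overrides only what it cares about.

  /* Sketch: common cache code with overridable per-family flushers. */
  static void cache_noop(void *args)
  {
          /* nothing to do for CPUs without this requirement */
  }

  void (*local_flush_cache_all)(void *args) = cache_noop;
  void (*local_flush_cache_page)(void *args) = cache_noop;
  void (*local_flush_dcache_page)(void *args) = cache_noop;

  void __init cpu_cache_init(void)
  {
          /* SH-4 overrides pretty much everything; other families keep
           * most of the no-ops and wire up only their region flushers. */
          if (boot_cpu_data.family == CPU_FAMILY_SH4)
                  sh4_cache_init();
  }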
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All of the flush_dcache_mmap_lock()/flush_dcache_mmap_unlock()
definitions are identical across all CPUs, so just provide them
generically in asm/cacheflush.h.
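Assuming they remain the usual no-ops, the shared definition is simply:

  /* asm/cacheflush.h: identical on all CPUs, so defined once. */
  #define flush_dcache_mmap_lock(mapping)         do { } while (0)
  #define flush_dcache_mmap_unlock(mapping)       do { } while (0)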
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a family member to struct sh_cpuinfo, which allows us to fall
back more on the probe routines to work out what sort of subtype we are
running on. This will be used by the CPU cache initialization code in
order to first do family-level initialization, followed by subtype-level
optimizations.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The way that the CFA is calculated can change as we progress through a
function. If we see a DW_CFA_def_cfa_register op we need to reset the
frame's cfa_offset value which may have been previously setup.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements EXPMASK initialization code for SH-4A parts, where it is
possible to disable compat features that will go away in newer cores.
Presently this includes disabling support for non-nop instructions in the
rte delay slot, as well as a sleep instruction being placed in a delay
slot (neither of which the kernel does any longer). As a result of this,
any future offenders will have illegal slot exceptions generated for
them.
Associative writes for the memory-mapped cache array are still left
enabled, until such a point that special cache operations for SH-4A are
provided to move off of the current (and rather dated) SH-4 versions.
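A sketch of the kind of setup involved (the register address and bit
names below are placeholders, not the documented values):

  #define EXPMASK_REG     0xff2f0004      /* placeholder address */
  #define EXPMASK_RTEDS   (1 << 0)        /* non-nop insn in rte delay slot */
  #define EXPMASK_BRDSSLP (1 << 1)        /* sleep insn in a delay slot */
  #define EXPMASK_MMCAW   (1 << 4)        /* memory-mapped cache assoc. writes */

  static void __init expmask_init(void)
  {
          unsigned long expmask = __raw_readl(EXPMASK_REG);

          /* Disable the compat features the kernel no longer relies on;
           * future offenders will take an illegal slot exception. */
          expmask &= ~(EXPMASK_RTEDS | EXPMASK_BRDSSLP);

          /* Associative writes stay enabled until SH-4A specific cache
           * ops replace the current SH-4 versions. */
          __raw_writel(expmask, EXPMASK_REG);
  }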
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Future SH parts do not support any instruction but a nop in the rte delay
slot, so make the change for all offending parts. SH-5 is excluded from
this, and already has its own set of restrictions with regards to rte
delay slot handling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
S3 sleep requires special setup in tboot. However, the data
structures needed to do such setup are only available if
CONFIG_ACPI_SLEEP is enabled. Abstract them out as much as possible,
so we can have a single tboot_setup_sleep() which is either a proper
implementation or a stub that simply calls BUG().
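The resulting structure is along these lines (a sketch):

  #ifdef CONFIG_ACPI_SLEEP
  static int tboot_setup_sleep(void)
  {
          /* Proper implementation: hand tboot the memory regions it must
           * MAC across S3, based on the ACPI wakeup data. (Details omitted.) */
          return 0;
  }
  #else
  static int tboot_setup_sleep(void)
  {
          /* S3 was requested, but the kernel was built without ACPI sleep
           * support, so there is no way to do the required setup. */
          BUG();
          return -1;
  }
  #endif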
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Shane Wang <shane.wang@intel.com>
Cc: Joseph Cihula <joseph.cihula@intel.com>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly in to do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
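A sketch of the check (names assume the usual cpu_context()/NO_CONTEXT
MMU-context helpers and the void * calling convention introduced
earlier in this series):

  static void sh4_flush_cache_mm(void *arg)
  {
          struct mm_struct *mm = arg;

          /* Nothing to flush: this mm has never run on this CPU, so no
           * cache lines can be tagged with its (not yet assigned) ASID. */
          if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
                  return;

          /* ... existing flush logic ... */
  }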
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is taken care of by
the SH-5 code already today, and is otherwise unused on the SH-4 code.
This combines the two and allows us to kill off the SH-5 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inserts a ULONG_MAX entry at the end of the valid entries in the
stack trace buffer so the default code doesn't need to scan to the end of
available slots. This also makes the trace buffer termination behaviour
consistent with the other architectures.
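The termination convention matches what other architectures'
save_stack_trace() implementations do (a sketch):

  void save_stack_trace(struct stack_trace *trace)
  {
          /* ... walk the stack, filling trace->entries ... */

          /* Terminate the buffer so consumers can stop at ULONG_MAX
           * instead of scanning all max_entries slots. */
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = ULONG_MAX;
  }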
Signed-off-by: Paul Mundt <lethal@linux-sh.org>