Reading from the ROM is not a good idea, as it could disturb a flash
operation that is in progress.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The SH instruction set has several instructions which accept an 8-bit
immediate operand. For logical instructions this operand is zero extended;
for arithmetic instructions it is sign extended. After adding an
option to the assembler to check this, it was found that several pieces
of assembly code were assuming this behaviour, and in one case
getting it wrong.
So this patch explicitly sign extends any immediate operands, which makes
it obvious what is happening, and fixes the one case which got it wrong.
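As a plain-C illustration of the difference (not part of the patch; the
value is arbitrary):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint8_t imm = 0xF0;                     /* an 8-bit immediate */
                uint32_t zext = (uint32_t)imm;          /* logical ops: 0x000000F0 */
                int32_t  sext = (int32_t)(int8_t)imm;   /* arithmetic ops: 0xFFFFFFF0, i.e. -16 */

                printf("zero-extended: 0x%08x\n", zext);
                printf("sign-extended: 0x%08x\n", (uint32_t)sext);
                return 0;
        }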
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
So far kernel command line arguments could be passed in by a bootloader
or defined as CONFIG_CMDLINE, with the latter completely overwriting the
former. This change allows a developer to declare selected kernel
parameters in the kernel configuration (eg. a project-specific defconfig),
while retaining the possibility of passing others via the bootloader.
The obvious examples of the first type are MTD partition or
bigphysarea-like region definitions, while the "debug" option or network
configuration should be given by the bootloader or a JTAG boot script.
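A minimal sketch of the appending approach, assuming a helper along these
lines in the setup code (the name is hypothetical, not the patch's actual
symbol):

        static void __init append_config_cmdline(char *cmdline, size_t len)
        {
        #ifdef CONFIG_CMDLINE
                /* keep whatever the bootloader passed and add the
                 * config-time parameters on top of it */
                strlcat(cmdline, " ", len);
                strlcat(cmdline, CONFIG_CMDLINE, len);
        #endif
        }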
Signed-off-by: Pawel Moll <pawel.moll@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch triggers a reboot using the watchdog
timer instead of a double fault. Unlike the previous
method, this one actually works in 32-bit mode.
Reset should also be cleaner.
Signed-off-by: Jon Frosdick <jon.frosdick@st.com>
Signed-off-by: Carl Shaw <carl.shaw@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Save the VBR, allowing GDB to dump the full register set, but do not reload
it as soon as kgdb_handle_exception is invoked.
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The Synopsys PCI cell used in the later STMicro chips requires code to
be run in order to do IO cycles, rather than just memory mapping the IO
space. Rather than extending the existing SH infrastructure to allow
this, use the GENERIC_IOMAP implementation to save re-inventing the
wheel.
This set of changes allows SH to be built with GENERIC_IOMAP enabled;
it simply ifdefs out the functions that are provided by the GENERIC_IOMAP
implementation, and provides a few missing functions that are required.
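For context, a rough sketch of what a driver-side user of GENERIC_IOMAP
looks like; the register offsets are made up and this is not taken from
the patch:

        static int sketch_probe(struct pci_dev *pdev)
        {
                void __iomem *regs = pci_iomap(pdev, 0, 0);     /* BAR 0, full length */

                if (!regs)
                        return -ENOMEM;

                /* ioread/iowrite dispatch to MMIO or to the host's
                 * IO-cycle routines depending on the cookie, instead of
                 * assuming a memory-mapped IO space */
                iowrite8(0x01, regs + 0x04);
                (void)ioread8(regs + 0x00);

                pci_iounmap(pdev, regs);
                return 0;
        }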
Signed-off-by: David McKay <david.mckay@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
GCC does not issue unwind information for function epilogues.
Unfortunately we can catch a signal during an epilogue. The signal
handler writes the current context and signal return code onto the stack,
overwriting the previous contents. During unwinding, libgcc can try to
restore registers from the stack and end up restoring corrupted values.
This can lead to segmentation faults, misaligned accesses and SIGBUS faults.
For example, consider the following code:
mov.l r12,@-r15
mov.l r14,@-r15
sts.l pr,@-r15
mov r15,r14
<do stuff>
mov r14, r15
lds.l @r15+, pr
<<< SIGNAL HERE
mov.l @r15+, r14
mov.l @r15+, r12
rts
The unwinder is aware that pr was pushed to the stack in the prologue, so it
tries to restore it. Unfortunately it restores the last word of the signal
handler code placed on the stack by the kernel.
This patch tries to avoid the problem by adding a guard region on the
stack between where the function pushes data and where the signal handler
pushes its return code. We probably don't see this problem often because
exception handling unwinding in an epilogue only occurs due to a pthread
cancel signal. Also, the kernel signal stack handler alignment of 8 bytes
could sometimes hide the occurrence of this problem, as the stack may not
be trampled at the particular word required.
This is not guaranteed to always work. It relies on a frame pointer
existing for the function (so it can get the correct sp value), which is
not always the case on SH4.
Modifications will also be made to libgcc for the case where there is no
frame pointer.
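A sketch of the guard-region idea based on the usual SH get_sigframe()
shape; the guard size and exact placement are assumptions, not the
literal patch:

        #define SIGFRAME_GUARD  32      /* assumed gap below the interrupted sp */

        static inline void __user *get_sigframe(struct k_sigaction *ka,
                                                unsigned long sp, size_t frame_size)
        {
                if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && !sas_ss_flags(sp))
                        sp = current->sas_ss_sp + current->sas_ss_size;

                /* leave room so an interrupted epilogue's saved registers
                 * are not overwritten by the signal frame / trampoline */
                sp -= SIGFRAME_GUARD;

                return (void __user *)((sp - frame_size) & -8ul);
        }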
Signed-off-by: Carl Shaw <carl.shaw@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch fixes a few problems with the existing code in do_address_error():
a) the variable used to printk() the offending instruction wasn't
initialized correctly. This is a fix for bug 5727.
b) the behaviour for CONFIG_CPU_SH2A wasn't correct.
c) the 'ignore address error' behaviour didn't update the PC, causing an
infinite loop.
Signed-off-by: Andre Draszik <andre.draszik@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch brings the SH4 misaligned trap handler in line with what
happens on ARM:
Add a /proc/cpu/alignment which can be read to get alignment
trap statistics and written to influence the behaviour of the
alignment trap handling. The value to write is a bitfield, which
has the following meaning: 1 warn, 2 fixup, 4 signal.
In addition, we add a /proc/cpu/kernel_alignment to enable or
disable warnings when kernel code causes alignment errors.
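A sketch of how the bitfield might be represented in code (values as
described above; the symbol names are assumptions):

        #define UM_WARN         (1 << 0)        /* 1: warn about the unaligned access */
        #define UM_FIXUP        (1 << 1)        /* 2: quietly fix up the access */
        #define UM_SIGNAL       (1 << 2)        /* 4: send SIGBUS instead of fixing up */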
Signed-off-by: Andre Draszik <andre.draszik@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch makes sure we see messages about unaligned access fixups
every now and then. Otherwise badly programmed userspace applications
in particular would never be noticed...
Signed-off-by: Andre Draszik <andre.draszik@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
setup_arch() unconditionally sets the preferred console to ttyS.
This breaks the use of 3270 devices as the console. Provide a new
function to set the default preferred console for s390. The preferred
console depends on the conmode parameter that is used to switch
between 3270 and 3215 terminal/console mode.
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add another option when selecting CPU family so the kernel can be
optimized for Intel Atom CPUs. If GCC supports tuning options for
Intel Atom they will be used.
Signed-off-by: Tobias Doerffel <tobias.doerffel@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
LKML-Reference: <1251018457-19157-1-git-send-email-tobias.doerffel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The Runtime PM patch for the UIO driver implements coarse-grained
dynamic power management for UIO devices. With that patch in
place we can get rid of the static clock configuration, which
in turn makes it possible for cpuidle to enter deeper sleep.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
With the Runtime PM driver changes in place, we must have
Runtime PM support in place. Otherwise there is no way to
enable clocks to the Runtime PM enabled hardware blocks.
This patch makes Runtime PM mandatory on SuperH Mobile.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The runtime PM for SH-Mobile code had platform_bus_notify() as __devinit,
which is rather bogus. Kill off the annotation, which subsequently
silences the section mismatch warnings.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch is V3 of the SuperH Mobile Runtime PM platform bus
implementation, matching Rafael's Runtime PM v16.
The code gets invoked from the SuperH specific Runtime PM
platform bus functions that override the weak symbols for:
- platform_pm_runtime_suspend()
- platform_pm_runtime_resume()
- platform_pm_runtime_idle()
This Runtime PM implementation performs two levels of power
management. At the time of platform bus runtime suspend the
clock to the device is stopped instantly. Later on, if all
devices within the power domain have their clocks stopped,
then the device driver ->runtime_suspend() callbacks are
used to save hardware register state for each device.
Device driver ->runtime_suspend() calls are scheduled from
cpuidle context using platform_pm_runtime_suspend_idle().
When all devices have been fully suspended the processor
is allowed to enter deep sleep from cpuidle.
The runtime resume operation turns on clocks and also
restores registers if needed. It is worth noting that the
devices start in a suspended state and the device driver
is responsible for calling runtime resume before accessing
the actual hardware.
In this particular platform bus implementation runtime
resume is not allowed from interrupt context. Runtime
suspend is however allowed from interrupt context as
long as the synchronous functions are avoided.
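A much-simplified sketch of the first level described above (instant
clock gating on runtime suspend); the deferred driver callbacks and
power-domain bookkeeping are omitted, and the use of the clk API here is
an assumption:

        int platform_pm_runtime_suspend(struct device *dev)
        {
                struct clk *clk = clk_get(dev, NULL);

                if (!IS_ERR(clk)) {
                        clk_disable(clk);       /* level one: stop the clock instantly */
                        clk_put(clk);
                }

                /* level two - the driver's ->runtime_suspend() saving
                 * register state - is scheduled later from cpuidle
                 * context, once every device in the power domain has
                 * its clock stopped */
                return 0;
        }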
[ updated for v17 -- PFM. ]
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Fix the following build problem on powerpc:
arch/powerpc/kernel/time.c: In function 'read_persistent_clock':
arch/powerpc/kernel/time.c:788: error: 'return' with a value, in function returning void
arch/powerpc/kernel/time.c:791: error: 'return' with a value, in function returning void
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: dwalker@fifo99.com
Cc: johnstul@us.ibm.com
LKML-Reference: <20090822222313.74b9619c@skybase>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The CIE and FDE structs are big enough and accessed regularly enough in
certain configurations to make cacheline alignment useful.
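The change amounts to tagging the structure definitions, roughly like
this (fields abbreviated and partly assumed, shown only to illustrate the
annotation):

        struct dwarf_cie {
                unsigned long cie_pointer;
                unsigned char version;
                /* ... remaining CFI parsing state ... */
        } ____cacheline_aligned;

        struct dwarf_fde {
                unsigned long cie_pointer;
                struct dwarf_cie *cie;
                /* ... instruction stream pointers, list linkage ... */
        } ____cacheline_aligned;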
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
mtrr_aps_delayed_init was declared u32 and made global, but it only
ever takes boolean values and is only ever used in
arch/x86/kernel/cpu/mtrr/main.c. Declare it "static bool" and remove
external references.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
SDM Vol 3a section titled "MTRR considerations in MP systems" specifies
the need for synchronizing the logical cpus while initializing/updating
MTRR.
Currently the Linux kernel does the synchronization of all cpus only when
a single MTRR register is programmed/updated. During an AP online
(during boot/cpu-online/resume), where we initialize all the MTRR/PAT registers,
we don't follow this synchronization algorithm.
This can lead to scenarios where, during a dynamic cpu online, that logical cpu
is initializing MTRR/PAT with cache disabled (cr0.cd=1) while the other logical
HT sibling continues to run (also with cache disabled because of cr0.cd=1
on its sibling).
Starting from Westmere, VMX transitions with cr0.cd=1 don't work properly
(because of some VMX performance optimizations) and the above scenario
(with one logical cpu doing VMX activity and another logical cpu coming online)
can result in a system crash.
Fix the MTRR initialization by doing a rendezvous of all the cpus. During
boot and resume, we delay the MTRR/PAT init for APs till all the
logical cpus come online; the rendezvous process at the end of AP bringup
then initializes the MTRR/PAT for all APs.
For a dynamic single cpu online, we synchronize all the logical cpus and
do the MTRR/PAT init on the AP that is coming online.
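In outline, the boot/resume flow described above looks something like the
following sketch (the AP bring-up step is a hypothetical placeholder):

        void __init smp_bringup_sketch(void)
        {
                set_mtrr_aps_delayed_init();    /* defer per-AP MTRR/PAT init */
                bring_up_all_aps();             /* hypothetical: online every AP */
                mtrr_aps_init();                /* one rendezvous programs MTRR/PAT on all APs */
        }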
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Some of the NOP tables aren't used on 64-bit, quite a lot of code and
data is needed post-init for module loading only, and a couple of
functions aren't used outside that file (i.e. they can be static and
don't need to be exported).
The change to __INITDATA/__INITRODATA is needed to avoid an assembler
warning.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4A8BC8A00200007800010823@vpn.id2.novell.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
sh64 does not yet support GENERIC_BUG, but still wants unwinder support.
Alias UNWINDER_BUG and UNWINDER_BUG_ON to their BUG counterparts until
the conversion to GENERIC_BUG is completed.
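A sketch of what the aliasing looks like, assuming it lives in the sh
asm/bug.h (exact guards may differ):

        #ifndef CONFIG_GENERIC_BUG
        #define UNWINDER_BUG()          BUG()
        #define UNWINDER_BUG_ON(x)      BUG_ON(x)
        #endif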
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This simplifies the unwinder trap handling, dropping the use of the
special trapa vector and simply piggybacking on top of the BUG support. A
new BUGFLAG_UNWINDER is added for flagging the unwinder fault, before
continuing on with regular BUG dispatch.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
After talking with some application writers who want very fast, but not
fine-grained, timestamps, I decided to try to implement new clock_ids
for clock_gettime(): CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE,
which return the time at the last tick. This is very fast as we don't
have to access any hardware (which can be very painful if you're using
something like the acpi_pm clocksource), and we can even use the vdso
clock_gettime() method to avoid the syscall. The only trade-off is that
you only get low-res, tick-grained time resolution.
This isn't a new idea; I know Ingo has a patch in the -rt tree that made
the vsyscall gettimeofday() return coarse-grained time when the
vsyscall64 sysctl was set to 2. However, that affects all applications
on a system.
With this method, applications can choose the proper speed/granularity
trade-off for themselves.
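For example, a userspace caller that is happy with tick-granularity time
would use the new ids like this:

        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
                struct timespec ts;

                /* no hardware clocksource access, resolution is one tick */
                clock_gettime(CLOCK_REALTIME_COARSE, &ts);
                printf("coarse realtime:  %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

                clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
                printf("coarse monotonic: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
                return 0;
        }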
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: nikolag@ca.ibm.com
Cc: Darren Hart <dvhltc@us.ibm.com>
Cc: arjan@infradead.org
Cc: jonathan@jonmasters.org
LKML-Reference: <1250734414.6897.5.camel@localhost.localdomain>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This basically reverts commit 1a0c009ac (x86: unregister PIT
clocksource when PIT is disabled) because the problem which that patch
tried to address has been solved by commit 3f68535ada
(clocksource: sanity check sysfs clocksource changes).
The problem addressed by the original patch is that the PIT could be
selected as clocksource after the system switched the PIT off or set
the PIT into one-shot mode, which would result in complete timekeeping
wreckage.
Now with the sysfs sanity check in place the PIT cannot be selected again
when the system is in one-shot mode. The system will not switch to
one-shot mode as long as the PIT is installed, because the PIT is not
suitable for one-shot operation.
The shutdown case which happens when the lapic timer is installed is
covered by the fact that init_pit_clocksource() is called after the
lapic timer takes over, and then does not install the PIT clocksource
at all.
We should have done the sanity checks back then, but ...
This also solves the locking problem which was reported vs. the
clocksource rework.
LKML-Reference: <new-submission>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: john stultz <johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
If the oprofile code is built as a module, unwind_stack() as used by the
oprofile backtrace code is not available, causing build breakage.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
As noted in 83d349f35e ("x86: don't send
an IPI to the empty set of CPU's"), some APICs will be very unhappy
with an empty destination mask. That commit added a WARN_ON() for that
case, and avoided the resulting problem, but didn't fix the underlying
reason for why those empty mask cases happened.
This fixes that by checking whether the result of 'cpumask_andnot()' of
the current CPU actually leaves any other CPUs in the set of CPUs to be
sent a TLB flush, and not calling down to the IPI code if the mask is
empty.
The reason this started happening at all is that we started passing just
the CPU mask pointers around in commit 4595f9620 ("x86: change
flush_tlb_others to take a const struct cpumask"), and when we did that,
the cpumask was no longer thread-local.
Before that commit, flush_tlb_mm() used to create its own copy of
'mm->cpu_vm_mask' and pass that copy down to the low-level flush
routines after having tested that it was not empty. But after changing
it to just pass down the CPU mask pointer, the lower-level TLB flush
routines now get a pointer to that 'mm->cpu_vm_mask', and that
could still change - and become empty - after the test due to other
CPUs having flushed their own TLBs.
See
http://bugzilla.kernel.org/show_bug.cgi?id=13933
for details.
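The check amounts to something like the following simplified sketch (not
the literal diff; locals and structure are assumed):

        static void flush_tlb_mm_sketch(struct mm_struct *mm)
        {
                struct cpumask others;

                cpumask_andnot(&others, mm_cpumask(mm),
                               cpumask_of(smp_processor_id()));

                /* only go down the IPI path if some other CPU still
                 * needs the flush */
                if (!cpumask_empty(&others))
                        flush_tlb_others(&others, mm, TLB_FLUSH_ALL);
        }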
Tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The default_send_IPI_mask_logical() function uses the "flat" APIC mode
to send an IPI to a set of CPUs at once, but if that set happens to be
empty, some older local APICs will apparently be rather unhappy. So
just warn if a caller gives us an empty mask, and ignore it.
This fixes a regression in 2.6.30.x, due to commit 4595f9620 ("x86:
change flush_tlb_others to take a const struct cpumask"), documented
here:
http://bugzilla.kernel.org/show_bug.cgi?id=13933
which causes a silent lock-up. It only seems to happen on PPro, P2, P3
and Athlon XP cores. Most developers sadly (or not so sadly, if you're
a developer..) have more modern CPUs. Also, on x86-64 we don't use the
flat APIC mode, so it would never trigger there even if the APIC didn't
like sending an empty IPI mask.
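The essence of the change is an early bail-out at the top of
default_send_IPI_mask_logical(), roughly as sketched here (the
surrounding send path is omitted):

        unsigned long mask = cpumask_bits(cpumask)[0];  /* flat mode: mask fits in one word */

        /* nothing to send: warn once and return instead of poking the APIC */
        if (WARN_ONCE(!mask, "empty IPI mask"))
                return;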
Reported-by: Pavel Vilim <wylda@volny.cz>
Reported-and-tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
Reported-and-tested-by: Martin Rogge <marogge@onlinehome.de>
Cc: Mike Travis <travis@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This line looks suspicious, because if this is true, then the
'flags' parameter of function reserve_bootmem_generic() will be
unused when !CONFIG_NUMA. I don't think this is what we want.
Signed-off-by: WANG Cong <amwang@redhat.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: akpm@linux-foundation.org
LKML-Reference: <20090821083709.5098.52505.sendpatchset@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The system will die if the kernel is booted with the "reservetop"
parameter. In the present code, the "reservetop" parameter is parsed
after early_ioremap_init(), and some functions still use
early_ioremap() after that.
The problem is that the "reservetop" parameter can modify
'FIXADDR_TOP'; the virtual address returned by early_ioremap()
is then based on the old 'FIXADDR_TOP', but the page mapping is based
on the new 'FIXADDR_TOP'. This causes a page fault, and since the IDT
is not prepared yet, the system dies.
So, this patch puts parse_early_param() in front of
early_ioremap_init().
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: yinghai@kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <4A8D402F.4080805@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Allow a DWARF register to have an undefined value. When applied to the
DWARF return address register this lets us label a function as
having no direct caller, e.g. kernel_thread_helper().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
The 'end' member of struct dwarf_fde denotes one byte past the end of
the CFA instruction stream for an FDE. The value of 'end' was being
calculated incorrectly; it was being set too high. This resulted in
dwarf_cfa_execute_insns() interpreting data past the end of the valid
instructions, thus causing all sorts of weird crashes.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
When CONFIG_DWARF_UNWINDER is enabled, set up r14 in handle_interrupt so
that we can figure out what function was running when we were
interrupted.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
We can't assume that the unwinder has faulted just because the unwinder
code is entered while the unwinder is already running. Clearly two kernel
threads can invoke the unwinder at the same time and may be running
simultaneously.
The previous approach used BUG() and BUG_ON() in the unwinder code to
detect whether the unwinder was incapable of unwinding the stack, and
that the next available unwinder should be used instead. A better
approach is to explicitly invoke a trap handler to switch unwinders when
the current unwinder cannot continue.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
The handling of DW_CFA_val_offset ops was incorrectly using the
DWARF_REG_OFFSET flag, but the register's value cannot be calculated
using the DWARF_REG_OFFSET method. Create a new flag to indicate that a
different method must be used to calculate the register's value even
though there is no implementation for DWARF_VAL_OFFSET yet; it's mainly
just a placeholder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Plug a memory leak in dwarf_unwinder_dump() where we didn't free the
memory that we had previously allocated for the DWARF frames and DWARF
registers.
Now is also an opportune time to implement our own mempool and kmem
cache. It's a good idea to have a certain number of frame and register
objects in reserve at all times, so that we are guaranteed to have our
allocation satisfied even when memory is scarce. Since we have pools to
allocate from, we can implement the registers for each frame as a linked
list as opposed to a sparsely populated array. Whilst it's true that the
lookup time for a linked list is larger than for arrays, there are
usually only a maximum of 8 registers per frame, so the overhead isn't
much of a concern.
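A condensed sketch of the reserve-pool setup (the reserve size and init
hook are assumptions):

        static struct kmem_cache *dwarf_frame_cachep;
        static mempool_t *dwarf_frame_pool;

        static int __init dwarf_unwinder_pools_init(void)
        {
                dwarf_frame_cachep = kmem_cache_create("dwarf_frames",
                                                       sizeof(struct dwarf_frame),
                                                       0, SLAB_PANIC, NULL);

                /* keep a handful of frames in reserve so the unwinder can
                 * still make progress when memory is scarce */
                dwarf_frame_pool = mempool_create_slab_pool(8, dwarf_frame_cachep);

                return dwarf_frame_pool ? 0 : -ENOMEM;
        }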
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Add core support for the range of S3C24XX Simtec boards with TLV320AIC23
CODECs on them. Since there are also boards with similar IIS routing, the
AMP and configuration code is placed in a core file for re-use with
other CODEC bindings.
Signed-off-by: Ben Dooks <ben@simtec.co.uk>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
on_each_cpu() takes care of IRQ and preempt handling, so the localized
handling in each of the called functions can be killed off.
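For reference, the pattern this enables looks roughly like this (the
callback name is hypothetical):

        static void cache_flush_one_cpu(void *unused)
        {
                /* per-CPU flush work; IRQ/preempt state is handled by on_each_cpu() */
        }

        static void cache_flush_all_cpus(void)
        {
                on_each_cpu(cache_flush_one_cpu, NULL, 1);      /* 1: wait for all CPUs */
        }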
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All CPU-specific overloads are done at runtime now, so this common header
can go away and simply be folded back in to asm/ version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Update the kfr2r09 defconfig with support for LCDC and USB gadget.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add a romImage defconfig for the kfr2r09 board. This defconfig
should be used to build the kernel-based boot loader.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>