Fix a typo in the chart that maps kernel priorities to user-land priorities.
* About sched_setscheduler(2)
Processes scheduled under SCHED_FIFO or SCHED_RR
can have a (user-space) static priority in the range 1 to 99.
(reference: http://www.kernel.org/doc/man-pages/online/pages/man2/sched_setscheduler.2.html)
* From: Steven Rostedt
0 to 98 - maps to RT tasks 99 to 1 (SCHED_RR or SCHED_FIFO)
99 - maps to internal kernel threads that want to be lower than RT tasks
but higher than SCHED_OTHER tasks. Although I'm not sure if any
kernel thread actually uses this. I'm not even sure how this can be
set, because the internal sched_setscheduler function does not allow
for it.
100 to 139 - maps to nice levels -20 to 19. These are not set via
sched_setscheduler(), but via the nice system call.
140 - reserved for idle tasks.
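For reference, a minimal user-space sketch of requesting a static priority
in the 1..99 range (illustrative only, not part of this patch; it needs
root or CAP_SYS_NICE to succeed):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            /* user-space static priority 50; per the chart above the kernel
             * internally maps this to priority 99 - 50 = 49 */
            struct sched_param sp = { .sched_priority = 50 };

            if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
                    perror("sched_setscheduler");
                    return 1;
            }
            return 0;
    }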
Signed-off-by: GeunSik Lim <geunsik.lim@samsung.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This fixes a nasty crash and highlights a bug: we were
freeing failed-fork() counters incorrectly.
(The fix for that will come separately.)
[ Impact: fix crashes/lockups with inherited counters ]
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter noticed that we are sometimes reading cpuctx->task_ctx with
interrupts enabled.
Noticed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
DMA-mapping.txt says that the debug_dma_sync_sg family must be called with
the _same_ nents you passed into the dma_map_sg call; it should _NOT_ be
the 'count' value _returned_ from the dma_map_sg call.
debug_dma_sync_sg_for_cpu and debug_dma_sync_sg_for_device can't
handle this properly; they need to use the sg_mapped_ents in struct
dma_debug_entry as debug_dma_unmap_sg() does.
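A minimal sketch of that convention (function and variable names here are
illustrative, not from the patch):

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/scatterlist.h>

    static int rx_map_and_sync(struct device *dev, struct scatterlist *sgl, int nents)
    {
            /* 'count' is what dma_map_sg() returned after possible merging */
            int count = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);

            if (!count)
                    return -ENOMEM;

            /* ... the device DMAs into the buffers ... */

            /* sync with the original nents, NOT with 'count' */
            dma_sync_sg_for_cpu(dev, sgl, nents, DMA_FROM_DEVICE);
            return count;
    }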
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
debug_dma_map_sg() and debug_dma_unmap_sg() use 'length' in struct
scatterlist, while debug_dma_sync_sg_for_cpu() and
debug_dma_sync_sg_for_device() use 'dma_length'. This causes bogus
warnings on some IOMMU implementations, since the two values are not the
same: 'length' does not represent the DMA length.
We always need to use the sg_dma_len() accessor to get the DMA length of a
scatterlist entry.
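For illustration, a hedged sketch of walking mapped entries with the
accessor (the function name is hypothetical):

    #include <linux/printk.h>
    #include <linux/scatterlist.h>

    static void dump_mapped_lengths(struct scatterlist *sgl, int mapped_ents)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sgl, sg, mapped_ents, i) {
                    /* sg_dma_len(sg) is the DMA length; sg->length may differ
                     * once an IOMMU has merged entries */
                    pr_debug("entry %d: dma_len=%u len=%u\n",
                             i, sg_dma_len(sg), sg->length);
            }
    }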
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Architectures might not have dma_address in struct scatterlist (PARISC
doesn't). Directly accessing dma_address in struct scatterlist is wrong;
we need to use the sg_dma_address() accessor instead.
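A tiny sketch of the portable form (purely illustrative):

    #include <linux/scatterlist.h>

    static dma_addr_t entry_bus_addr(struct scatterlist *sg)
    {
            /* not sg->dma_address: that field may not exist on all architectures */
            return sg_dma_address(sg);
    }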
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
http://bugzilla.kernel.org/show_bug.cgi?id=13312
at76_dwork_hw_scan holds a mutex while calling ieee80211_scan_completed,
which then calls at76_config which needs the same mutex. This reworks
the ordering to not hold the lock while calling ieee80211_scan_completed.
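The shape of the fix, as a hedged sketch (field names such as priv->mtx
and priv->hw approximate the driver's internals and are not taken verbatim
from the patch):

    #include <linux/mutex.h>
    #include <net/mac80211.h>

    static void at76_scan_work_done(struct at76_priv *priv)
    {
            /* do our own bookkeeping under the driver mutex ... */
            mutex_lock(&priv->mtx);
            /* ... update internal scan state ... */
            mutex_unlock(&priv->mtx);

            /* ... but complete the scan outside of it: this may call back
             * into at76_config(), which takes priv->mtx again */
            ieee80211_scan_completed(priv->hw, false);
    }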
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Erase errors such as:
"Newly-erased block contained word 0xa4ef223e at offset 0x0296a014"
and failures to write the clean marker
move the offending erase block to the erasing list before calling
jffs2_erase_failed(). This is bad, as jffs2_erase_failed() will
also move the block to the bad_list, but is now moving the
wrong block, causing FS corruption.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
The following patch fixes:
- re-initialization of host->col_addr, which is used as a byte index
between successive READID flash commands.
- compile error when CONFIG_PM is enabled
- pass on the error code from clk_get()
- return -ENOMEM in case of failed ioremap()
- pass on the return value of platform_driver_probe() directly
- remove excessive printk
- allow command line partition table parsing with the mxc_nand name.
The cmd_line parsing is done via the <mtd-id> name, which by default
differs from mxc_nand and looks like "NAND 256MiB 1,8V 8-bit"
(see the example below).
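For example, a boot-time partition table could then be given against the
driver name (sizes and partition names below are purely illustrative):

    mtdparts=mxc_nand:512k(bootloader),3m(kernel),-(rootfs)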
Signed-off-by: Vladimir Barinov <vbarinov@embeddedalley.com>
Signed-off-by: Lothar Wassmann <LW@KARO-electronics.de>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Peter Zijlstra pointed out that under some circumstances, we can take
the mutex in a context or a counter and then swap that context or
counter to another task, potentially leading to lock order inversions
or the mutexes not protecting what they are supposed to protect.
This fixes the problem by making sure that we never take a mutex in a
context or counter which could get swapped to another task. Most of
the cases where we take a mutex are on a top-level counter or context,
i.e. a counter which has an fd associated with it or a context that
contains such a counter. This adds WARN_ON_ONCE statements to verify
that.
The two cases where we need to take the mutex on a context that is a
clone of another are in perf_counter_exit_task and
perf_counter_init_task. The perf_counter_exit_task case is solved by
uncloning the context before starting to remove the counters from it.
The perf_counter_init_task is a little trickier; we temporarily
disable context swapping for the parent (forking) task by setting its
ctx->parent_gen to the all-1s value after locking the context, if it
is a cloned context, and restore the ctx->parent_gen value at the end
if the context didn't get uncloned in the meantime.
This also moves the increment of the context generation count to be
within the same critical section, protected by the context mutex, that
adds the new counter to the context. That way, taking the mutex is
sufficient to ensure that both the counter list and the generation
count are stable.
[ Impact: fix hangs, races with inherited and PID counters ]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <18975.31580.520676.619896@drongo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Fix the build for CONFIG_NET_POLL_CONTROLLER that I broke with
217cbfa856 ("mac8390: fix regression
caused during net_device_ops conversion").
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Do not call t3_link_fault() under spinlock, as it calls msleep().
Besides, only the access to pi->link_fault needs to be serialized.
Also initialize local variables before checking the link status;
the link state fields might otherwise end up containing garbage.
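A hedged sketch of the resulting pattern (structure and field names are
approximations of the cxgb3 driver, not copied from the patch):

    static void check_link_fault(struct adapter *adapter, struct port_info *pi,
                                 int port_id)
    {
            int link_fault;

            /* only the pi->link_fault access needs to be serialized */
            spin_lock_irq(&adapter->work_lock);
            link_fault = pi->link_fault;
            pi->link_fault = 0;
            spin_unlock_irq(&adapter->work_lock);

            /* t3_link_fault() calls msleep(), so it must not run under the lock */
            if (link_fault)
                    t3_link_fault(adapter, port_id);
    }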
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 5e68b772e6
cxgb3: map entire Rx page, feed map+offset to Rx ring.
introduced a regression on platforms defining DECLARE_PCI_UNMAP_ADDR()
and related macros as no-ops.
Rx descriptors are fed with a page buffer bus address plus a page chunk offset.
The page buffer bus address is set and retrieved through
pci_unmap_addr_set() and pci_unmap_addr().
These functions are no-ops on x86 (when CONFIG_DMA_API_DEBUG is not set),
so the HW ends up with a bogus bus address.
This patch saves the page buffer bus address for all platforms.
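The gist of the change, as a hedged sketch (struct and field names are
illustrative rather than the driver's own):

    #include <linux/kernel.h>
    #include <linux/pci.h>

    struct rx_sw_desc {
            struct page *page;
            /* stored unconditionally, unlike DECLARE_PCI_UNMAP_ADDR() state,
             * which compiles away on platforms where those macros are no-ops */
            dma_addr_t dma_addr;
    };

    static void refill_rx_desc(struct pci_dev *pdev, struct rx_sw_desc *sd,
                               __be64 *hw_addr, unsigned int offset)
    {
            sd->dma_addr = pci_map_page(pdev, sd->page, 0, PAGE_SIZE,
                                        PCI_DMA_FROMDEVICE);
            *hw_addr = cpu_to_be64(sd->dma_addr + offset);
    }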
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We introduce an extra pass so that the print-out can rely on
already-read counters.
[ Impact: cleanup ]
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Always use NMI for the performance-monitoring interrupt, as there could be
races if we switch between irq and NMI mode frequently.
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
LKML-Reference: <20090529052835.GA13657@ywang-moblin2.bj.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Testing the i7300_idle driver on i5000-series hardware required
an edit to i7300_idle.h to "#define SUPPORT_I5000 1" and a re-build
of both i7300_idle and ioat_dma.
Replace that build-time scheme with a load-time module parameter:
"7300_idle.forceload=1" to make it easier to test the driver
on hardware that, while not officially validated, works fine
and is much more commonly available.
By default (no modparam) the driver will continue to load
only on the i7300.
Note that ioat_dma runs a copy of i7300_idle's probe routine
so that it knows to reserve an IOAT channel for i7300_idle.
This change makes ioat_dma do that always on the i5000,
just like it does on the i7300.
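A hedged sketch of the parameter plumbing (the exact declaration site and
wording are assumptions, not taken from the patch); at load time it would
then be enabled with something like forceload=1:

    #include <linux/moduleparam.h>

    static int forceload;
    module_param(forceload, int, 0444);
    MODULE_PARM_DESC(forceload, "Also load on unvalidated i5000-series hardware");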
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Andrew Henroid <andrew.d.henroid@intel.com>
Now both perf top and report use the same routines.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090528175541.GG4747@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Will be used by perf top.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090528175526.GF4747@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Now one just has to use dso__load_kernel(), optionally passing a vmlinux
filename.
This will make things easier for perf top, which will want to pass a
callback to filter some symbols.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When creating a dso instance, allow asking that all symbols in this dso
have a private area just before the symbol.
perf top will use this for its counters, etc.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090528175513.GD4747@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Will be used by perf top as well.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090528175504.GC4747@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We also fix a problem with cleaning up properly when initializing
drivers and devices, so checks like this will work successfully.
Portions of the patch are by Linus, Greg, and Ingo.
Reported-by: Ozan Çağlayan <ozan@pardus.org.tr>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
We don't need a kernel thread per CPU for this application.
Acked-by: Alex Chiang <achiang@hp.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This reverts commit 26e1287594.
A larger patch (f7e7aa585) a few days after this one added the same line
to the Makefile, but in a different place. While it'd be more correct to
revert that one, it's easier to revert this one because this is a
one-liner.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: linux-usb@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch (as1244) fixes a crash in usb-serial that occurs when a
sub-driver returns a positive value from its attach method, indicating
that new firmware was loaded and the device will disconnect and
reconnect. The usb-serial core then skips the step of registering the
port devices; when the disconnect occurs, the attempt to unregister
the ports fails dramatically.
This problem shows up with Keyspan devices and it might affect others
as well.
When the attach method returns a positive value, the patch sets
num_ports to 0. This tells usb_serial_disconnect() not to try
unregistering any of the ports; instead they are cleaned up by
destroy_serial().
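A simplified sketch of the probe-path handling described above (not the
verbatim usb-serial code):

    #include <linux/usb/serial.h>

    static int handle_attach(struct usb_serial *serial, struct usb_serial_driver *type)
    {
            int retval = type->attach ? type->attach(serial) : 0;

            if (retval < 0)
                    return retval;          /* genuine attach failure */
            if (retval > 0) {
                    /* firmware loaded; the device will disconnect and reconnect.
                     * No ports were registered, so keep usb_serial_disconnect()
                     * from unregistering them -- destroy_serial() cleans up. */
                    serial->num_ports = 0;
            }
            return 0;
    }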
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The option driver (and presumably others) allocates several URBs when it
opens and tries to free them when it closes. The isp1760_urb_dequeue
function gets called, but the packet being dequeued is not necessarily at
the front of one of the 32 queues. If not, the isp1760_urb_done function doesn't
get called for the URB and the process trying to free it hangs forever on a
wait_queue. This patch does two things. If the URB being dequeued has others
queued behind it, it re-queues them. And it searches the queues looking for
the URB being dequeued rather than just looking at the one at the front of
the queue.
[bigeasy@linutronix] whitespace fixes, reformatting
Cc: stable <stable@kernel.org>
Signed-off-by: Warren Free <wfree@ipmn.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch adds another quirky Conexant USB Modem Clone to usb cdc-acm.c
Signed-off-by: Xiao Kaijian <xiaokj@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This ensures that all fields are properly initialized.
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
usbtest #14 was failing with "udc: ep0: TXCOMP: Invalid endpoint state 2, halting endpoint..."
This occurred since ep0 is bidirectional and ep->is_in is not valid (ep->state must always be used instead).
Signed-off-by: Martin Fuzzey <mfuzzey@gmail.com>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
(original patch from Catalin Marinas <catalin.marinas@arm.com>)
The cmpxchg and cmpxchg64 functions can be implemented using the
LDREX*/STREX* instructions. Since operand lengths other than 32 bits are
required, the full implementations are only available if the ARMv6K
extensions are present (for the LDREXB, LDREXH and LDREXD instructions).
For ARMv6, only 32-bit cmpxchg is available.
Mathieu:
Make cmpxchg_local always available, with the best implementation for all type sizes (1, 2, 4 bytes).
Make cmpxchg64_local always available.
Use the "Ir" constraint for the "old" operand, as atomic.h's atomic_cmpxchg does.
Changes since v3:
- Added "memory" clobbers (thanks to Nicolas Pitre)
- Removed __asmeq(); it is only needed for old compilers, which are very unlikely on ARMv6+.
Note: ARMv7-M should eventually be #ifdef'ed out of cmpxchg64, but it is not
currently supported by the Linux kernel.
Put back ARM < v6 cmpxchg support.
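For the 32-bit case the LDREX/STREX-based variant boils down to a loop of
roughly the following shape (a sketch; the real header also handles the
other sizes and the pre-v6K fallbacks):

    static inline unsigned long __cmpxchg32(volatile void *ptr,
                                            unsigned long old, unsigned long new)
    {
            unsigned long oldval, res;

            do {
                    asm volatile(
                    "ldrex   %1, [%2]\n"
                    "mov     %0, #0\n"
                    "teq     %1, %3\n"
                    "strexeq %0, %4, [%2]\n"
                            : "=&r" (res), "=&r" (oldval)
                            : "r" (ptr), "Ir" (old), "r" (new)
                            : "memory", "cc");
            } while (res);

            return oldval;
    }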
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Mathieu Desnoyers pointed out that the ARM barriers were lacking:
- cmpxchg, xchg and atomic add return need memory barriers on
architectures which can change the order in which memory
reads/writes are seen between CPUs, which seems to include recent
ARM architectures. Those barriers are currently missing on ARM.
- test_and_xxx_bit were missing SMP barriers.
So put these barriers in. Provide separate atomic_add/atomic_sub
operations which do not require barriers.
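The resulting pattern for the value-returning variants looks roughly like
this (a sketch modelled on ARM's atomic_add_return; the barrier-free
atomic_add simply omits the smp_mb() calls):

    static inline int atomic_add_return(int i, atomic_t *v)
    {
            unsigned long tmp;
            int result;

            smp_mb();       /* order earlier accesses before the update */

            __asm__ __volatile__("@ atomic_add_return\n"
    "1:     ldrex   %0, [%2]\n"
    "       add     %0, %0, %3\n"
    "       strex   %1, %0, [%2]\n"
    "       teq     %1, #0\n"
    "       bne     1b"
            : "=&r" (result), "=&r" (tmp)
            : "r" (&v->counter), "Ir" (i)
            : "cc");

            smp_mb();       /* make the update visible before later accesses */

            return result;
    }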
Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Sometimes devices send us their responses in time, but due to
unfortunate scheduling decisions the receiving thread does not
get scheduled until much later and we erroneously decide that the
device timed out. Work around this problem by checking whether we
received the data we needed instead of checking the timeout
condition.
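A hedged sketch of the pattern (the device structure and field names here
are hypothetical):

    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>
    #include <linux/wait.h>

    struct demo_dev {
            wait_queue_head_t wait;
            bool response_received;
    };

    static int demo_wait_for_response(struct demo_dev *dev)
    {
            /* wait, then trust the data rather than the clock: even if the
             * timeout elapsed, the response may simply have been seen late */
            wait_event_timeout(dev->wait, dev->response_received,
                               msecs_to_jiffies(500));

            if (!dev->response_received)
                    return -ETIMEDOUT;

            return 0;
    }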
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Thus spake Christoph:
"But this whole set_cifs_acl function is a real mess anyway and needs
some splitting up."
This change also makes it possible to call acl_to_uid_mode() with a
NULL inode pointer. That (or something close to it) will eventually be
necessary when cifs_get_inode_info is reorganized.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shirish Pargaonkar <shirishp@us.ibm.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
This will test the automatic aperture enlargement code. This is
important because only very few devices will ever trigger this code
path. So force it under CONFIG_IOMMU_STRESS.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Disabling the round-robin allocator results in reusing the same
dma-addresses again very quickly. This is a good test of whether the
IOTLB flushing is working correctly.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This patch makes sure no reserved addresses are allocated in a dma_ops
domain when the aperture is increased dynamically.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>