* Use task->data_phase in do_rw_taskfile() to decide what to do.
* task->prehandler is only used by TASKFILE[_MULTI]_OUT so just
use pre_task_out_intr() directly and remove no longer needed
'prehandler' field from ide_task_t.
* Remove no longer needed ide_pre_handler_t type.
There should be no functionality changes caused by this patch.
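As a hedged illustration of the resulting dispatch (simplified stand-in types, not the real ide_task_t), selecting the handler from ->data_phase alone looks roughly like this:

/* Simplified stand-ins for the kernel types; illustrative only. */
enum demo_data_phase {
        DEMO_TASKFILE_NO_DATA,
        DEMO_TASKFILE_OUT,
        DEMO_TASKFILE_MULTI_OUT,
        DEMO_TASKFILE_IN,
        DEMO_TASKFILE_MULTI_IN,
};

struct demo_task {
        enum demo_data_phase data_phase;        /* replaces ->prehandler as the selector */
};

static const char *demo_select_handler(const struct demo_task *task)
{
        switch (task->data_phase) {
        case DEMO_TASKFILE_OUT:
        case DEMO_TASKFILE_MULTI_OUT:
                return "pre_task_out_intr";     /* PIO-out path, used directly now */
        case DEMO_TASKFILE_IN:
        case DEMO_TASKFILE_MULTI_IN:
                return "task_in_intr";          /* PIO-in path */
        default:
                return "task_no_data_intr";     /* non-data commands */
        }
}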
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Based on the earlier work by Tejun Heo.
The task->data_phase == TASKFILE_MULTI_{IN,OUT} vs drive->mult_count == 0
check is also needed for ide_taskfile_ioctl() requests that don't have
the IDE_TFLAG_FLAGGED taskfile flag set.
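A hedged sketch of that sanity check (stand-in types; the in-tree code lives in the taskfile submission path):

/* Illustrative only: minimal stand-ins for the structures involved. */
struct demo_chk_task { int data_phase; };
struct demo_chk_drive { unsigned int mult_count; };

enum { DEMO_TASKFILE_MULTI_OUT_PH = 1, DEMO_TASKFILE_MULTI_IN_PH = 2 };

/* Reject multi-sector PIO when multi mode is disabled, for flagged and
 * non-flagged (plain ide_taskfile_ioctl) requests alike. */
static int demo_multi_mode_ok(const struct demo_chk_task *task,
                              const struct demo_chk_drive *drive)
{
        if ((task->data_phase == DEMO_TASKFILE_MULTI_IN_PH ||
             task->data_phase == DEMO_TASKFILE_MULTI_OUT_PH) &&
            drive->mult_count == 0)
                return 0;
        return 1;
}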
Cc: Tejun Heo <htejun@gmail.com>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add IDE_TFLAG_IN_DATA taskfile flag to indicate the need of reading
IDE_DATA_REG in ide_end_drive_cmd().
Set the new flag in ide_taskfile_ioctl() if ->in_flags.b.data is set.
* Add IDE_TFLAG_FLAGGED_SET_IN_FLAGS taskfile flag to indicate the
need of modifying ->in_flags in ide_taskfile_ioctl().
Set the new flag in flagged_taskfile() and move the code modifying
->tf_in_flags to ide_taskfile_ioctl().
While at it remove the bogus comment: ->tf_in_flags (except .b.data)
have no effect on selection of registers to read.
* Remove no longer needed 'tf_in_flags' field from ide_task_t.
As a result we finally have the internals of the HDIO_DRIVE_TASKFILE ioctl
separated from the core IDE code.
There should be no functionality changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add 'data_buf' and 'nsect' variables in ide_taskfile_ioctl()
to cache data buffer pointer and number of sectors to transfer
(this allows us to have only one ide_diag_taskfile() call).
* Add IDE_TFLAG_WRITE taskfile flag and use it to check whether
the REQ_RW request flag should be set.
* Move ->command_type handling from ide_diag_taskfile() to
ide_taskfile_ioctl() and use ->req_cmd instead of ->command_type.
* Add 'nsect' parameter to ide_raw_taskfile().
* Merge ide_diag_taskfile() into ide_raw_taskfile().
* Initialize ->data_phase explicitly in idedisk_prepare_flush(),
ide_start_power_step() and ide_disk_special().
* Remove no longer needed 'command_type' field from ide_task_t.
* Add #ifndef/#endif __KERNEL__ to <linux/hdreg.h> around the IDE_DRIVE_TASK_*
and TASKFILE_* defines that are no longer used by the kernel.
There should be no functionality changes caused by this patch.
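A hedged sketch of the resulting interface (the prototype below is an illustrative approximation, not copied from the tree):

/* Opaque stand-ins for the kernel types. */
struct demo_rt_drive;
struct demo_rt_task;

/*
 * After the merge there is a single entry point: the caller passes the data
 * buffer (NULL for non-data commands) and the number of sectors to transfer,
 * so ide_taskfile_ioctl() can cache 'data_buf' and 'nsect' up front and make
 * exactly one call whether the request reads, writes or moves no data.
 */
int demo_raw_taskfile(struct demo_rt_drive *drive, struct demo_rt_task *task,
                      unsigned char *data_buf, unsigned int nsect);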
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Given that:
* hpt366.c::hpt3xx_intrproc() is the only user of hwif->intrproc
* hpt366.c::hpt3xx_quirkproc() sets drive->quirk_list to 1 for quirky drives
which is a value unique to the hpt366 host driver
we can remove hwif->intrproc and just check for drive->quirk_list == 1
in ide_do_request().
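A hedged fragment of what the replacement test reduces to (simplified type; the surrounding ide_do_request() logic is omitted):

struct demo_qdrive { unsigned char quirk_list; };

/*
 * With hwif->intrproc gone, the special interrupt handling formerly routed
 * through hpt3xx_intrproc() is keyed off the value that only
 * hpt3xx_quirkproc() ever sets.
 */
static int demo_is_hpt_quirky(const struct demo_qdrive *drive)
{
        return drive->quirk_list == 1;
}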
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Add ide_pktcmd_tf_load() helper and convert ATAPI device drivers to use it.
There should be no functionality changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove atapi_ireason_t.
While at it:
* replace 'HWIF(drive)' by 'drive->hwif' (or just 'hwif' where possible)
v2:
* v1 had CD and IO bits reversed in many places.
* Use CD and IO defines from <linux/hdreg.h>.
v3:
* Fix incorrect "(ireason & IO) == test_bit()". (Noticed by Sergei)
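For reference, a hedged sketch of the open-coded tests that replace the atapi_ireason_t bitfields (local copies of the CD/IO bit values; the real defines come from <linux/hdreg.h>):

/* Interrupt-reason bits (local stand-ins for the <linux/hdreg.h> defines). */
#define DEMO_CD 0x01    /* set: the device expects a command packet */
#define DEMO_IO 0x02    /* set: data moves from the device to the host */

/* "Drive is waiting for the packet" test, formerly a bitfield access. */
static int demo_awaiting_packet(unsigned char ireason)
{
        return (ireason & DEMO_CD) && !(ireason & DEMO_IO);
}

/* Direction check for a data phase: non-zero means device -> host. */
static int demo_transfer_is_read(unsigned char ireason)
{
        return (ireason & DEMO_IO) != 0;
}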
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove ata_nsector_t, ata_data_t (unused) and atapi_bcount_t.
While at it:
* replace 'HWIF(drive)' by 'hwif'
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove atapi_feature_t.
While at it:
* replace 'HWIF(drive)' by 'hwif'
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove atapi_error_t.
While at it:
* replace 'HWIF(drive)' by 'drive->hwif'
v2:
* Add {ILI,EOM,LFS}_ERR defines to <linux/hdreg.h>.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove ata_status_t (unused) and atapi_status_t.
While at it:
* replace 'HWIF(drive)' by 'drive->hwif' (or just 'hwif' where possible)
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
special_t is used only internally by the IDE subsystem (it isn't
related to hardware registers and isn't exported to user space).
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Based on the earlier work by Tejun Heo.
All users are gone so we can finally remove it.
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add IDE_TFLAG_OUT_DEVICE taskfile flag to indicate the need of writing
the Device register and handle it in ide_tf_load().
Update ide_tf_load() and {do_rw,flagged}_taskfile() users accordingly.
* Use struct ide_taskfile and ide_tf_load() in execute_drive_cmd().
* Make the debugging code dump all taskfile registers for both
REQ_ATA_TYPE_{CMD,TASK} requests and move it to ide_tf_load()
so it also covers REQ_ATA_TYPE_TASKFILE requests.
There should be no functionality changes caused by this patch
(unless DEBUG is defined).
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove stale ide.h "configuration options":
* INITIAL_MULT_COUNT - always defined to 0
* SUPPORT_SLOW_DATA_PORTS - unused
* OK_TO_RESET_CONTROLLER - always defined to 1
* DISABLE_IRQ_NOSYNC - always defined to 0
Leave SUPPORT_VLB_SYNC (defined to 0 for CRIS and FRV, otherwise to 1)
for now but disallow overriding it by <asm/ide.h>.
There should be no functionality changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Based on the earlier work by Tejun Heo.
* Move setting IDE_TFLAG_LBA48 taskfile flag from do_rw_taskfile()
function to the callers.
* Add IDE_TFLAG_FLAGGED taskfile flag for flagged taskfiles coming
from ide_taskfile_ioctl(). Check it instead of ->tf_out_flags.all.
* Add IDE_TFLAG_OUT_DATA taskfile flag to indicate the need to load
IDE data register in ide_tf_load().
* Add IDE_TFLAG_OUT_* taskfile flags to indicate the need to load
particular IDE taskfile registers in ide_tf_load().
* Update do_rw_taskfile() and ide_tf_load() users to set respective
IDE_TFLAG_OUT_* taskfile flags.
* Add task_dma_ok() helper.
* Use IDE_TFLAG_FLAGGED taskfile flag to select HIHI mask in ide_tf_load().
* Use do_rw_taskfile() in flagged_taskfile().
* Remove no longer needed 'tf_out_flags' field from ide_task_t.
There should be no functionality changes caused by this patch.
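A hedged sketch of the flag-driven loading idea (stand-in types and only two of the OUT_* bits; the real ide_tf_load() covers every taskfile register and the HIHI masking):

#include <stdint.h>

#define DEMO_TFLAG_OUT_NSECT    (1u << 0)
#define DEMO_TFLAG_OUT_LBAL     (1u << 1)

struct demo_tf { uint8_t nsect, lbal; };

/* Stand-in for the port write; a real driver would hit the taskfile registers. */
static void demo_outb(uint8_t val, int reg) { (void)val; (void)reg; }

static void demo_tf_load(const struct demo_tf *tf, unsigned int tf_flags)
{
        /* Touch only the registers the caller requested via IDE_TFLAG_OUT_*. */
        if (tf_flags & DEMO_TFLAG_OUT_NSECT)
                demo_outb(tf->nsect, 2);        /* sector count register */
        if (tf_flags & DEMO_TFLAG_OUT_LBAL)
                demo_outb(tf->lbal, 3);         /* LBA low register */
}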
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add ide_no_data_taskfile() helper and convert ide_raw_taskfile() w/ NO DATA
protocol users to use it instead.
* Set ->data_phase explicitly in ide_no_data_taskfile()
(TASKFILE_NO_DATA is defined as 0x0000).
* Unexport task_no_data_intr().
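A hedged sketch of what the helper amounts to (illustrative names; setting ->data_phase explicitly matters precisely because TASKFILE_NO_DATA happens to be 0):

#include <stddef.h>

#define DEMO_TASKFILE_NO_DATA   0x0000

struct demo_nd_drive;
struct demo_nd_task { int data_phase; };

int demo_raw_taskfile_nd(struct demo_nd_drive *drive, struct demo_nd_task *task,
                         unsigned char *buf, unsigned int nsect);

static int demo_no_data_taskfile(struct demo_nd_drive *drive,
                                 struct demo_nd_task *task)
{
        task->data_phase = DEMO_TASKFILE_NO_DATA;       /* explicit, even though it is 0 */
        return demo_raw_taskfile_nd(drive, task, NULL, 0);
}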
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Based on the earlier work by Tejun Heo.
* Add 'tf_flags' field (for taskfile flags) to ide_task_t.
* Add IDE_TFLAG_LBA48 taskfile flag for LBA48 taskfiles.
* Add IDE_TFLAG_NO_SELECT_MASK taskfile flag for __ide_do_rw_disk()
which doesn't use SELECT_MASK() (looks like a bug but it requires
some more investigation).
* Split off ide_tf_load() helper from do_rw_taskfile().
* Convert __ide_do_rw_disk() to use ide_tf_load().
There should be no functionality changes caused by this patch.
Cc: Tejun Heo <htejun@gmail.com>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Don't set write-only ide_task_t.hobRegister[6] and ide_task_t.hobRegister[7]
in idedisk_set_max_address_ext().
* Add struct ide_taskfile and use it in ide_task_t instead of tfRegister[]
and hobRegister[].
* Remove no longer needed IDE_CONTROL_OFFSET_HOB define.
* Add #ifndef/#endif __KERNEL__ around definitions of {task,hob}_struct_t.
While at it:
* Use ATA_LBA define for LBA bit (0x40) as suggested by Tejun Heo.
v2:
* Add missing newlines. (Noticed by Sergei)
* Use ~ATA_LBA instead of 0xBF. (Noticed by Sergei)
* Use unnamed unions for error/feature and status/command.
(Suggested by Sergei).
There should be no functionality changes caused by this patch.
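A hedged approximation of the register layout this introduces (field names follow the description above; the actual header may differ in detail):

#include <stdint.h>

/* One byte per taskfile register; the registers that are shared between the
 * read and write directions are expressed as unnamed unions, as described. */
struct demo_ide_taskfile {
        uint8_t data;
        union {                 /* error on read, feature on write */
                uint8_t error;
                uint8_t feature;
        };
        uint8_t nsect;
        uint8_t lbal;
        uint8_t lbam;
        uint8_t lbah;
        uint8_t device;         /* bit 0x40 is the ATA_LBA bit */
        union {                 /* status on read, command on write */
                uint8_t status;
                uint8_t command;
        };
};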
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Tejun Heo <htejun@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Remove task_ioreg_t typedef from the kernel code (but leave it
in <linux/hdreg.h> for #ifndef/#endif __KERNEL__ case).
While at it also move sata_ioreg_t typedef under #ifndef/#endif __KERNEL__.
v2:
Remove name of the second parameter from ide_execute_command() declaration.
(Noticed by Sergei).
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Convert cmd64x, hpt366 and pdc202xx_old host drivers to use
pci_resource_start(hwif->pci_dev, 4) instead of hwif->dma_master.
* Remove no longer needed ->dma_master field from ide_hwif_t.
v2:
* Use the more readable 'hwif->dma_base - (hwif->channel * 8)' instead of
pci_resource_start(hwif->pci_dev, 4).
v3:
* Use hwif->extra_base in hpt366/pdc202xx_old + some cosmetic fixups over v2
(suggested by Sergei).
v4:
* Correct offsets in hpt3xxn_set_clock().
v5:
* Use hwif->extra_base in hpt366 for _real_ this time. (Noticed by Sergei)
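The replacement is a simple address computation; as a hedged stand-alone illustration:

/* Each channel's bus-master DMA registers sit 8 bytes apart, so what
 * hwif->dma_master used to hold can be recomputed from the channel's base. */
static unsigned long demo_dma_master(unsigned long dma_base, unsigned int channel)
{
        return dma_base - (channel * 8);        /* channel 0: dma_base, channel 1: dma_base - 8 */
}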
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Right now, the Linux kernel (with scheduler statistics enabled) keeps track
of the maximum time a process spends waiting to be scheduled. While the maximum
is a very useful metric, tracking average and total is equally useful
(at least for latencytop) to figure out the accumulated effect of scheduler
delays. The accumulated effect is important to judge the performance impact
of scheduler tuning/behavior.
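A hedged sketch of the bookkeeping this implies (hypothetical field and function names; the real schedstats code differs in detail):

#include <stdint.h>

/* Per-task wait statistics: the maximum was already tracked, the patch adds
 * an accumulated total and a count so user space can derive the average. */
struct demo_wait_stats {
        uint64_t wait_max;      /* longest single wait, in ns */
        uint64_t wait_sum;      /* total time spent waiting, in ns */
        uint64_t wait_count;    /* number of waits, for computing an average */
};

static void demo_account_wait(struct demo_wait_stats *s, uint64_t delta_ns)
{
        if (delta_ns > s->wait_max)
                s->wait_max = delta_ns;
        s->wait_sum += delta_ns;
        s->wait_count++;
}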
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
LatencyTOP kernel infrastructure; it measures latencies in the
scheduler and tracks them system-wide and per process.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
For some crazy reason (trying to work around a hw problem in i810) I wanted
to use HZ around 4000.
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently all highres=off timers are run from softirq context, but
HRTIMER_CB_IRQSAFE_NO_SOFTIRQ timers expect to run from irq context.
Fix this up by splitting it similarly to the highres=on case.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We need to teach no_hz about the rt throttling because it's tick-driven.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Extend group scheduling to also cover the realtime classes. It uses the time
limiting introduced by the previous patch to allow multiple realtime groups.
The hard time limit is required to keep behaviour deterministic.
The algorithms used make the realtime scheduler O(tg), i.e. scaling linearly
with the number of task groups. This is the worst-case behaviour I can't seem
to get out of; the average case of the algorithms can be improved, but I
focused on correctness and the worst case.
[ akpm@linux-foundation.org: move side-effects out of BUG_ON(). ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Very simple time limit on the realtime scheduling classes.
Allow the rq's realtime class to consume sched_rt_ratio of every
sched_rt_period slice. If the class exceeds this quota the fair class
will preempt the realtime class.
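A hedged sketch of the quota test (simplified names and a percentage ratio for readability; the in-tree accounting uses different units):

#include <stdint.h>

/* Per-runqueue RT accounting for the current sched_rt_period. */
struct demo_rt_rq {
        uint64_t rt_time;       /* ns consumed by the RT class this period */
        int rt_throttled;       /* set once the quota has been used up */
};

/* Returns non-zero once the RT class must yield the CPU to the fair class. */
static int demo_rt_over_quota(struct demo_rt_rq *rt_rq,
                              uint64_t rt_period_ns, unsigned int rt_ratio_pct)
{
        uint64_t quota_ns = rt_period_ns * rt_ratio_pct / 100;

        if (rt_rq->rt_time > quota_ns)
                rt_rq->rt_throttled = 1;
        return rt_rq->rt_throttled;
}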
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Use HR-timers (when available) to deliver an accurate preemption tick.
The regular scheduler tick that runs at 1/HZ can be too coarse when nice
levels are used. The fairness system will still keep the cpu utilisation 'fair'
by then delaying the task that got an excessive amount of CPU time, but tries
to minimize this by delivering preemption points spot-on.
The average frequency of this extra interrupt is sched_latency / nr_latency,
which need not be higher than 1/HZ; it's just that the distribution within the
sched_latency period is important.
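As a worked example of that interval (illustrative numbers, not defaults taken from the tree):

#include <stdio.h>

int main(void)
{
        /* Say sched_latency = 20 ms and nr_latency = 5 runnable tasks. */
        double sched_latency_ms = 20.0, nr_latency = 5.0;
        double period_ms = sched_latency_ms / nr_latency;       /* 4 ms between hrticks */

        printf("average hrtick period: %.1f ms (~%.0f Hz)\n",
               period_ms, 1000.0 / period_ms);
        return 0;
}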
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Why do we even have cond_resched when real preemption
is on? It seems to be a waste of space and time.
Remove cond_resched() with CONFIG_PREEMPT on.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Introduce a new rlimit that allows the user to set a runtime timeout on a
real-time task's slice. Once this limit is exceeded the task will receive
SIGXCPU.
So it measures runtime since the last sleep.
Input and ideas by Thomas Gleixner and Lennart Poettering.
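From user space the limit is set through the usual setrlimit() interface; a minimal usage sketch (assumes the RLIMIT_RTTIME constant is exported by the installed headers, with the limit given in microseconds):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        /* SIGXCPU after 500 ms of RT runtime without a sleep;
         * the 1 s hard limit is the point of no return. */
        struct rlimit rl = { .rlim_cur = 500000, .rlim_max = 1000000 };

        if (setrlimit(RLIMIT_RTTIME, &rl) != 0)
                perror("setrlimit(RLIMIT_RTTIME)");
        return 0;
}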
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Lennart Poettering <mzxreary@0pointer.de>
CC: Michael Kerrisk <mtk.manpages@googlemail.com>
CC: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Move the task_struct members specific to rt scheduling together.
A future optimization could be to put sched_entity and sched_rt_entity
into a union.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch implements a new version of RCU which allows its read-side
critical sections to be preempted. It uses a set of counter pairs
to keep track of the read-side critical sections and flips them
when all tasks exit read-side critical section. The details
of this implementation can be found in this paper -
http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf
and the article-
http://lwn.net/Articles/253651/
This patch was developed as a part of the -rt kernel development and is
meant to provide better latencies when read-side critical sections of
RCU don't disable preemption. As a consequence of keeping track of RCU
readers, the readers have a slight overhead (see the optimizations in the paper).
This implementation co-exists with the "classic" RCU implementations
and can be switched to at compile time.
Also includes RCU tracing summarized in debugfs.
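The read-side API itself does not change with the preemptible implementation; for reference, a kernel-context sketch of a read-side critical section (hypothetical data structure, standard rcu_read_lock()/rcu_dereference() calls):

#include <linux/rcupdate.h>

struct demo_cfg { int value; };
static struct demo_cfg *demo_cfg_ptr;   /* updated elsewhere via rcu_assign_pointer() */

static int demo_read_value(void)
{
        struct demo_cfg *cfg;
        int val = -1;

        rcu_read_lock();        /* with PREEMPT_RCU this section may now be preempted */
        cfg = rcu_dereference(demo_cfg_ptr);
        if (cfg)
                val = cfg->value;
        rcu_read_unlock();

        return val;
}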
[ akpm@linux-foundation.org: build fixes on non-preempt architectures ]
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch re-organizes the RCU code to enable multiple implementations
of RCU. Users of RCU continue to include rcupdate.h and the
RCU interfaces remain the same. This is in preparation for
subsequently merging the preemptible RCU implementation.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch makes RCU use softirq instead of tasklets.
It also adds a memory barrier after raising the softirq
in order to ensure that the CPU sees the most recently updated
value of rcu->cur while processing callbacks.
The discussion of the related theoretical race pointed out
by James Huang can be found here --> http://lkml.org/lkml/2007/11/20/603
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
Reviewed-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Dmitry Adamushko found that the current implementation of the RT
balancing code left out changes to the sched_setscheduler and
rt_mutex_setprio.
This patch addresses this issue by adding methods to the scheduling classes
to handle being switched out of (switched_from) and being switched into
(switched_to) a sched_class. A method for priority changes (prio_changed)
is also added.
This patch also removes some duplicate logic between rt_mutex_setprio and
sched_setscheduler.
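A hedged sketch of the shape of these hooks (simplified signatures; the real sched_class carries many more methods and the argument lists may differ):

/* Opaque stand-ins for the scheduler types. */
struct demo_rq;
struct demo_task_struct;

struct demo_sched_class {
        /* Called when a task leaves, respectively enters, this class ... */
        void (*switched_from)(struct demo_rq *rq, struct demo_task_struct *p,
                              int running);
        void (*switched_to)(struct demo_rq *rq, struct demo_task_struct *p,
                            int running);
        /* ... and when its priority changes while it stays in the class. */
        void (*prio_changed)(struct demo_rq *rq, struct demo_task_struct *p,
                             int oldprio, int running);
};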
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Make the main sched.c code more agnostic to the scheduling classes:
instead of having specific hooks in the schedule code for RT class
balancing, provide pre_schedule, post_schedule and task_wake_up methods.
These methods may be used by any of the classes, but currently only the
sched_rt class implements them.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We add the notion of a root-domain which will be used later to rescope
global variables to per-domain variables. Each exclusive cpuset
essentially defines an island domain by fully partitioning the member cpus
from any other cpuset. However, we currently still maintain some
policy/state as global variables which transcend all cpusets. Consider,
for instance, rt-overload state.
Whenever a new exclusive cpuset is created, we also create a new
root-domain object and move each cpu member to the root-domain's span.
By default the system creates a single root-domain with all cpus as
members (mimicking the global state we have today).
We add some plumbing for storing class specific data in our root-domain.
Whenever a RQ is switching root-domains (because of repartitioning) we
give each sched_class the opportunity to remove any state from its old
domain and add state to the new one. This logic doesn't have any clients
yet but it will later in the series.
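A hedged sketch of what a root-domain object holds (illustrative fields and a simplified mask type; the real structure differs in detail):

#include <stdint.h>

typedef uint64_t demo_cpumask_t;        /* stand-in for cpumask_t, one bit per CPU */

/* One root-domain per exclusive-cpuset partition of the machine. */
struct demo_root_domain {
        int refcount;                   /* runqueues currently attached to this domain */
        demo_cpumask_t span;            /* all CPUs that belong to the partition */
        demo_cpumask_t online;          /* the subset of span that is online */
        demo_cpumask_t rto_mask;        /* class-specific state: RT-overloaded CPUs */
};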
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Christoph Lameter <clameter@sgi.com>
CC: Paul Jackson <pj@sgi.com>
CC: Simon Derr <simon.derr@bull.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The current wake-up code path tries to determine if it can optimize the
wake-up to "this_cpu" by computing load calculations. The problem is that
these calculations are only relevant to SCHED_OTHER tasks where load is king.
For RT tasks, priority is king. So the load calculation is completely wasted
bandwidth.
Therefore, we create a new sched_class interface to help with
pre-wakeup routing decisions and move the load calculation into
the CFS class.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Some RT tasks (particularly kthreads) are bound to one specific CPU.
It is fairly common for two or more bound tasks to get queued up at the
same time. Consider, for instance, softirq_timer and softirq_sched. A
timer goes off in an ISR which schedules softirq_thread to run at RT50.
Then the timer handler determines that it's time to smp-rebalance the
system so it schedules softirq_sched to run. So we are in a situation
where we have two RT50 tasks queued, and the system will go into
rt-overload condition to request other CPUs for help.
This causes two problems in the current code:
1) If a high-priority bound task and a low-priority unbound task queue
up behind the running task, we will fail to ever relocate the unbound
task because we terminate the search on the first unmovable task.
2) We spend precious futile cycles in the fast-path trying to pull
overloaded tasks over. It is therefore optimal to strive to avoid the
overhead altogether if we can cheaply detect the condition before
overload even occurs.
This patch tries to achieve this optimization by utilizing the Hamming
weight of the task->cpus_allowed mask. A weight of 1 indicates that
the task cannot be migrated. We will then utilize this information to
skip non-migratable tasks and to eliminate unnecessary rebalance attempts.
We introduce a per-rq variable to count the number of migratable tasks
that are currently running. We only go into overload if we have more
than one rt task, AND at least one of them is migratable.
In addition, we introduce a per-task variable to cache the cpus_allowed
weight, since the Hamming calculation is probably relatively expensive.
We only update the cached value when the mask is updated which should be
relatively infrequent, especially compared to scheduling frequency
in the fast path.
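A hedged sketch of the caching idea (simplified mask type; in the tree this hooks into set_cpus_allowed() and the RT runqueue accounting):

#include <stdint.h>

typedef uint64_t demo_cpumask;  /* stand-in for cpumask_t, one bit per CPU */

struct demo_rt_task {
        demo_cpumask cpus_allowed;
        unsigned int nr_cpus_allowed;   /* cached Hamming weight of cpus_allowed */
};

/* Recompute the cached weight only when the affinity mask actually changes. */
static void demo_set_cpus_allowed(struct demo_rt_task *p, demo_cpumask new_mask)
{
        p->cpus_allowed = new_mask;
        p->nr_cpus_allowed = (unsigned int)__builtin_popcountll(new_mask);
        /* A weight of 1 means the task is pinned and can never be migrated. */
}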
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
this patch extends the soft-lockup detector to automatically
detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
printed the following way:
------------------>
INFO: task prctl:3042 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
prctl D fd5e3793 0 3042 2997
f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
Call Trace:
[<c04883a5>] schedule_timeout+0x6d/0x8b
[<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
[<c0133a76>] msleep+0x10/0x16
[<c0138974>] sys_prctl+0x30/0x1e2
[<c0104c52>] sysenter_past_esp+0x5f/0xa5
=======================
2 locks held by prctl/3042:
#0: (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
#1: (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
<------------------
the current default timeout is 120 seconds. Such messages are printed
up to 10 times per bootup. If the system has crashed already then the
messages are not printed.
if lockdep is enabled then all held locks are printed as well.
this feature is a natural extension to the softlockup-detector (kernel
locked up without scheduling) and to the NMI watchdog (kernel locked up
with IRQs disabled).
[ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
[ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>