Use the newly introduced ccw request infrastructure to implement
the pgid-related operations: sense pgid, set pgid and disband pg.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use the newly introduced ccw request infrastructure to implement
the sense id operation.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reduce code duplication by introducing a central infrastructure to
perform an internal I/O operation on a CCW device.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove the call to BUG() for situations which are unexpected
but do not cause actual problems.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Device recognition needs to be started with the ccw device lock
held to prevent race conditions between I/O starting and interrupt
reception.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Handle verification errors consistently through the existing
callback ccw_device_done to reduce cleanup code duplication.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove the return code from ccw_device_recognition and handle
recognition errors through the existing callback
ccw_device_recog_done to reduce cleanup code duplication.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Print a warning message in case a ccw device enters the boxed or
not-operational state during online/offline processing.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce a central mechanism for performing delayed ccw device work
to ensure that different types of work do not overwrite each other.
Prioritization ensures that the most important work is always
performed while less important tasks are either obsoleted or repeated
later.
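For illustration, a rough sketch of such a prioritized todo mechanism
(the enum values, fields and workqueue below approximate the idea and
are not taken verbatim from the patch):

	/* Higher values mean more important work. */
	enum cdev_todo {
		CDEV_TODO_NOTHING = 0,
		CDEV_TODO_REBIND,
		CDEV_TODO_REGISTER,
		CDEV_TODO_UNREG,
	};

	/* Schedule delayed work; less important pending work is replaced,
	 * more important pending work is left untouched. */
	static void ccw_device_sched_todo(struct ccw_device *cdev,
					  enum cdev_todo todo)
	{
		if (cdev->private->todo >= todo)
			return;
		cdev->private->todo = todo;
		queue_work(slow_path_wq, &cdev->private->todo_work);
	}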
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Ensure that current and future users of sch->work do not overwrite
each other by introducing a single mechanism for delayed subchannel
work.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Change the initiative to update subchannel-ccw device associations
to the subchannel: when there is an indication that the internal
association no longer reflects the current hardware state, mark
each affected subchannel as requiring attention. Once processing
reaches a subchannel, determine the correct association for that
subchannel at that time and perform the necessary device_move
operations.
This change fixes problems with the previous approach which would
leave devices in an inconsistent state when a new hardware change
occurred while a device_move was already scheduled.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
sch_create_and_recog_new_device() associates a parent subchannel
with its ccw device child even though this is already done by
the subsequently called io_subchannel_recog(); drop the redundant
assignment. Also make sure io_subchannel_recog() sets the
association under the lock.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
io_subchannel_probe() frees memory for sch->private which is later
freed again when io_subchannel_remove() is called. Fix this problem
by removing the cleanup in io_subchannel_probe().
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When the kernel is IPLed without the CLEAR option and switches
to 64-bit, the high-order half of the registers might contain
random values. This can cause addressing exceptions, and the
kernel then enters an interrupt loop.
Initialize the high-order half of the general purpose registers
with zeros after switching to 64-bit mode.
Cc: <stable@kernel.org>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
At least insn.c and inat.c are needed for kprobes for now, so
compile them only if KPROBES is enabled.
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
LKML-Reference: <878wdg8icq.fsf@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Uses of strcat are almost always signs that someone is too lazy
to think about the code a bit more carefully. One always has to
know about the lengths of the strings involved to avoid buffer
overflows.
In this case, the object code size is reduced by 38 bytes for me.
The code should also be faster, especially if flags is non-NULL.
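For illustration, the general pattern of the change (a sketch, not the
actual hunk): track the length once instead of letting strcat() rescan
the string on every append.

	#include <stdio.h>

	static void build_name(char *buf, size_t size,
			       const char *name, const char *flags)
	{
		size_t len;

		/* snprintf() reports the length written, so the flags can
		 * be appended without rescanning the buffer, and the copies
		 * stay bounded by the buffer size. */
		len = snprintf(buf, size, "%s", name);
		if (flags && len < size)
			snprintf(buf + len, size - len, ":%s", flags);
	}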
Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Cc: a.p.zijlstra@chello.nl
Cc: fweisbec@gmail.com
Cc: jaswinderrajput@gmail.com
Cc: paulus@samba.org
LKML-Reference: <200912061825.nB6IPUa1023306@hs20-bc2-1.build.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The 'scripting unsupported' message should only be displayed
when the -s or -g options are used, and not unconditionally as
the current code does.
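A minimal sketch of the intended behaviour (the variable names are
assumptions for illustration, not the exact perf code):

	/* Only complain about missing scripting support if the user
	 * actually asked for it via -s or -g. */
	if (script_name || generate_script_lang)
		fprintf(stderr, "scripting unsupported\n");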
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: rostedt@goodmis.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1260163919-6679-3-git-send-email-tzanussi@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Whatever the nature of a breakpoint (cpu-bound or task-bound), we
always perform the following constraint checks before allocating it a
slot:
- Check the number of pinned breakpoints bound to the concerned cpus
- Check the maximum number of task-bound breakpoints that belong
  to a single task.
- Add both and see if we have a remaining slot for the new breakpoint
This is the right thing to do when we are about to register a cpu-only
bound breakpoint. But not if we are dealing with a task-bound
breakpoint. What we want in this case is:
- Check the number of pinned breakpoints bound to the concerned cpus
- Check the number of breakpoints that already belong to the task
  to which the breakpoint being registered is bound.
- Add both
This fixes a regression that makes the "firefox -g" command fail to
register breakpoints as soon as a secondary thread is involved.
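As a rough sketch of the intended check (the helpers below are
hypothetical and only stand in for the real per-cpu bookkeeping):

	static int slots_wanted(int cpu, struct task_struct *tsk)
	{
		/* pinned_cpu_bp_count(), task_bp_count() and
		 * max_task_bp_count() are hypothetical helpers. */
		int pinned = pinned_cpu_bp_count(cpu);

		if (tsk)	/* task-bound: count this task's breakpoints only */
			return pinned + task_bp_count(tsk);

		/* cpu-bound: worst case over all tasks on this cpu */
		return pinned + max_task_bp_count(cpu);
	}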
Reported-by: Walt <w41ter@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Prasad <prasad@linux.vnet.ibm.com>
The perf attrs used to set up breakpoint parameters are often allocated
on the stack and not zeroed out before calling hw_breakpoint_init().
Handle the zeroing in this helper to avoid attributes picking up random
values from the stack.
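A minimal sketch of the helper with the zeroing added (close to, but
not necessarily identical to, the actual code):

	static inline void hw_breakpoint_init(struct perf_event_attr *attr)
	{
		/* Clear everything first so fields the caller never sets
		 * do not inherit random stack contents. */
		memset(attr, 0, sizeof(*attr));
		attr->type = PERF_TYPE_BREAKPOINT;
		attr->size = sizeof(*attr);
		attr->pinned = 1;
	}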
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Prasad <prasad@linux.vnet.ibm.com>
When I added the xs callbacks into perf, I forgot to re-check
the no-libperl case. This patch fixes the undefined reference
error for that.
Reported-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260153712.6564.4.camel@tropicana>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
raw->size is not used; this patch just cleans it up.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4B1C8CC4.4050007@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We use the 'data.raw_data' parameter to call process_raw_event(),
but the data.raw_data buffer does not include the data size, which
can make the perf tool crash.
This bug was introduced by commit 180f95e29a ("perf: Make common
SAMPLE_EVENT parser").
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <4B1C7F45.5080105@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Utilities to reserve, unreserve and fence a list of TTM
buffer objects in a deadlock-safe manner.
Used by the vmwgfx driver.
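Conceptually, the deadlock-safe reservation backs off completely
whenever one buffer in the list is contended. A simplified sketch
(the struct fields and the unreserve helper are simplified, not the
exact ttm_execbuf_util code):

	struct ttm_validate_buffer {	/* simplified */
		struct list_head head;
		struct ttm_buffer_object *bo;
	};

	static int reserve_all(struct list_head *list)
	{
		struct ttm_validate_buffer *entry;
		int ret;
	retry:
		list_for_each_entry(entry, list, head) {
			ret = ttm_bo_reserve(entry->bo, true, true, false, 0);
			if (ret == -EBUSY) {
				/* Drop everything reserved so far, wait for
				 * the contended buffer, then start over, so
				 * two lists can never block each other. */
				unreserve_up_to(list, entry);	/* simplified helper */
				ret = ttm_bo_wait_unreserved(entry->bo, true);
				if (ret)
					return ret;
				goto retry;
			} else if (ret)
				return ret;
		}
		return 0;
	}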
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is intended to be used by ttm-aware drivers to
1) Block clients of inactive masters when
they try to validate buffers for GPU use.
2) Optionally block clients of the current master when
there is thrashing due to GPU memory shortage.
Used by the vmwgfx driver.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add objects needed for user-space to maintain reference counts on ttm objects.
This is used by the vmwgfx driver, which allows user-space to maintain
map-counts on dma buffers, lock-counts on the ttm lock and ref-counts on
gpu surfaces, gpu contexts and dma buffers.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This patch fixes three problems in the handling of the
EXT4_IOC_MOVE_EXT ioctl:
1. The current EXT4_IOC_MOVE_EXT checks only for read access on the
original and donor files, which allows illegal writes to the donor
file, since the donor file is overwritten with the original file's
data. To fix this problem, change the required access modes of the
original (r -> r/w) and donor (r -> w) files.
2. Disallow the use of donor files that have the setuid or setgid bits set.
3. Call mnt_want_write() and mnt_drop_write() before and after
calling ext4_move_extents() to get write access to the mount.
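For illustration, the rough shape of checks 1 and 3 at the ioctl level
(a sketch assuming the usual move_extent ioctl arguments, not the
literal patch):

	/* 1: the donor fd must have been opened for writing */
	if (!(donor_filp->f_mode & FMODE_WRITE))
		return -EBADF;

	/* 3: take write access to the mount around the extent move */
	err = mnt_want_write(filp->f_path.mnt);
	if (err)
		return err;
	err = ext4_move_extents(filp, donor_filp, me.orig_start,
				me.donor_start, me.len, &me.moved_len);
	mnt_drop_write(filp->f_path.mnt);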
Signed-off-by: Akira Fujita <a-fujita@rs.jp.nec.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
We cannot rely on buffer dirty bits during fsync because pdflush can come
in before fsync is called and clear the dirty bits without forcing a
transaction commit. Instead, track which transaction last changed the
inode and which transaction last changed the allocation, and force that
transaction to disk on fsync.
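A condensed sketch of the resulting fsync path (the field and variable
names follow the idea, not necessarily the exact patch):

	/* The inode remembers the tid of the transaction that last changed
	 * it (and, separately, the one that last changed its allocation). */
	tid_t commit_tid = datasync ? ei->i_datasync_tid : ei->i_sync_tid;

	/* Force that transaction to disk and wait for it, instead of
	 * relying on buffer dirty bits that pdflush may have cleared. */
	jbd2_log_start_commit(journal, commit_tid);
	jbd2_log_wait_commit(journal, commit_tid);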
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Inside a ->setattr() call, both ATTR_UID and ATTR_GID may be valid.
This means that we may end up transferring all quotas, so we have
to reserve QUOTA_DEL_BLOCKS for all quotas, as we already do in the
case of QUOTA_INIT_BLOCKS.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Reviewed-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Currently all quota block reservation macros contain a hard-coded "2",
aka the MAXQUOTAS value. This is no good because in some places it is
not obvious what this digit represents. Introduce new macros with
self-descriptive names.
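The new macros could look roughly like this (a sketch; the exact names
in the header may differ):

	/* Worst case: an operation may touch both user and group quota. */
	#define EXT4_MAXQUOTAS_TRANS_BLOCKS(sb)	(MAXQUOTAS*EXT4_QUOTA_TRANS_BLOCKS(sb))
	#define EXT4_MAXQUOTAS_INIT_BLOCKS(sb)	(MAXQUOTAS*EXT4_QUOTA_INIT_BLOCKS(sb))
	#define EXT4_MAXQUOTAS_DEL_BLOCKS(sb)	(MAXQUOTAS*EXT4_QUOTA_DEL_BLOCKS(sb))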
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Acked-by: Mingming Cao <cmm@us.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
This fixes a leak of blocks in an inode prealloc list if device failures
cause ext4_mb_mark_diskspace_used() to fail.
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Acked-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
There is a potential race when a transaction is committing right when
the file system is being unmounted. This could result in a race
because EXT4_SB(sb)->s_group_info could be freed in ext4_put_super
before the commit code calls the callback that lets the mballoc code
release freed blocks in the transaction, resulting in a panic trying
to access the freed s_group_info.
The fix is to wait for the transaction to finish committing before we
shut down the multiblock allocator.
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
When ext4_write_begin fails after allocating some blocks or
generic_perform_write fails to copy data to write, we truncate blocks
already instantiated beyond i_size. Although these blocks were never
inside i_size, we have to truncate the pagecache of these blocks so
that corresponding buffers get unmapped. Otherwise subsequent
__block_prepare_write (called because we are retrying the write) will
find the buffers mapped, not call ->get_block, and thus the page will
be backed by already freed blocks leading to filesystem and data
corruption.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Add a new config option, CONFIG_EXT4_USE_FOR_EXT23, which, if enabled,
will cause ext4 to be used for either ext2 or ext3 file system mounts
when ext2 or ext3 is not enabled in the configuration.
This allows minimalist kernel fanatics to drop two file system drivers
from their compiled kernel without losing functionality.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Again we tried to put VRAM at 0, and it didn't work on this chipset;
reports of corrupted RAM appeared on IRC and bugzilla.
Fix the VRAM location according to what the BIOS set up. I'm not 100%
sure we don't need the same thing on rs690/rs780/rs880; we probably
should do it there too just in case, as that is what the DDX does.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Set up rs600 gart like r600:
- set gart system aperture to vram
- inside gart system aperture is unmapped*
- outside gart system aperture is mapped*
*mapped refers to memory handled by page tables
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Since (e761b77: cpu hotplug, sched: Introduce cpu_active_map and redo
sched domain managment) we have cpu_active_mask which is supposed to rule
scheduler migration and load-balancing, except it never (fully) did.
The particular problem being solved here is a crash in try_to_wake_up()
where select_task_rq() ends up selecting an offline cpu because
select_task_rq_fair() trusts the sched_domain tree to reflect the
current state of affairs, similarly select_task_rq_rt() trusts the
root_domain.
However, the sched_domains are updated from CPU_DEAD, which is after the
cpu is taken offline and after stop_machine is done. Therefore it can
race perfectly well with code assuming the domains are right.
Cure this by building the domains from cpu_active_mask on
CPU_DOWN_PREPARE.
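Conceptually, the hotplug handling ends up doing something like this
(a simplified fragment, not the literal patch):

	switch (action) {
	case CPU_DOWN_PREPARE:
	case CPU_DOWN_PREPARE_FROZEN:
		/* The cpu has already been cleared from cpu_active_mask;
		 * rebuild the domains now, before stop_machine runs and
		 * before anything can migrate a task onto the dying cpu. */
		partition_sched_domains(1, NULL, NULL);
		break;
	default:
		break;
	}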
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Commit acc3f5d7ca ("cpumask:
Partition_sched_domains takes array of cpumask_var_t") changed
the function signature of generate_sched_domains() for the
CONFIG_SMP=y case, but forgot to update the corresponding
function for the CONFIG_SMP=n case, causing:
kernel/cpuset.c:2073: warning: passing argument 1 of 'generate_sched_domains' from incompatible pointer type
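The fix is simply to make the UP stub's prototype match the SMP one;
roughly (the stub body is shown for illustration only):

	static int generate_sched_domains(cpumask_var_t **domains,
			struct sched_domain_attr **attributes)
	{
		*domains = NULL;
		return 1;
	}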
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <alpine.DEB.2.00.0912062038070.5693@ayla.of.borg>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- util/header.c
  "len" is aligned to 64, so the write would go past the end of the
  long_name buffer. Use "zero_buf" to write the padding of the aligned
  area instead.
- util/trace-event-read.c
  "size" does not include the nul byte, so allocate one extra byte and
  set '\0' (see the sketch below).
- util/trace-event-parse.c
  Parentheses are needed to calculate the correct size.
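For the trace-event-read.c part, the fix amounts to something like
this (illustrative, not the exact hunk):

	/* "size" is the string length without the terminating nul,
	 * so allocate one extra byte and terminate explicitly. */
	buf = malloc(size + 1);
	if (!buf)
		die("malloc");
	read_or_die(buf, size);
	buf[size] = '\0';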
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <87d42s8iiu.fsf_-_@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently, sample event data is parsed by each command separately, and
each assumes that the data does not include anything else (e.g.
timechart, trace, etc. can't parse the event if it has
PERF_SAMPLE_CALLCHAIN).
So, even if we record a superset of the data for multiple commands at
a time, the commands can't parse it.
To fix this, introduce a common sample event parser and use it to
parse sample events correctly. (PERF_SAMPLE_READ is unsupported for
now, though, since it does not seem to be used.)
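The core of such a parser walks the record in the order given by the
event's sample_type bits; a simplified sketch (struct and function
names abbreviated, not the full helper added here):

	static void parse_sample(const u64 *array, u64 type,
				 struct sample_data *data)
	{
		if (type & PERF_SAMPLE_IP)
			data->ip = *array++;

		if (type & PERF_SAMPLE_TID) {
			const u32 *p = (const u32 *)array;
			data->pid = p[0];
			data->tid = p[1];
			array++;
		}

		if (type & PERF_SAMPLE_TIME)
			data->time = *array++;

		/* ... the remaining PERF_SAMPLE_* bits are consumed in the
		 * same fixed order, so every command sees the same layout. */
	}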
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <87hbs48imv.fsf@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Update "struct trace_entry" to match with current one. And
remove "size" field from it.
If it has "size", it become cause of alignment mismatch of
structure with kernel.
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Acked-by: Arjan van de Ven <arjan@infradead.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <87ljhg8ioe.fsf@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>