The same function is used by idmap, gss and blocklayout code. Make it
generic.
Signed-off-by: Peng Tao <peng_tao@emc.com>
Signed-off-by: Jim Rees <rees@umich.edu>
Cc: stable@kernel.org [3.0]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Make the status field explicitly 32 bits. "...it's unlikely that the kernel
and userspace would differ on the size of an int here, but it might be a
good idea to go ahead and make that explicitly 32 bits in case we end up
dealing with more exotic arches at some point in the future."
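As a hedged illustration of the point being quoted (the struct and field names below are placeholders, since the actual structure is not shown in this log):

#include <linux/types.h>

/* Placeholder upcall reply shared across the kernel/userspace boundary;
 * a fixed-width __u32 keeps the layout identical regardless of how the
 * architecture defines a plain int. */
struct sketch_dev_reply {
	__u32 status;	/* explicitly 32 bits, not "int" */
};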
Suggested-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Jim Rees <rees@umich.edu>
Signed-off-by: Benny Halevy <bhalevy@tonian.com>
Cc: stable@kernel.org [3.0]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Always return an ERR_PTR-encoded error, not NULL, from nfs4_blk_get_deviceinfo
and nfs4_blk_decode_device.
Check for IS_ERR, not NULL, in bl_set_layoutdriver when calling
nfs4_blk_get_deviceinfo.
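A minimal sketch of the convention being switched to, with placeholder names rather than the real blocklayout routines:

#include <linux/err.h>
#include <linux/fs.h>

/* Producer: encode failures as ERR_PTR(-errno) instead of returning NULL. */
static struct block_device *sketch_decode_device(struct block_device *bdev, int err)
{
	if (err)
		return ERR_PTR(err);
	return bdev;
}

/* Consumer: test with IS_ERR(), not a NULL check, and propagate PTR_ERR(). */
static int sketch_set_layoutdriver(struct block_device *bdev, int err)
{
	struct block_device *dev = sketch_decode_device(bdev, err);

	if (IS_ERR(dev))
		return PTR_ERR(dev);
	return 0;
}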
Signed-off-by: Jim Rees <rees@umich.edu>
Signed-off-by: Benny Halevy <bhalevy@tonian.com>
Cc: stable@kernel.org [3.0]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
nfs_find_and_lock_request will take a reference to the nfs_page and
will then put it if the req is already locked. It's possible though
that the reference will be the last one. That put then can kick off
a whole series of reference puts:
nfs_page
  nfs_open_context
    dentry
      inode
If the inode ends up being deleted, then the VFS will call
truncate_inode_pages. That function will try to take the page lock, but
it was already locked when migrate_page was called. The code
deadlocks.
Fix this by simply refusing the migration request if PagePrivate is
already set, indicating that the page is already associated with an
active read or write request.
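A hedged sketch of the guard being described; the handler name and the three-argument migrate_page() tail follow kernels of this era, and this is not the literal diff:

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/migrate.h>
#include <linux/page-flags.h>

static int sketch_nfs_migrate_page(struct address_space *mapping,
				   struct page *newpage, struct page *page)
{
	/* An attached nfs_page means an active read or write request is
	 * still associated with this page; refuse the migration rather
	 * than risk the reference-put cascade deadlocking on the page
	 * lock we already hold. */
	if (PagePrivate(page))
		return -EBUSY;

	return migrate_page(mapping, newpage, page);
}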
We've had a customer test a backported version of this patch and
the preliminary results seem good.
Cc: stable@kernel.org
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Harshula Jayasuriya <harshula@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The result of ipv6_addr_scope() is not always a single scope value, so we
can't compare it for equality with IPV6_ADDR_SCOPE_LINKLOCAL in
nfs_sockaddr_match_ipaddr6.
This patch fixes the problem and checks the address before the scope_id.
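A sketch of the direction described above (compare the addresses first, and only consult scope_id for link-local addresses by testing the link-local bit rather than comparing the scope value for equality); this is illustrative, not the literal patch:

#include <linux/in6.h>
#include <net/ipv6.h>

static int sketch_sockaddr_match_ipaddr6(const struct sockaddr_in6 *sin1,
					 const struct sockaddr_in6 *sin2)
{
	/* Check the address itself first. */
	if (!ipv6_addr_equal(&sin1->sin6_addr, &sin2->sin6_addr))
		return 0;
	/* Only link-local addresses need a matching scope_id; test the
	 * link-local bit instead of an equality compare on the scope. */
	if ((ipv6_addr_type(&sin1->sin6_addr) & IPV6_ADDR_LINKLOCAL) &&
	    sin1->sin6_scope_id != sin2->sin6_scope_id)
		return 0;
	return 1;
}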
Signed-off-by: Mi Jinlong <mijinlong@cn.fujitsu.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Commit 420e3646 allowed the kernel to reduce the number of unnecessary
commit calls by skipping the commit when there are a large number of
outstanding pages.
However, the current test in nfs_commit_unstable_pages does not handle
the edge condition properly. When ncommit == 0, the kernel doesn't need
to do anything more for the inode. The current test in the WB_SYNC_NONE
case will nevertheless return true, and the inode will end up being
marked dirty. Once that happens the inode will never be clean until
there's a WB_SYNC_ALL flush.
Fix this by immediately returning from nfs_commit_unstable_pages when
ncommit == 0.
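A hedged sketch of that early return, with the rest of the function elided and the nfsi->ncommit field as it existed in kernels of this era:

#include <linux/nfs_fs.h>
#include <linux/writeback.h>

static int sketch_commit_unstable_pages(struct inode *inode,
					struct writeback_control *wbc)
{
	struct nfs_inode *nfsi = NFS_I(inode);

	/* Nothing to commit means nothing more to do for this inode;
	 * return without falling through to the code that re-marks it
	 * dirty in the WB_SYNC_NONE case. */
	if (nfsi->ncommit == 0)
		return 0;

	/* ... existing heuristics and nfs_commit_inode() call ... */
	return 0;
}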
Mike noticed this problem initially in RHEL5 (2.6.18-based kernel) which
has a backported version of 420e3646. The inode cache there was growing
very large. The inode cache was unable to be shrunk since the inodes
were all marked dirty. Calling sync() would essentially "fix" the
problem -- the WB_SYNC_ALL flush would result in the inodes all being
marked clean.
What I'm not clear on is how big a problem this is in mainline kernels
as the writeback code there is very different. Either way, it seems
incorrect to re-mark the inode dirty in this case.
Reported-by: Mike McLean <mikem@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Cc: stable@kernel.org [2.6.34+]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
This reverts commit b80c3cb628.
The reverted commit was rendered obsolete by a VFS fix: commit
5547e8aac6 (writeback: Update dirty flags in
two steps). We no longer need to worry about writeback_single_inode()
missing the fact that we marked the inode for COMMIT in the
do_writepages() call.
Reverting this patch fixes a performance regression in which the inode
would continuously get queued to the dirty list, causing the writeback
code to unnecessarily try to send a COMMIT.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Tested-by: Simon Kirby <sim@hostway.ca>
Cc: stable@kernel.org [2.6.35+]
Forgot to update simple_transaction_set() to take the terminator
character into account.
Signed-off-by: Jarkko Sakkinen <jarkko.j.sakkinen@gmail.com>
Signed-off-by: Casey Schaufler <cschaufler@cschaufler-intel.(none)>
There's a lock inversion between the cputimer->lock and rq->lock;
notably the two callchains involved are:
update_rlimit_cpu()
  sighand->siglock
  set_process_cpu_timer()
    cpu_timer_sample_group()
      thread_group_cputimer()
        cputimer->lock
        thread_group_cputime()
          task_sched_runtime()
            ->pi_lock
            rq->lock

scheduler_tick()
  rq->lock
  task_tick_fair()
    update_curr()
      account_group_exec()
        cputimer->lock
Where the first one is enabling a CLOCK_PROCESS_CPUTIME_ID timer, and
the second one is keeping it up to date.
This problem was introduced by e8abccb719 ("posix-cpu-timers: Cure
SMP accounting oddities").
Cure the problem by removing the cputimer->lock and rq->lock nesting.
This leaves concurrent enablers doing duplicate work, but the time
wasted should be of the same order as the time otherwise wasted spinning
on the lock, and the greater-than assignment filter should ensure we
preserve monotonicity.
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: Simon Kirby <sim@hostway.ca>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: stable@kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/1318928713.21167.4.camel@twins
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Settings in this table reflect the physical panel/connector rather
than the internal dig encoding.
v2: fix typo for DRM_MODE_CONNECTOR_VGA case.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
It's handled via external clock. It should already be protected
by the external ss flag, but add an explicit check just in case.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Llano has fully routable dig encoders similar to DCE3.2, while
Ontario has a hardcoded mapping similar to DCE4.0.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This is a patch for the Conexant codec of the Intel HDA driver, adding a new
quirk for the Lenovo Thinkpad T520 and W520. Conexant autodetection works
fine for the T520 (a similar subsystem ID is also used in the W520 model)
and detects more mixer features compared to the generic (fallback) Lenovo
quirk with hardcoded options in the Conexant codec.
The patch was actively tested with Linux 3.0.4, 3.0.6 and 3.0.7 without any
problems.
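For illustration, this is the shape of entry such a quirk adds to the Conexant codec's PCI-subsystem quirk table; the subsystem device ID and board-config symbol below are placeholders, not values taken from the patch:

#include <sound/core.h>	/* struct snd_pci_quirk, SND_PCI_QUIRK() */

enum { CXT5066_THINKPAD_SKETCH = 1 };	/* stand-in for the codec's board enum */

static const struct snd_pci_quirk cxt5066_cfg_tbl_sketch[] = {
	/* 0x17aa is Lenovo's PCI vendor ID; the device ID is illustrative. */
	SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520 & W520", CXT5066_THINKPAD_SKETCH),
	{ } /* terminator */
};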
Signed-off-by: Daniel Suchy <danny@danysek.cz>
Cc: <stable@kernel.org> [3.0+]
Signed-off-by: Takashi Iwai <tiwai@suse.de>
The previous fix for the position-buffer check introduces yet another
regression on a Dell laptop. The safest fix right now is to add a
static quirk for this device (and it is better to apply it to stable
kernels too).
Reported-by: Éric Piel <Eric.Piel@tremplin-utc.net>
Cc: <stable@kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
When compiling an i386_defconfig kernel with
gcc-4.6.1-9.fc15.i686, I noticed a warning about the asm operand
for test_bit in kprobes' can_boost. I discovered that this
caused only the first long of twobyte_is_boostable[] to be
output.
Jakub filed and fixed gcc PR50571 to correct the warning and
this output issue. But to solve it for older gcc versions, we can
make kprobes' twobyte_is_boostable[] volatile so that it won't be
optimized out.
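A hedged sketch of that workaround; the opcode bitmap contents are elided since only the storage-class change matters:

#include <linux/types.h>

/* Marking the table volatile keeps gcc from folding it into a single
 * immediate when it feeds the test_bit() asm memory operand, so the
 * whole array survives into .data. */
static volatile u32 twobyte_is_boostable[256 / 32] = {
	/* ... bitmap of boostable two-byte opcodes, as in kprobes.c ... */
};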
Before:
CC arch/x86/kernel/kprobes.o
In file included from include/linux/bitops.h:22:0,
from include/linux/kernel.h:17,
from [...]/arch/x86/include/asm/percpu.h:44,
from [...]/arch/x86/include/asm/current.h:5,
from [...]/arch/x86/include/asm/processor.h:15,
from [...]/arch/x86/include/asm/atomic.h:6,
from include/linux/atomic.h:4,
from include/linux/mutex.h:18,
from include/linux/notifier.h:13,
from include/linux/kprobes.h:34,
from arch/x86/kernel/kprobes.c:43:
[...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
[...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input without lvalue in asm operand 1 is deprecated [enabled by default]
$ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
551: 0f a3 05 00 00 00 00 bt %eax,0x0
554: R_386_32 .rodata.cst4
$ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
arch/x86/kernel/kprobes.o: file format elf32-i386
Contents of section .data:
0000 48000000 00000000 00000000 00000000 H...............
Contents of section .rodata.cst4:
0000 4c030000 L...
Only a single long of twobyte_is_boostable[] is in the object
file.
After, with volatile:
$ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
551: 0f a3 05 20 00 00 00 bt %eax,0x20
554: R_386_32 .data
$ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
arch/x86/kernel/kprobes.o: file format elf32-i386
Contents of section .data:
0000 48000000 00000000 00000000 00000000 H...............
0010 00000000 00000000 00000000 00000000 ................
0020 4c030000 0f000200 ffff0000 ffcff0c0 L...............
0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77 ....;.......&..w
Now all 32 bytes are output into .data instead.
Signed-off-by: Josh Stone <jistone@redhat.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Jakub Jelinek <jakub@redhat.com>
Link: http://lkml.kernel.org/r/1318899645-4068-1-git-send-email-jistone@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Create common extern definitions of _rambase, _ramstart and _ramend
instead of declaring them extern at each point of use in the code.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
We do not need to have local extern declarations of memory_start and
memory_end in mm/init_no.c. There are declarations already in asm/page_no.h.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
We should be including and using sections.h to get at the extern
definitions of the linker sections in the m68knommu mm init code,
rather than defining them locally.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
We should be including and using sections.h to get at the extern
definitions of the linker sections in the m68knommu startup code,
rather than defining them locally.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The code for handling traps in the non-mmu case is a subset of the mmu
enabled case. Merge the non-mmu traps_no.c code back to a single traps.c.
There is actually no mmu-specific code here at all, and the processor
specific code (for the more complex 68020/68030/68040/68060) is already
properly conditionally used.
The format of the console exception dump is a little different, but I don't
think it will cause anyone problems; it is purely for debug purposes.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Most of the trap.c code is general to all m68k arch members. But the code
it currently contains to set the hardware vector table is quite specific to
the 680x0 family. They can have the vector table at any address unlike
other family members (which either support only a single fixed address,
or a limited range of addresses). So let's move that code out to a new file,
vectors.c. This will make sharing the rest of the trap.c code easier and
cleaner.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The differences between the mmu version of entry.h (entry_mm.h) and the
non-mmu version (entry_no.h) are not about the presence or use of an MMU at all.
The main changes are to support the ColdFire processors. The code for
trap entry and exit for all types of 68k processor outside coldfire is
the same.
So merge the files back to a single entry.h and share the common 68k
entry/exit code. Some changes are required for the non-mmu entry
handlers to adopt the differing macros for system call and interrupt
entry, but this is quite straightforward. The changes for the ColdFire
remove a couple of instructions for the separate a7 register case, and
are no worse for the older single a7 register case.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The few differences between the mmu and non-mmu kernel/Makefiles can
easily be handled inside of a single Makefile. Merge the 2 back into
a single Makefile.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Most of the build logic is the same for the mmu and non-mmu m68k targets.
Merge the top level architecture Makefiles back into a single Makefile.
For the most part this is just adding the non-mmu processor types and
their specific cflags and other options into the mmu Makefile.
Note that all the BOARD setting logic that was in the non-mmu Makefile
is completely removed. It was no longer being used at all.
This has been build and run tested on ColdFire targets and ARAnyM.
It has been build tested on all the m68k defconfig targets using a
gcc-4.5.1 based toolchain.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
The current mmu and non-mmu Kconfig files can be merged to form
a more general selection of options. The current break up of options
is due to the simple brute force merge from the m68k and m68knommu
arch directories.
Many of the options are not at all specific to having the MMU enabled
or not. They are actually associated with a particular CPU type or
platform type.
Ultimately, as we support all processors with the MMU disabled, we need
many of these options to be selectable without the MMU option enabled.
Likewise, some of the ColdFire processors that are currently only
supported with the MMU disabled do have MMU hardware, and will need
options selected on CPU type rather than on whether the MMU is disabled.
This patch removes the old mmu and non-mmu Kconfigs and instead breaks
up the configuration into four areas: cpu, machine, bus, devices.
The Kconfig.cpu lists all the options associated with selecting a CPU,
and includes options specific to each CPU type as well.
Kconfig.machine lists all options associated with selecting a machine
type. The selectable machines are almost always restricted by the chosen
CPU.
Kconfig.bus contains options associated with selecting bus types on the
various machine types. That includes PCI bus, PCMCIA bus, etc.
Kconfig.devices contains options for drivers and driver associated
options.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The problem has its root in the calculation of the set-port offsets (macro
MCFGPIO_SETR() in arch/m68k/include/asm/gpio.h); this assumes that all ports
have the same offset from the base port address (MCFGPIO_SETR), which is
defined in mcf520xsim.h as an alias of MCFGPIO_PSETR_BUSCTL. Because the BUSCTL
and BE ports do not have a set register (see MCF5208 Reference Manual, page
13-10, Table 13-3), the offset calculations went wrong.
Because the BE and BUSCTL ports do not seem useful in these parts, as they
lack a set register, I removed them and adapted the gpio chip bases which
are also used for the offset calculations. Now both setting and resetting
the chip selects work as expected from userland and from kernel space.
Signed-off-by: Peter Turczak <peter@turczak.de>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The original 68000 processors cannot copy 16-bit or larger quantities from
odd addresses. All newer members of the 68k family (including ColdFire)
can do this.
In the current memcpy implementation, after trying to align the destination
address to a 16-bit boundary, if we end up with an odd source address we go
off and try to copy multi-byte quantities from it. This will trap on the
68000.
The only solution if we end up with an odd source address is to copy the
whole memcpy region byte-wise. We only need to do this if we are supporting
original 68000 processors.
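A minimal, self-contained sketch of that approach (not the kernel's actual optimized memcpy); the CONFIG_M68000 guard and function name are illustrative:

#include <stddef.h>

void *sketch_memcpy(void *to, const void *from, size_t n)
{
	char *cto = to;
	const char *cfrom = from;

	/* Align the destination to a 16-bit boundary first. */
	if (((unsigned long)cto & 1) && n) {
		*cto++ = *cfrom++;
		n--;
	}

#ifdef CONFIG_M68000
	/* The original 68000 traps on multi-byte accesses from odd
	 * addresses, so if the source is still odd copy everything
	 * byte-wise. */
	if ((unsigned long)cfrom & 1) {
		while (n--)
			*cto++ = *cfrom++;
		return to;
	}
#endif

	/* The destination is now even, and the CPU can handle an odd
	 * source address: copy 16 bits at a time, then any trailing byte. */
	for (; n >= 2; n -= 2) {
		*(short *)cto = *(const short *)cfrom;
		cto += 2;
		cfrom += 2;
	}
	if (n)
		*cto = *cfrom;

	return to;
}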
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
If an incremental recovery was interrupted, a subsequent
re-add will result in a full recovery, even though an
incremental should be possible (seen with raid1).
Solve this problem by not updating the superblock on the
recovering device until the array is no longer degraded.
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrei Warkentin <andreiw@vmware.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When we add a device to an active array it can be meaningful to set
the 'insync' flag. This indicates that the device is in-sync with the
array except for locations recorded in the bitmap.
A bitmap-based recovery can then bring it completely in-sync.
Internally we move that flag to 'saved_raid_disk' but forgot to clear
In_sync like we do in add_new_disk.
So clear In_sync after moving its value to saved_raid_disk.
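An out-of-context, hedged sketch of the flag handling described; the struct below is a stand-in for md's internal struct md_rdev and the real hot-add context is omitted:

#include <linux/bitops.h>

struct sketch_rdev {
	unsigned long flags;
	int raid_disk;
	int saved_raid_disk;
};
#define SKETCH_IN_SYNC 0	/* placeholder for md's In_sync bit */

static void sketch_readd(struct sketch_rdev *rdev)
{
	if (test_bit(SKETCH_IN_SYNC, &rdev->flags)) {
		/* Remember the slot so a bitmap-based recovery can bring
		 * the device fully in sync ... */
		rdev->saved_raid_disk = rdev->raid_disk;
		/* ... but don't leave In_sync set while that recovery is
		 * still pending. */
		clear_bit(SKETCH_IN_SYNC, &rdev->flags);
	}
}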
Reported-by: Andrei Warkentin <andreiw@vmware.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Use the 32-bit value starting at offset 0x2d for displaying the firmware
version in ethtool. This should work for all current ixgbe HW.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
x25_find_listener does not check that the amount of call user data given
in the skb is big enough in per-socket comparisons, hence buffer
overreads may occur. Fix this by adding a check.
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Acked-by: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are multiple locations in the X.25 packet layer where a skb is
assumed to be of at least a certain size and that all its data is
currently available at skb->data. These assumptions are not checked,
hence buffer overreads may occur. Use pskb_may_pull to check these
minimal size assumptions and ensure that data is available at skb->data
when necessary, as well as use skb_copy_bits where needed.
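A generic sketch of the pattern being applied, not a specific hunk from the patch:

#include <linux/errno.h>
#include <linux/skbuff.h>

static int sketch_parse_header(struct sk_buff *skb, unsigned int hdrlen)
{
	/* Ensure hdrlen bytes are actually present and linear at
	 * skb->data before any of them are dereferenced. */
	if (!pskb_may_pull(skb, hdrlen))
		return -EINVAL;

	/* skb->data[0 .. hdrlen - 1] is now safe to read */
	return skb->data[0];
}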
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Acked-by: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
X.25 call user data is being copied in its entirety from incoming messages
without consideration to the size of the destination buffers, leading to
possible buffer overflows. Validate incoming call user data lengths before
these copies are performed.
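A hedged sketch of that kind of validation; the structure below stands in for the real X.25 address/facilities blocks, and 128 bytes mirrors the X25_MAX_CUD_LEN limit:

#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/string.h>

#define SKETCH_MAX_CUD_LEN 128	/* mirrors X25_MAX_CUD_LEN */

struct sketch_calluserdata {
	unsigned int cudlength;
	unsigned char cuddata[SKETCH_MAX_CUD_LEN];
};

static int sketch_store_cud(struct sketch_calluserdata *cud,
			    const struct sk_buff *skb, unsigned int len)
{
	/* Reject call user data that is longer than the destination
	 * buffer, or longer than what the skb actually contains,
	 * instead of copying it blindly. */
	if (len > SKETCH_MAX_CUD_LEN || len > skb->len)
		return -EINVAL;

	memcpy(cud->cuddata, skb->data, len);
	cud->cudlength = len;
	return 0;
}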
It appears this issue was noticed some time ago, however nothing seemed to
come of it: see http://www.spinics.net/lists/linux-x25/msg00043.html and
commit 8db09f26f9.
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
in6_dev_get(dev) takes a reference on struct inet6_dev, so we don't need
RCU locking in ndisc_constructor().
Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The BerliOS project, which currently hosts our mailing list, will
shut down at the end of the year. Take this chance to remove all
occurrences of the mailing list address from the source files.
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
While preparing the net flow caches, a failure may cause a potential
memory leak. Fix it.
Signed-off-by: Huajun Li <huajun.li.lee@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 903ab86d19 of 1 March this year ("udp: Add
lockless transmit path") introduced a new fast TX path that broke the checksum
coverage computation of UDP-lite, which so far depended on up->len (only set
if the socket is locked and 0 in the fast path).
Fixed by providing both fast- and slow-path computation of checksum coverage.
The latter can be removed when UDP(-lite)v6 also uses a lockless transmit path.
Reported-by: Thomas Volkert <thomas@homer-conferencing.com>
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The tcp_end field is not actually used by the hardware, so there
is no need to set it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add GRO support to the ehea driver.
v3:
[cascardo] no need to enable GRO, since it's enabled by default
[cascardo] vgrp was removed in the vlan cleanup
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation for adding GRO to ehea, remove LRO.
v3:
[cascardo] fixed conflict with vlan cleanup
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Switch to using ndo_get_stats64 to get 64bit statistics.
v3:
[cascardo] use rtnl_link_stats64 as port stats
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>