When we add a device to an active array it can be meaningful to set
the 'insync' flag. This indicates that the device is in-sync with the
array except for locations recorded in the bitmap.
A bitmap-based recovery can then bring it completely in-sync.
Internally we move that flag to 'saved_raid_disk' but forget to clear
In_sync as we do in add_new_disk.
So clear In_sync after moving its value to saved_raid_disk.
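As a minimal sketch of the resulting logic in the hot-add path (the rdev
field names follow md.c, but this is not the literal diff):

    if (test_bit(In_sync, &rdev->flags)) {
            /* remember that the device claims to be in-sync ... */
            rdev->saved_raid_disk = rdev->raid_disk;
            /* ... but clear In_sync so a bitmap-based recovery still runs */
            clear_bit(In_sync, &rdev->flags);
    }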
Reported-by: Andrei Warkentin <andreiw@vmware.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Use the 32-bit value starting at offset 0x2d for displaying the firmware
version in ethtool. This should work for all current ixgbe HW.
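A hedged sketch of what get_drvinfo can do with that value (the EEPROM
accessor and the second offset are assumptions, not the literal diff):

    u16 word_hi = 0, word_lo = 0;
    u32 fw_version;

    /* read the two EEPROM words starting at offset 0x2d and combine them */
    hw->eeprom.ops.read(hw, 0x2d, &word_hi);
    hw->eeprom.ops.read(hw, 0x2e, &word_lo);
    fw_version = ((u32)word_hi << 16) | word_lo;

    snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
             "0x%08x", fw_version);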
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Stephen Ko <stephen.s.ko@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
x25_find_listener does not check that the amount of call user data given
in the skb is large enough for the per-socket comparisons, so buffer
overreads may occur. Fix this by adding a length check.
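A sketch of the kind of check added (the x25_sock field names are
approximations):

    /* only compare call user data that is actually present in the skb */
    if (x25_sk(s)->cudmatchlength > 0 &&
        skb->len >= x25_sk(s)->cudmatchlength &&
        !memcmp(x25_sk(s)->calluserdata.cuddata, skb->data,
                x25_sk(s)->cudmatchlength)) {
            /* matched a listener expecting this call user data */
    }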
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Acked-by: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are multiple locations in the X.25 packet layer where a skb is
assumed to be of at least a certain size and that all its data is
currently available at skb->data. These assumptions are not checked,
hence buffer overreads may occur. Use pskb_may_pull to check these
minimal size assumptions and ensure that data is available at skb->data
when necessary, as well as use skb_copy_bits where needed.
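The basic pattern, for illustration (X25_STD_MIN_LEN is the usual minimal
header length; this is not the literal diff):

    /* make sure the minimal X.25 header is linear before touching it */
    if (!pskb_may_pull(skb, X25_STD_MIN_LEN))
            return 0;
    /* skb->data now has at least X25_STD_MIN_LEN bytes available;
     * anything beyond that is fetched with skb_copy_bits() as needed */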
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Acked-by: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
X.25 call user data is being copied in its entirety from incoming messages
without consideration to the size of the destination buffers, leading to
possible buffer overflows. Validate incoming call user data lengths before
these copies are performed.
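A sketch of the validation added before such a copy (label and field
names are illustrative; X25_MAX_CUD_LEN is the usual limit):

    /* reject call user data that would overflow the destination buffer */
    if (len > X25_MAX_CUD_LEN)
            goto out_clear_request;
    memcpy(calluserdata.cuddata, skb->data, len);
    calluserdata.cudlength = len;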
It appears this issue was noticed some time ago; however, nothing seemed
to come of it: see http://www.spinics.net/lists/linux-x25/msg00043.html and
commit 8db09f26f9.
Signed-off-by: Matthew Daley <mattjd@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Andrew Hendry <andrew.hendry@gmail.com>
Cc: stable <stable@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
in6_dev_get(dev) takes a reference on struct inet6_dev, so we don't need
RCU locking in ndisc_constructor().
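Sketch of the resulting pattern (the error value is illustrative):
in6_dev_get() takes its own reference, so no rcu_read_lock()/rcu_read_unlock()
pair is needed around it.

    struct inet6_dev *in6_dev = in6_dev_get(dev);

    if (!in6_dev)
            return -EINVAL;
    /* use in6_dev; in6_dev_put() drops the reference when done */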
Signed-off-by: Roy.Li <rongqing.li@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The BerliOS project, which currently hosts our mailing list, will
close at the end of the year. Take the chance and remove all
occurrences of the mailing list address from the source files.
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
If a failure occurs while preparing the net flow caches, memory that
was already allocated may be leaked. Fix the error path to free it.
Signed-off-by: Huajun Li <huajun.li.lee@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 903ab86d19 of 1 March this year ("udp: Add
lockless transmit path") introduced a new fast TX path that broke the checksum
coverage computation of UDP-lite, which so far depended on up->len (only set
if the socket is locked and 0 in the fast path).
Fixed by providing both fast- and slow-path computation of checksum coverage.
The latter can be removed when UDP(-lite)v6 also uses a lockless transmit path.
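A hedged sketch of the split described above (the fast-path helper name
is an assumption, not necessarily the final API):

    if (is_udplite) {
            if (up->len > 0)
                    /* slow path: socket is locked, up->len is valid */
                    csum = udplite_csum_outgoing(sk, skb);
            else
                    /* lockless fast path: derive coverage from the skb */
                    csum = udplite_csum(skb);
    }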
Reported-by: Thomas Volkert <thomas@homer-conferencing.com>
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The tcp_end field is not actually used by the hardware, so there
is no need to set it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add GRO support to the ehea driver.
v3:
[cascardo] no need to enable GRO, since it's enabled by default
[cascardo] vgrp was removed in the vlan cleanup
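The conversion essentially hands received skbs to the GRO layer (a sketch;
pr->napi is assumed to be the per-queue napi context):

    /* was: netif_receive_skb(skb); */
    napi_gro_receive(&pr->napi, skb);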
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation for adding GRO to ehea, remove LRO.
v3:
[cascardo] fixed conflict with vlan cleanup
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Switch to using ndo_get_stats64 to get 64bit statistics.
v3:
[cascardo] use rtnl_link_stats64 as port stats
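A minimal sketch of the new hook (which counters are filled in, and from
where, is illustrative):

    static struct rtnl_link_stats64 *ehea_get_stats64(struct net_device *dev,
                                            struct rtnl_link_stats64 *stats)
    {
            struct ehea_port *port = netdev_priv(dev);

            stats->rx_packets = port->stats.rx_packets;
            stats->rx_bytes   = port->stats.rx_bytes;
            stats->tx_packets = port->stats.tx_packets;
            stats->tx_bytes   = port->stats.tx_bytes;
            return stats;
    }

    /* wired up via .ndo_get_stats64 = ehea_get_stats64 in the netdev ops */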
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The queue macros are many levels deep and it makes it harder to
work your way through them when many of the versions are unused.
Remove the unused versions.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If a nonlinear skb fits within the immediate area, use skb_copy_bits
instead of copying the frags by hand.
v3:
[cascardo] fixed conflict with use of skb frag API
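Sketch of the simplification (the immediate-area size macro and field
names are assumptions):

    /* skb_copy_bits() walks both the linear area and the frags for us */
    if (skb->len <= SWQE2_MAX_IMM) {
            skb_copy_bits(skb, 0, imm_data, skb->len);
            swqe->immediate_data_length = skb->len;
    }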
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
write_swqe2_TSO and write_swqe2_nonTSO are almost identical.
For TSO we have to set the TSO and mss bits in the wqe and we only
put the header in the immediate area, no data. Collapse both
functions into write_swqe2_immediate.
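Rough outline of the merged helper (flag and field names are assumptions,
not the literal diff):

    static void write_swqe2_immediate(struct sk_buff *skb,
                                      struct ehea_swqe *swqe, u32 lkey)
    {
            int immediate_len = skb->len;

            if (skb_is_gso(skb)) {
                    swqe->tx_control |= EHEA_SWQE_TSO;
                    swqe->mss = skb_shinfo(skb)->gso_size;
                    /* TSO: only the headers go in the immediate area */
                    immediate_len = ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb);
            }

            skb_copy_bits(skb, 0, swqe->u.immdata_desc.immediate_data,
                          immediate_len);
            write_swqe2_data(skb, swqe, lkey, immediate_len);
    }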
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on a patch from Michael Ellerman, clean up a significant
portion of the transmit path. There was a lot of duplication here.
Even worse, we were always checksumming tx packets and ignoring the
skb->ip_summed field.
Also remove NETIF_F_FRAGLIST from dev->features; I'm not sure why
it was enabled.
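The key behavioural fix, as a sketch (the swqe flag names are assumptions):

    /* only request checksum insertion when the stack asked for it */
    if (skb->ip_summed == CHECKSUM_PARTIAL)
            swqe->tx_control |= EHEA_SWQE_IP_CHECKSUM |
                                EHEA_SWQE_TCP_CHECKSUM;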
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ehea adapter has a mode where it will avoid partial cacheline DMA
writes on receive by always padding packets to fall on a cacheline
boundary.
Unfortunately we currently aren't allocating enough space for a full
ethernet MTU packet to be rounded up, so this optimisation doesn't hit.
It's unfortunate that the next largest packet size exposed by the
hypervisor interface is 2kB, meaning our skb allocation comes out of a
4kB SLAB. However the performance increase due to this optimisation is
quite large and my TCP stream numbers increase from 900MB to 1000MB/sec.
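The idea, sketched with illustrative macro names: size the receive buffer
so a full-MTU frame rounded up to the next cacheline still fits.

    #define EHEA_MAX_FRAME   (1500 + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN)
    #define EHEA_RQ_PKT_SIZE ALIGN(EHEA_MAX_FRAME, SMP_CACHE_BYTES)

    skb = netdev_alloc_skb(dev, EHEA_RQ_PKT_SIZE);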
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We weren't enabling any VLAN features so we missed out on checksum
offload and TSO when using VLANs. Enable them.
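Sketch of the change (the exact flag set is illustrative):

    dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM |
                         NETIF_F_TSO | NETIF_F_HIGHDMA;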
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It seems like the ehea xmit routine and an ethtool change of TSO
mode could race, resulting in corrupt packets. Checking gso_size
is enough and we can use the helper function.
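The helper in question is skb_is_gso(), which only looks at gso_size, so
the decision no longer depends on dev->features (sketch; the swqe flag
names are assumptions):

    if (skb_is_gso(skb)) {
            swqe->tx_control |= EHEA_SWQE_TSO;
            swqe->mss = skb_shinfo(skb)->gso_size;
    }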
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The num_tx_qps module option allows a user to configure a different
number of tx and rx queues. Now that the networking stack is multiqueue
aware, it makes little sense to enable only the tx queues and not the
rx queues, so remove the option.
v3:
[cascardo] fixed conflict with get_stats change
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 18604c5485 (ehea: NAPI multi queue TX/RX path for SMP) added
driver specific logic for exiting napi mode. I'm not sure what it was
trying to solve, and it should be up to the network stack to decide when
we are done polling, so remove it.
v3:
[cascardo] Fixed extra parentheses.
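What remains is the standard completion pattern (sketch; the interrupt
re-enable step is device specific):

    if (rx_done < budget) {
            napi_complete(napi);
            /* re-enable the queue's receive/completion interrupts here */
    }
    return rx_done;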
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ehea driver had some multiqueue support but was missing the last
few years of networking stack improvements:
- Use skb_record_rx_queue to record which queue an skb came in on.
- Remove the driver specific netif_queue lock and use the networking
stack transmit lock instead.
- Remove the driver specific transmit queue hashing and use
skb_get_queue_mapping instead.
- Use netif_tx_{start|stop|wake}_queue where appropriate. We can also
remove pr->queue_stopped and just check the queue status directly.
- Print all 16 queues in the ethtool stats.
We now enable multiqueue by default since it has been a clear win in all
my testing so far.
v3:
[cascardo] fixed use_mcs parameter description
[cascardo] set ehea_ethtool_stats_keys as const
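Illustrative use of the stack helpers mentioned above (field names follow
the driver loosely; this is not the literal diff):

    /* rx: tell the stack which queue this skb arrived on */
    skb_record_rx_queue(skb, pr - &port->port_res[0]);

    /* tx: operate on the per-queue state the core already maintains */
    struct netdev_queue *txq =
            netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

    if (unlikely(atomic_read(&pr->swqe_avail) <= 1))
            netif_tx_stop_queue(txq);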
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the deprecated NETIF_F_LLTX feature. Since the network stack
now provides the locking we can remove the driver specific
pr->xmit_lock.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Previously, the device-driver matching mechanism depended on the
vme_device_id structure due to the need for a bind table per driver.
This method of matching is no longer used so this patch merges the
fields of struct vme_device_id into struct vme_dev. Since this also
renders the slot field meaningless, it has also been removed in this
patch.
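The resulting structure looks roughly like this (a sketch; only the fields
implied by the description are shown):

    struct vme_dev {
            int num;                   /* per-driver index, was in vme_device_id */
            struct vme_bridge *bridge; /* bus the device lives on */
            struct device dev;
            /* note: no slot field any more */
    };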
Signed-off-by: Manohar Vanga <manohar.vanga@cern.ch>
Cc: Martyn Welch <martyn.welch@ge.com>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
For jumper based boards (non VME64x), there is no mechanism
for detecting the card that is plugged into a specific slot. This
leads to issues in non-autodiscovery crates/cards when a card is
plugged into a slot that is "claimed" by a different driver. In
reality, there is no problem, but the driver rejects such a
configuration due to its dependence on the concept of slots.
This patch makes the concept of slots less critical and pushes the
driver match() to individual drivers (similar to what happens in the
ISA bus in drivers/base/isa.c). This allows drivers to register the
number of devices that they expect without any restrictions. Devices
in this new model are now formatted as $driver_name-$bus_id.$device_id
(as compared to the earlier vme-$bus_id.$slot_number).
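A sketch of what a per-driver match callback can now look like (driver
and constant names are hypothetical):

    static int mydrv_match(struct vme_dev *vdev)
    {
            /* accept up to MYDRV_MAX_DEVICES instances; vdev->num is the
             * per-driver index used in the $driver_name-$bus_id.$device_id
             * device name */
            return vdev->num < MYDRV_MAX_DEVICES;
    }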
This model also makes the device model more logical, as devices are only
registered when they actually exist, whereas earlier a set of devices was
created automatically regardless of whether they were actually present.
Another change introduced in this patch is that devices are now created
within the VME driver structure rather than in the VME bridge structure.
This way, things don't go haywire if the bridge driver is removed while
a driver is using it.
Signed-off-by: Manohar Vanga <manohar.vanga@cern.ch>
Cc: Martyn Welch <martyn.welch@ge.com>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Instead of using a vanilla 'struct device' for VME devices, add new
'struct vme_dev'. Modifications have been made to the VME framework
API as well as all in-tree VME drivers.
The new vme_dev structure has the following advantages over the
current model used by the driver:
* Driver functions (probe, remove) now receive a VME device
instead of a pointer to the bridge device (cleaner design)
* It's easier to differentiate API calls as bridge-based or
device-based (i.e. a cleaner interface).
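As a sketch of the first point above, a driver's probe now receives the
VME device directly (the old prototype shown for contrast is approximate):

    /* new style */
    static int mydrv_probe(struct vme_dev *vdev);

    /* previously, roughly:
     * static int mydrv_probe(struct device *dev, int bus, int slot); */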
Signed-off-by: Manohar Vanga <manohar.vanga@cern.ch>
Cc: Martyn Welch <martyn.welch@ge.com>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Very simple buffered reading. Did not provide a trigger as
the sysfs trigger already meets that requirement.
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The event generator is not very pretty but does the job and
allows this driver to look a lot more like a normal driver
than it otherwise would.
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The documentation explaining how to go about writing a driver is lagging
horribly, so here is another approach: an actual driver with
lots of explanatory comments.
Note it is currently minimal in that there are no events and no
buffer. With care they can probably be added in additional files
without messing up the clarity of what we have here.
V2: Addressed some of Manuel Stahl's feedback.
Fixed up kernel doc.
Added more general description.
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix a dumb lack of consideration of the effect of combining
the iio_device_unregister and iio_free_device calls into
one. There is no valid place to free some of the sysfs
array elements.
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Numerous drivers either had pointless includes of gpio.h
or should have been dependent on GENERIC_GPIO and were not.
Conversion of ads1210 to use array registration triggered
build failures that highlighted all was not well.
Signed-off-by: Jonathan Cameron <jic23@cam.ac.uk>
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Acked-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
zcache_do_preload() currently does a spin_trylock() on the
zcache_direct_reclaim_lock. Holding this lock intends to prevent
shrink_zcache_memory() from evicting zbud pages as a result
of a preload.
However, it also prevents two threads from
executing zcache_do_preload() at the same time. The first
thread will obtain the lock and the second thread's spin_trylock()
will fail (an aborted preload) causing the page to be either lost
(cleancache) or pushed out to the swap device (frontswap). It
also doesn't ensure that the call to shrink_zcache_memory() is
on the same thread as the call to zcache_do_preload().
Additionally, there is no need for this mechanism because all
zcache_do_preload() calls that come down from cleancache already
have PF_MEMALLOC set in the process flags which prevents
direct reclaim in the memory manager. If the zcache_do_preload()
call is done from the frontswap path, we _want_ reclaim to be
done (which it isn't right now).
This patch removes the zcache_direct_reclaim_lock and related
statistics in zcache.
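For reference, the guard being removed from zcache_do_preload() looked
roughly like this (the return value is illustrative):

    /* removed: serialised preloads and aborted the slow ones */
    if (!spin_trylock(&zcache_direct_reclaim_lock))
            return -EBUSY; /* aborted preload: page lost or pushed to swap */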
Based on v3.1-rc8
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reviewed-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This fixes a compilation error in nvec.c due to the missing module.h include.
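The fix is just the missing include:

    #include <linux/module.h>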
Signed-off-by: Marc Dietrich <marvin24@gmx.de>
Cc: Julian Andres Klode <jak@jak-linux.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The timeval[0-9] members were either not used at all or only used by
read-only code, so we can safely remove them and the code that used them.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Setting the key management scheme is done in SIOCSIWAUTH, so
no need to do anything in SIOCSIWGENIE.
Fix up function name.
Signed-off-by: David Kilroy <kilroyd@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Handle more cases in IW_AUTH.
Avoid reporting errors (invalid parameter) on operations that we
can't do anything with.
Return -EINPROGRESS from some operations to get wpa_supplicant to
batch and commit changes.
In other operations apply the changes immediately.
Avoid writing WEP keys from the commit handler when WEP is not
being used.
Accept WPA_VERSION_DISABLED, which is received from wpa_supplicant
during WEP.
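The batching convention, sketched (parameter handling is illustrative):

    switch (param->flags & IW_AUTH_INDEX) {
    case IW_AUTH_WPA_VERSION:
            priv->wpa_enabled = (param->value != IW_AUTH_WPA_VERSION_DISABLED);
            /* don't touch the hardware yet; -EINPROGRESS asks for a
             * commit, so wpa_supplicant's changes get batched */
            return -EINPROGRESS;
    default:
            /* don't error on attributes we can't do anything with */
            return 0;
    }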
Signed-off-by: David Kilroy <kilroyd@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Share logic between encodeext and encode, so that we can handle
subtle differences between them (implied set_tx), and clear the
appropriate keys if you attempt to switch straight from WPA to
WEP and vice versa.
Also reinstate the TX buffer flush, and ensure the key index is
written to the card in little-endian order.
Signed-off-by: David Kilroy <kilroyd@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Report the IE using the appropriate event instead of a custom one.
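Sketch of the standard reporting (buffer names are placeholders):

    union iwreq_data wrqu;

    memset(&wrqu, 0, sizeof(wrqu));
    wrqu.data.length = ie_len;
    wireless_send_event(dev, IWEVGENIE, &wrqu, ie_buf);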
Signed-off-by: David Kilroy <kilroyd@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
These macros don't map to anything different. Just remove them.
Signed-off-by: David Kilroy <kilroyd@googlemail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>