This is especially important in cases where the kernel allocates a new
structure and expects a field to be set from a netlink attribute. If such an
attribute is shorter than expected, the rest of the field is left containing
previous data. When such a field is read back by user space, kernel memory
content is leaked.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Benc says:
====================
netlink: access functions for IP address attributes
There are many places that read or write IP addresses to netlink attributes.
With IPv6 addresses, every such place currently has to use generic nla_put
and nla_memcpy. Implementing IPv6 address access functions simplifies things
and makes the code more intelligible. IPv4 address access functions have less
value, but it is better to be consistent between IPv6 and IPv4, and they still
serve as documentation.
The conversion is straightforward and the resulting patches are not that
large, thus I kept all the changes in the patches that introduce the access
functions. If anyone prefers to split the definition of access functions and
the conversion and/or break it out by network protocols, please let me know.
While doing the conversion, I came across ugly typecasting in
inetpeer_addr_base and xfrm_address_t when dealing with IPv6 addresses.
Instead of introducing more of this, I cleaned it up. Those are the first
two patches, serving as a prerequisite to the latter two.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Those are counterparts to nla_put_in_addr and nla_put_in6_addr.
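As a hedged sketch of what such getters boil down to (function names here are
placeholders; see include/net/netlink.h for the real
nla_get_in_addr/nla_get_in6_addr):

#include <net/netlink.h>
#include <linux/in6.h>

/* IPv4: the attribute payload is a single __be32. */
static inline __be32 example_get_in_addr(const struct nlattr *nla)
{
	return nla_get_be32(nla);
}

/* IPv6: copy the 16-byte payload into a struct in6_addr. */
static inline struct in6_addr example_get_in6_addr(const struct nlattr *nla)
{
	struct in6_addr tmp;

	nla_memcpy(&tmp, nla, sizeof(tmp));
	return tmp;
}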
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
IP addresses are often stored in netlink attributes. Add generic functions
to do that.
For nla_put_in_addr, it would be nicer to pass struct in_addr, but that type is
not used universally throughout the kernel; in far too many places, __be32 is
used to store IPv4 addresses.
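A hedged sketch of what the put helpers amount to (placeholder names; the real
nla_put_in_addr/nla_put_in6_addr live in include/net/netlink.h):

#include <net/netlink.h>
#include <linux/in6.h>

static inline int example_put_in_addr(struct sk_buff *skb, int attrtype,
				      __be32 addr)
{
	return nla_put_be32(skb, attrtype, addr);
}

static inline int example_put_in6_addr(struct sk_buff *skb, int attrtype,
				       const struct in6_addr *addr)
{
	return nla_put(skb, attrtype, sizeof(*addr), addr);
}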
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In many places, the a6 field is cast to struct in6_addr. As the
fields are in a union anyway, just add an in6_addr member to the union and
get rid of the casting.
Modifying the uapi header is okay; the union still has the same size.
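A hedged sketch of the union change (illustrative; the real xfrm_address_t is
defined in the uapi header): struct in6_addr is 16 bytes, the same as
__be32 a6[4], so the union size is unchanged while callers can use ->in6
directly instead of casting ->a6.

#include <linux/types.h>
#include <linux/in6.h>

typedef union {
	__be32		a4;
	__be32		a6[4];
	struct in6_addr	in6;	/* new member, same 16-byte footprint */
} example_xfrm_address_t;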
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In many places, the a6 field is cast to struct in6_addr. As the
fields are in a union anyway, just add an in6_addr member to the union and get
rid of the casting.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some lines in vxlan code are indented by 7 spaces instead of a tab.
Fixes: e4c7ed4153 ("vxlan: add ipv6 support")
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ian Morris says:
====================
ipv6: coding style - comparisons with NULL
The following patches address some coding style issues only. No
functional changes and no changes detected by objdiff.
The IPv6 code uses multiple different styles when comparing with NULL
(i.e. x == NULL and !x, as well as x != NULL and x). Generally the
latter forms are preferred according to checkpatch, so these changes
align the code to that style.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The ipv6 code uses a mixture of coding styles. In some instances the check for
a non-NULL pointer is done as x != NULL and sometimes as x. x is preferred
according to checkpatch, and this patch makes the code consistent by adopting
the latter form.
No changes detected by objdiff.
Signed-off-by: Ian Morris <ipm@chirality.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ipv6 code uses a mixture of coding styles. In some instances the check for
a NULL pointer is done as x == NULL and sometimes as !x. !x is preferred
according to checkpatch, and this patch makes the code consistent by adopting
the latter form.
No changes detected by objdiff.
Signed-off-by: Ian Morris <ipm@chirality.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Before commit 3900f29021 ("bonding: slight
optimizztion for bond_slave_override()"), the override logic was to send packets
with a non-zero queue_id through the slave with the corresponding queue_id, under
two conditions only - that the slave can transmit and that it is up.
The above mentioned commit changed this logic by introducing an additional
condition - whether the bond is active (indirectly, via slave_can_tx and
later bond_is_active_slave) - which prevents the user from implementing more
complex policies as described in Documentation/networking/bonding.txt.
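A hypothetical sketch of the intended override condition (struct and helper
names invented for illustration); the point is that only "slave is up" and
"slave can transmit" gate the match, not whether the slave is the bond's
active one:

#include <linux/types.h>

struct example_slave {
	u16	queue_id;
	bool	link_up;
	bool	can_tx;
};

static bool example_slave_matches_override(const struct example_slave *slave,
					    u16 wanted_queue_id)
{
	return wanted_queue_id &&
	       slave->queue_id == wanted_queue_id &&
	       slave->link_up && slave->can_tx;
}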
Signed-off-by: Anton Nayshtut <anton@swortex.com>
Signed-off-by: Alexey Bogoslavsky <alexey@swortex.com>
Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Document the Smack bringup features. Update the proper location for
mounting smackfs from /smack to /sys/fs/smackfs. Fix some spelling errors.
Suggest the use of the load2 interface instead of the load interface.
Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
Yuval Mintz says:
====================
bnx2x: link and protection changes
This patch series contains 2 small additions to link configuration,
as well as a safeguard against loading the device on hardware in a failed
state.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
It's possible that due to errors [either on PCI or on the device itself]
register reads will fail, returning all-Fs.
This adds a check as early as possible so that the driver will not read junk
values and make incorrect probe decisions based on them; instead, it
gracefully fails the probe.
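A hypothetical sketch of such an early sanity check (register offset and names
invented; readl() is the standard MMIO accessor): if the PCI bus or the chip is
dead, the read comes back as 0xffffffff and the probe should bail out instead
of acting on junk.

#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_PROBE_REG	0x0	/* invented offset, for illustration only */

static bool example_chip_is_accessible(void __iomem *regview)
{
	u32 val = readl(regview + EXAMPLE_PROBE_REG);

	return val != 0xffffffff;
}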
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The number of link changes is now stored in shared memory [by all possible
link owners], for management use [as well as for possible debug information
in dumps].
Signed-off-by: Yaniv Rosner <Yaniv.Rosner@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Enable controlling Post2, coeff, IPreDriver and IFir according to NVRAM setup.
Signed-off-by: Yaniv Rosner <Yaniv.Rosner@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Benc says:
====================
ipvlan: list corruption and rcu fixes
This patch set fixes different issues leading to corrupted lists and
incorrect rcu usage.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When an ipvlan interface is down, its addresses are not on the hash list.
Fix the checks for existence of addresses so that they do not depend on the
hash list; walk through all interface addresses instead.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Adding to and removing from the 'ipvlans' list is already done using _rcu list
operations.
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an ipvlan interface with IP addresses attached is brought down and then
deleted, the assigned addresses are deleted twice from the address hash
list, first on interface down and a second time on link deletion.
Similarly, when an address is added while the interface is down, it is added
a second time once the interface is brought up.
When the interface is down, the addresses should be kept off the hash list
for performance reasons. Ensure this is the case, which also fixes the
double-add problem. To fix the double free, check whether the address is
hashed before removing it.
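A hypothetical sketch of the double-free guard (struct name invented;
hlist_unhashed()/hlist_del_init_rcu() are the standard list helpers): an
address is only on the hash list while the interface is up, so only unhash it
when it is actually hashed.

#include <linux/rculist.h>

struct example_addr {
	struct hlist_node hlnode;
};

static void example_ht_addr_del(struct example_addr *addr)
{
	if (!hlist_unhashed(&addr->hlnode))
		hlist_del_init_rcu(&addr->hlnode);
}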
Reported-by: Dan Williams <dcbw@redhat.com>
Signed-off-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While fixing a recent issue I noticed that we are doing some unnecessary
work inside the loop for ip_fib_net_exit. As such I am pulling out the
initialization to NULL for the locally stored fib_local, fib_main, and
fib_default.
In addition I am restoring the original code for flushing the table as
there is no need to split up the fib_table_flush and hlist_del work since
the code for packing the tnodes with multiple key vectors was dropped.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This fixes the following warning:
BUG: sleeping function called from invalid context at mm/slub.c:1268
in_atomic(): 1, irqs_disabled(): 0, pid: 6, name: kworker/u8:0
INFO: lockdep is turned off.
CPU: 3 PID: 6 Comm: kworker/u8:0 Tainted: G W 4.0.0-rc5+ #895
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Workqueue: netns cleanup_net
0000000000000006 ffff88011953fa68 ffffffff81a203b6 000000002c3a2c39
ffff88011952a680 ffff88011953fa98 ffffffff8109daf0 ffff8801186c6aa8
ffffffff81fbc9e5 00000000000004f4 0000000000000000 ffff88011953fac8
Call Trace:
[<ffffffff81a203b6>] dump_stack+0x4c/0x65
[<ffffffff8109daf0>] ___might_sleep+0x1c3/0x1cb
[<ffffffff8109db70>] __might_sleep+0x78/0x80
[<ffffffff8117a60e>] slab_pre_alloc_hook+0x31/0x8f
[<ffffffff8117d4f6>] __kmalloc+0x69/0x14e
[<ffffffff818ed0e1>] ? kzalloc.constprop.20+0xe/0x10
[<ffffffff818ed0e1>] kzalloc.constprop.20+0xe/0x10
[<ffffffff818ef622>] fib_trie_table+0x27/0x8b
[<ffffffff818ef6bd>] fib_trie_unmerge+0x37/0x2a6
[<ffffffff810b06e1>] ? arch_local_irq_save+0x9/0xc
[<ffffffff818e9793>] fib_unmerge+0x2d/0xb3
[<ffffffff818f5f56>] fib4_rule_delete+0x1f/0x52
[<ffffffff817f1c3f>] ? fib_rules_unregister+0x30/0xb2
[<ffffffff817f1c8b>] fib_rules_unregister+0x7c/0xb2
[<ffffffff818f64a1>] fib4_rules_exit+0x15/0x18
[<ffffffff818e8c0a>] ip_fib_net_exit+0x23/0xf2
[<ffffffff818e91f8>] fib_net_exit+0x32/0x36
[<ffffffff817c8352>] ops_exit_list+0x45/0x57
[<ffffffff817c8d3d>] cleanup_net+0x13c/0x1cd
[<ffffffff8108b05d>] process_one_work+0x255/0x4ad
[<ffffffff8108af69>] ? process_one_work+0x161/0x4ad
[<ffffffff8108b4b1>] worker_thread+0x1cd/0x2ab
[<ffffffff8108b2e4>] ? process_scheduled_works+0x2f/0x2f
[<ffffffff81090686>] kthread+0xd4/0xdc
[<ffffffff8109ec8f>] ? local_clock+0x19/0x22
[<ffffffff810905b2>] ? __kthread_parkme+0x83/0x83
[<ffffffff81a2c0c8>] ret_from_fork+0x58/0x90
[<ffffffff810905b2>] ? __kthread_parkme+0x83/0x83
The issue was that, as part of the exit path, the default rules were being
deleted, which resulted in the local trie being unmerged. By moving the
freeing of the FIB tables up, we can avoid the unmerge, since there is no
local table left when we call the fib4_rules_exit function.
Fixes: 0ddcf43d5d ("ipv4: FIB Local/MAIN table collapse")
Reported-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On Armada 38x SoCs, under heavy I/O load, the system hangs when CPU
Idle is enabled. Until a solution to this issue is found, this patch
disables the CPU Idle support for this SoC.
As CPU hotplug support also uses some of the CPU Idle functions, it is
affected by the same issue. This patch therefore disables it as well for
the Armada 38x SoCs.
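A hypothetical sketch of how such a quirk can be keyed off the SoC (function
name invented; "marvell,armada380" is the compatible string used by the
Armada 38x device trees):

#include <linux/of.h>

static bool example_skip_cpuidle(void)
{
	/* Armada 38x: skip cpuidle/hotplug setup until the hang under
	 * heavy I/O is resolved.
	 */
	return of_machine_is_compatible("marvell,armada380");
}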
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: <stable@vger.kernel.org> # v3.17 +
Johan Hedberg says:
====================
pull request: bluetooth-next 2015-03-27
Here's another set of Bluetooth & 802.15.4 patches for 4.1:
- New API to control LE advertising data (i.e. peripheral role)
- mac802154 & at86rf230 cleanups
- Support for toggling quirks from debugfs (useful for testing)
- Memory leak fix for LE scanning
- Extra version info reading support for Broadcom controllers
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Usually the admin queue depth of 64 is plenty, but for some use cases we
really need it larger. Examples are use cases like MAT, where you have
to touch all of NAND for init/format like purposes. In those cases, we
see a good 2x increase with an increased queue depth.
Signed-off-by: Jens Axboe <axboe@fb.com>
Acked-by: Keith Busch <keith.busch@intel.com>
PRP list calculation is supposed to be based on the device's page size.
Systems with a page size larger than the device's page size cause corruption
of the namespace as well as of system memory without this fix.
Systems like x86 might not experience this issue because they use a
PAGE_SIZE of 4K, whereas powerpc uses a PAGE_SIZE of 64K, while the NVMe
device's page size varies depending on the vendor.
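A hypothetical sketch of the sizing rule (helper name invented): each PRP
entry covers one device page, so the entry count must be derived from the
device's page size rather than the CPU's PAGE_SIZE; on a 64K-page powerpc host
talking to a 4K-page device, one host page spans 16 PRP entries.

#include <linux/kernel.h>

static unsigned int example_nprps(unsigned int xfer_len,
				  unsigned int dev_page_size)
{
	/* dev_page_size comes from the controller (CC.MPS), not PAGE_SIZE */
	return DIV_ROUND_UP(xfer_len, dev_page_size);
}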
Signed-off-by: Murali Iyer <mniyer@us.ibm.com>
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
The driver may issue commands to a device that may never return, so its
request_queue could always have active requests while the controller is
running. Waiting for the queue to freeze could block forever, which is
what blk-mq's hot cpu notification handler was doing when nvme drives
were in use.
This has the nvme driver reserve the asynchronous event command's tag and
not keep the request active. We can't have more than
one since the request is released back to the request_queue before the
command is completed. Having only one avoids potential tag collisions,
and reserving the tag for this purpose prevents other admin tasks from
reusing the tag.
I also couldn't think of a scenario where issuing AEN requests single
depth is worse than issuing them in batches, so I don't think we lose
anything with this change.
As an added bonus, doing it this way removes "Cancelling I/O" warnings
observed when unbinding the nvme driver from a device.
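A hypothetical sketch of the tag-set setup (values illustrative; struct
blk_mq_tag_set really does have queue_depth and reserved_tags fields): one
reserved tag is carved out for the async event request, so it cannot collide
with or starve normal admin commands.

#include <linux/blk-mq.h>

static void example_init_admin_tagset(struct blk_mq_tag_set *set)
{
	set->queue_depth = 255;		/* illustrative admin queue depth */
	set->reserved_tags = 1;		/* the AEN request always uses this tag */
}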
Reported-by: Yigal Korman <yigal@plexistor.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
This fixes a race accessing an invalid address when a controller's admin
queue is in use while a reset due to failure or hot removal occurs. The
admin queue will be frozen to prevent new users from entering prior to
the doorbell queue being unmapped.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Fixed two warnings in e1000e and igb: when switching to timespec64,
some printf formats no longer matched. In these cases the new type is
actually __kernel_time_t, which is __kernel_long_t, which unfortunately
can be either "long" or "long long". So to solve this I cast the
arguments to "long long". -DaveM
Richard Cochran says:
====================
ptp: get ready for 2038
This series converts the core driver methods of the PTP Hardware Clock
(PHC) subsystem to use the 64 bit version of the timespec structure,
making the core API ready for the year 2038.
In addition, I reviewed how each driver and device represents the time
value at the hardware register level. Most of the drivers are ready,
but a few will need some work before the year 2038, as shown:
Patch  Driver
------------------------------------------------
  12   drivers/net/ethernet/intel/igb/igb_ptp.c
  15 ? drivers/net/ethernet/sfc/ptp.c
  16   drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
The commit log messages document how each driver is ready or why it is
not ready. For patch 15, I could not easily find out the hardware
representation of the time value, and so the SFC maintainers will have
to review their low level code in order to resolve any remaining
issues.
* ChangeLog
** V3
- dp83640: use timespec64 throughout per Arnd's suggestion
- tilegx: use timespec64 throughout per Chris' suggestion
- add Jeff's acked-bys
** V2
- use the new methods in the posix clock code right away (patch #3)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The current implementations of bitbang_txrx_be_cpha0 and
bitbang_txrx_be_cpha1 always call setmosi. That results in several
unnecessary calls into gpiolib when the level of the GPIO does not
actually have to change.
This patch changes the routines to remember the last GPIO level
and only call setmosi if a change has to be made. This
improves the transfer throughput.
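A hypothetical sketch of the caching idea (type and helper invented; the real
change lives in the bitbang txrx helpers): remember the last MOSI level and
only call into gpiolib when the next bit actually differs.

struct example_mosi_cache {
	int last_level;	/* -1 = unknown, forces the first write */
};

static void example_setmosi_cached(struct example_mosi_cache *c, int level,
				   void (*setmosi)(int level))
{
	if (c->last_level != level) {
		setmosi(level);		/* only touch gpiolib on a change */
		c->last_level = level;
	}
}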
Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
POSIX says that exit takes an unsigned integer between 0 and 255, so
using -1 doesn't work on POSIX shells.
There is already a well-defined failure code, $FAIL (1), so use that.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
If the stack tracer (CONFIG_STACK_TRACER) is disabled, the
fgraph-filter-stack test blows chunks:
[8] ftrace - function graph filters with stack tracer [FAIL]
+ reset_tracer
+ echo nop
./ftracetest: 19: /home/michael/selftests/ftrace/test.d/ftrace/fgraph-filter-stack.tc:
cannot create /proc/sys/kernel/stack_tracer_enabled: Directory nonexistent
Fix it by checking if the proc file exists before echoing to it. With
the patch applied it fails correctly with:
[8] ftrace - function graph filters with stack tracer [UNSUPPORTED]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Use the normal return values for bool functions
Signed-off-by: Joe Perches <joe@perches.com>
Message-Id: <9f593eb2f43b456851cd73f7ed09654ca58fb570.1427759009.git.joe@perches.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
DM multipath is the only caller of blk_lld_busy() -- which calls a
queue's lld_busy_fn hook. Request-based DM doesn't support stacking
multipath devices so there is no reason to register the lld_busy_fn hook
on a multipath device's queue using blk_queue_lld_busy().
As such, remove functions dm_lld_busy and dm_table_any_busy_target.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
__dm_get_module_param() could be useful for future DM module parameters
besides those related to "reserved_ios".
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Writeback takes out a lock on the cache block, so it will increase the
latency of any concurrent I/O.
This patch works by placing 2 sentinel objects on each level of the
multiqueues. Every WRITEBACK_PERIOD the oldest sentinel gets moved to
the newest end of the queue level.
When looking for writeback work:
if less than 25% of the cache is clean:
we select the oldest object with the lowest hit count
otherwise:
we select the oldest object that is not past a writeback sentinel.
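A hypothetical sketch of the rotating sentinels (names invented): every
WRITEBACK_PERIOD the older of the two sentinels on a level is moved to the
newest end, so anything still queued behind the remaining sentinel has been
idle for at least one period and is a candidate for writeback.

#include <linux/list.h>
#include <linux/jiffies.h>

struct example_sentinel {
	struct list_head list;
	unsigned long queued;	/* jiffies when last moved to the back */
};

static void example_rotate_sentinels(struct list_head *level,
				     struct example_sentinel *s0,
				     struct example_sentinel *s1)
{
	struct example_sentinel *oldest =
		time_before(s0->queued, s1->queued) ? s0 : s1;

	list_move_tail(&oldest->list, level);
	oldest->queued = jiffies;
}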
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
A sentinel object is placed on each level of the multiqueues. When an
object is hit it is requeued behind the sentinel. When the tick is
incremented we iterate through all objects behind the sentinel and
update the hit_count, then reposition the sentinel at the very back.
This saves memory by avoiding tracking the tick explicitly for every
struct entry object in the multiqueues.
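A hypothetical sketch of the tick handling (names invented): entries hit
during the current tick are requeued behind the sentinel, so at the end of the
tick everything after the sentinel gets its hit_count bumped and the sentinel
returns to the tail - no per-entry tick field needed.

#include <linux/list.h>

struct example_entry {
	struct list_head list;
	unsigned int hit_count;
};

static void example_end_tick(struct list_head *level,
			     struct example_entry *sentinel)
{
	struct example_entry *e = sentinel;

	/* every entry queued behind the sentinel was hit this tick */
	list_for_each_entry_continue(e, level, list)
		e->hit_count++;

	/* start the next tick with the sentinel at the very back */
	list_move_tail(&sentinel->list, level);
}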
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
queue_shift_down() didn't adjust the hit_counts to the new levels, so it
just had the effect of scrambling levels.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Small optimisation: queue_empty() no longer needs to walk all levels of
the multiqueue.
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Use a single slab cache to allocate a mempool for each dirty-log.
This _should_ eliminate DM's need for io_schedule_timeout() in
mempool_alloc(); so io_schedule() should be sufficient now.
Also, rename struct flush_entry to dm_dirty_log_flush_entry to allow
KMEM_CACHE() to create a meaningful global name for the slab cache.
Also, eliminate some holes in struct log_c by rearranging members.
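A hypothetical sketch of the slab cache setup (fields illustrative):
KMEM_CACHE() derives the cache name from the struct name, which is why the
rename from struct flush_entry to dm_dirty_log_flush_entry yields a meaningful
entry in /proc/slabinfo.

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/init.h>
#include <linux/errno.h>

struct dm_dirty_log_flush_entry {
	int type;
	sector_t region;
	struct list_head list;
};

static struct kmem_cache *_flush_entry_cache;

static int __init example_dirty_log_init(void)
{
	_flush_entry_cache = KMEM_CACHE(dm_dirty_log_flush_entry, 0);
	return _flush_entry_cache ? 0 : -ENOMEM;
}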
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Heinz Mauelshagen <heinzm@redhat.com>
All of the PHC drivers have been converted to the new methods. This patch
converts the three remaining callers within the core code and removes the
older methods for good. As a result, the core PHC code is ready for the
year 2038. However, some of the PHC drivers are not quite ready yet.
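A minimal driver-side sketch of the 64-bit methods (hardware access stubbed
out; gettime64/settime64 are the struct ptp_clock_info hooks this series
introduces):

#include <linux/ptp_clock_kernel.h>

static int example_gettime64(struct ptp_clock_info *ptp,
			     struct timespec64 *ts)
{
	u64 ns = 0;	/* read the hardware counter here */

	*ts = ns_to_timespec64(ns);
	return 0;
}

static int example_settime64(struct ptp_clock_info *ptp,
			     const struct timespec64 *ts)
{
	s64 ns = timespec64_to_ns(ts);

	(void)ns;	/* program this value into the hardware */
	return 0;
}

static struct ptp_clock_info example_caps = {
	.name      = "example",
	.gettime64 = example_gettime64,
	.settime64 = example_settime64,
};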
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The device has a 64 bit clock register, where each clock tick is 32
nanoseconds, and so with this patch the driver is ready for the year
2038.
Compile tested only.
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The device has a 64 bit clock register, where each clock tick is 16
nanoseconds, and so with this patch the driver is ready for the year
2038.
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This device stores the number of seconds in a 32 bit register, and the
stored value is unsigned. Therefore this driver and device are ready
for the year 2038. However, more work will be needed prior to 2106, when
an unsigned 32-bit seconds counter wraps (2^32 seconds is roughly 136
years after the 1970 epoch).
Compile tested only.
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>