Merge tag 'pm+acpi-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
"The rework of backlight interface selection API from Hans de Goede
stands out from the number of commits and the number of affected
places perspective. The cpufreq core fixes from Viresh Kumar are
quite significant too as far as the number of commits goes and because
they should reduce CPU online/offline overhead quite a bit in the
majority of cases.
From the new features point of view, the ACPICA update (to upstream
revision 20150515) adding support for new ACPI 6 material is the one
that matters the most, as some significant new features will be based
on it going forward.  Also included are an update of the ACPI device
power management core to follow ACPI 6 (which in turn reflects the
Windows device PM implementation), a PM core extension to support
wakeup interrupts in a more generic way, and support for the ACPI _CCA
device configuration object.
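As an illustration of the more generic wakeup interrupt support, here
is a minimal sketch of how a driver can hand a dedicated wakeup line
over to the PM core; the platform device and IRQ index are
hypothetical, while dev_pm_set_dedicated_wake_irq() and
dev_pm_clear_wake_irq() are the new interfaces:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeirq.h>
#include <linux/pm_wakeup.h>

static int foo_probe(struct platform_device *pdev)
{
        /* Hypothetical: IRQ resource 1 is the dedicated wakeup line. */
        int irq = platform_get_irq(pdev, 1);
        int ret;

        if (irq < 0)
                return irq;

        /* The device must be wakeup capable before the handover. */
        device_init_wakeup(&pdev->dev, true);

        /*
         * From now on the PM core arms and disarms the wake IRQ
         * around runtime suspend and system sleep on our behalf.
         */
        ret = dev_pm_set_dedicated_wake_irq(&pdev->dev, irq);
        if (ret)
                return ret;

        pm_runtime_enable(&pdev->dev);
        return 0;
}

static int foo_remove(struct platform_device *pdev)
{
        pm_runtime_disable(&pdev->dev);
        dev_pm_clear_wake_irq(&pdev->dev);
        device_init_wakeup(&pdev->dev, false);
        return 0;
}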
The rest is mostly fixes and cleanups all over, plus some
documentation updates, including new Device Tree bindings for
Operating Performance Points (illustrated just below).
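The bindings themselves live in the device tree; from C, the
resulting table is consumed through the existing OPP library.  A
rough sketch of the consumer side, with made-up frequencies and
voltages (dev_pm_opp_add() mirrors what the DT parsing code does for
each table entry):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/rcupdate.h>

static int foo_init_opps(struct device *dev)
{
        struct dev_pm_opp *opp;
        unsigned long volt;
        int ret;

        /* Two illustrative entries: 1.0 GHz @ 975 mV, 1.2 GHz @ 1075 mV. */
        ret = dev_pm_opp_add(dev, 1000000000, 975000);
        if (ret)
                return ret;
        ret = dev_pm_opp_add(dev, 1200000000, 1075000);
        if (ret)
                return ret;

        /* OPP lookups are RCU protected in this kernel generation. */
        rcu_read_lock();
        opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
        if (IS_ERR(opp)) {
                rcu_read_unlock();
                return PTR_ERR(opp);
        }
        volt = dev_pm_opp_get_voltage(opp);
        rcu_read_unlock();

        dev_info(dev, "1 GHz OPP at %lu uV\n", volt);
        return 0;
}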
There is one fix for a regression introduced in the 4.1 cycle, but it
adds quite a number of lines of code; it wasn't really ready before
Thursday, and you were on vacation, so I refrained from pushing it at
the last minute for 4.1.
Specifics:
- ACPICA update to upstream revision 20150515 including basic support
for ACPI 6 features: new ACPI tables introduced by ACPI 6 (STAO,
XENV, WPBT, NFIT, IORT), changes related to the other tables (DRTM,
FADT, LPIT, MADT), new predefined names (_BTH, _CR3, _DSD, _LPI,
_MTL, _PRR, _RDI, _RST, _TFP, _TSN), fixes and cleanups (Bob Moore,
Lv Zheng).
- ACPI device power management core code update to follow ACPI 6
which reflects the ACPI device power management implementation in
Windows (Rafael J Wysocki).
- rework of the backlight interface selection logic to reduce the
number of kernel command line options and improve the handling of
DMI quirks that may be involved in that and to make the code
generally more straightforward (Hans de Goede).
- fixes for the ACPI Embedded Controller (EC) driver related to the
handling of EC transactions (Lv Zheng).
- fix for a regression related to the ACPI resources management and
resulting from a recent change of ACPI initialization code ordering
(Rafael J Wysocki).
- fix for a system initialization regression related to ACPI
introduced during the 3.14 cycle and caused by running the code
that switches the platform over to the ACPI mode too early in the
initialization sequence (Rafael J Wysocki).
- support for the ACPI _CCA device configuration object related to
DMA cache coherence (Suravee Suthikulpanit).
- ACPI/APEI fixes and cleanups (Jiri Kosina, Borislav Petkov).
- ACPI battery driver cleanups (Luis Henriques, Mathias Krause).
- ACPI processor driver cleanups (Hanjun Guo).
- cleanups and documentation update related to the ACPI device
properties interface based on _DSD (Rafael J Wysocki).
- ACPI device power management fixes (Rafael J Wysocki).
- assorted cleanups related to ACPI (Dominik Brodowski, Fabian
Frederick, Lorenzo Pieralisi, Mathias Krause, Rafael J Wysocki).
- fix for a long-standing issue causing General Protection Faults to
be generated occasionally on return to user space after resume from
ACPI-based suspend-to-RAM on 32-bit x86 (Ingo Molnar).
- fix to make the suspend core code return -EBUSY consistently in all
cases when system suspend is aborted due to wakeup detection (Ruchi
Kandoi).
- support for automated device wakeup IRQ handling allowing drivers
to make their PM support more straightforward (Tony Lindgren).
- new tracepoints for suspend-to-idle tracing and rework of the
prepare/complete callbacks tracing in the PM core (Todd E Brandt,
Rafael J Wysocki).
- wakeup sources framework enhancements (Jin Qian).
- new macro for noirq system PM callbacks (Grygorii Strashko); see
the sketch after this list.
- assorted cleanups related to system suspend (Rafael J Wysocki).
- cpuidle core cleanups to make the code more efficient (Rafael J
Wysocki).
- powernv/pseries cpuidle driver update (Shilpasri G Bhat).
- cpufreq core fixes related to CPU online/offline that should reduce
the overhead of these operations quite a bit, unless the CPU in
question is physically going away (Viresh Kumar, Saravana Kannan).
- serialization of cpufreq governor callbacks to avoid race
conditions in some cases (Viresh Kumar).
- intel_pstate driver fixes and cleanups (Doug Smythies, Prarit
Bhargava, Joe Konno).
- cpufreq driver (arm_big_little, cpufreq-dt, qoriq) updates (Sudeep
Holla, Felipe Balbi, Tang Yuantian).
- assorted cleanups in cpufreq drivers and core (Shailendra Verma,
Fabian Frederick, Wang Long).
- new Device Tree bindings for representing Operating Performance
Points (Viresh Kumar).
- updates for the common clock operations support code in the PM core
(Rajendra Nayak, Geert Uytterhoeven).
- PM domains core code update (Geert Uytterhoeven).
- Intel Knights Landing support for the RAPL (Running Average Power
Limit) power capping driver (Dasaratharaman Chandramouli).
- fixes related to the floor frequency setting on Atom SoCs in the
RAPL power capping driver (Ajay Thomas).
- runtime PM framework documentation update (Ben Dooks).
- cpupower tool fix (Herton R Krzesinski)"
* tag 'pm+acpi-4.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (194 commits)
cpuidle: powernv/pseries: Auto-promotion of snooze to deeper idle state
x86: Load __USER_DS into DS/ES after resume
PM / OPP: Add binding for 'opp-suspend'
PM / OPP: Allow multiple OPP tables to be passed via DT
PM / OPP: Add new bindings to address shortcomings of existing bindings
ACPI: Constify ACPI device IDs in documentation
ACPI / enumeration: Document the rules regarding the PRP0001 device ID
ACPI / video: Make acpi_video_unregister_backlight() private
acpi-video-detect: Remove old API
toshiba-acpi: Port to new backlight interface selection API
thinkpad-acpi: Port to new backlight interface selection API
sony-laptop: Port to new backlight interface selection API
samsung-laptop: Port to new backlight interface selection API
msi-wmi: Port to new backlight interface selection API
msi-laptop: Port to new backlight interface selection API
intel-oaktrail: Port to new backlight interface selection API
ideapad-laptop: Port to new backlight interface selection API
fujitsu-laptop: Port to new backlight interface selection API
eeepc-laptop: Port to new backlight interface selection API
dell-wmi: Port to new backlight interface selection API
...
/*
 * linux/kernel/time/tick-common.c
 *
 * This file contains the base functions to manage periodic tick
 * related events.
 *
 * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de>
 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar
 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner
 *
 * This code is licenced under the GPL version 2. For details see
 * kernel-base/COPYING.
 */
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/percpu.h>
#include <linux/profile.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <trace/events/power.h>

#include <asm/irq_regs.h>

#include "tick-internal.h"

/*
 * Tick devices
 */
DEFINE_PER_CPU(struct tick_device, tick_cpu_device);
/*
 * Tick next event: keeps track of the tick time
 */
ktime_t tick_next_period;
ktime_t tick_period;

/*
 * tick_do_timer_cpu is a timer core internal variable which holds the CPU NR
 * which is responsible for calling do_timer(), i.e. the timekeeping stuff. This
 * variable has two functions:
 *
 * 1) Prevent a thundering herd issue of a gazillion of CPUs trying to grab the
 *    timekeeping lock all at once. Only the CPU which is assigned to do the
 *    update is handling it.
 *
 * 2) Hand off the duty in the NOHZ idle case by setting the value to
 *    TICK_DO_TIMER_NONE, i.e. a non existing CPU. So the next cpu which looks
 *    at it will take over and keep the time keeping alive. The handover
 *    procedure also covers cpu hotplug.
 */
int tick_do_timer_cpu __read_mostly = TICK_DO_TIMER_BOOT;

/*
 * Debugging: see timer_list.c
 */
struct tick_device *tick_get_device(int cpu)
{
        return &per_cpu(tick_cpu_device, cpu);
}

/**
 * tick_is_oneshot_available - check for a oneshot capable event device
 */
int tick_is_oneshot_available(void)
{
        struct clock_event_device *dev = __this_cpu_read(tick_cpu_device.evtdev);

        if (!dev || !(dev->features & CLOCK_EVT_FEAT_ONESHOT))
                return 0;
        if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
                return 1;
        return tick_broadcast_oneshot_available();
}

/*
 * Periodic tick
 */
static void tick_periodic(int cpu)
{
        if (tick_do_timer_cpu == cpu) {
                write_seqlock(&jiffies_lock);

                /* Keep track of the next tick event */
                tick_next_period = ktime_add(tick_next_period, tick_period);

                do_timer(1);
                write_sequnlock(&jiffies_lock);
                update_wall_time();
        }

        update_process_times(user_mode(get_irq_regs()));
        profile_tick(CPU_PROFILING);
}

/*
 * Event handler for periodic ticks
 */
void tick_handle_periodic(struct clock_event_device *dev)
{
        int cpu = smp_processor_id();
        ktime_t next = dev->next_event;

        tick_periodic(cpu);

#if defined(CONFIG_HIGH_RES_TIMERS) || defined(CONFIG_NO_HZ_COMMON)
        /*
         * The cpu might have transitioned to HIGHRES or NOHZ mode via
         * update_process_times() -> run_local_timers() ->
         * hrtimer_run_queues().
         */
        if (dev->event_handler != tick_handle_periodic)
                return;
#endif

        if (!clockevent_state_oneshot(dev))
                return;
        for (;;) {
                /*
                 * Setup the next period for devices, which do not have
                 * periodic mode:
                 */
                next = ktime_add(next, tick_period);

                if (!clockevents_program_event(dev, next, false))
                        return;
                /*
                 * Have to be careful here. If we're in oneshot mode,
                 * before we call tick_periodic() in a loop, we need
                 * to be sure we're using a real hardware clocksource.
                 * Otherwise we could get trapped in an infinite
                 * loop, as the tick_periodic() increments jiffies,
                 * which then will increment time, possibly causing
                 * the loop to trigger again and again.
                 */
                if (timekeeping_valid_for_hres())
                        tick_periodic(cpu);
        }
}

/*
 * Setup the device for a periodic tick
 */
void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
{
        tick_set_periodic_handler(dev, broadcast);

        /* Broadcast setup ? */
        if (!tick_device_is_functional(dev))
                return;

        if ((dev->features & CLOCK_EVT_FEAT_PERIODIC) &&
            !tick_broadcast_oneshot_active()) {
                clockevents_switch_state(dev, CLOCK_EVT_STATE_PERIODIC);
        } else {
                unsigned long seq;
                ktime_t next;

                do {
                        seq = read_seqbegin(&jiffies_lock);
                        next = tick_next_period;
                } while (read_seqretry(&jiffies_lock, seq));

                clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT);

                for (;;) {
                        if (!clockevents_program_event(dev, next, false))
                                return;
                        next = ktime_add(next, tick_period);
                }
        }
}

/*
 * Setup the tick device
 */
static void tick_setup_device(struct tick_device *td,
                              struct clock_event_device *newdev, int cpu,
                              const struct cpumask *cpumask)
{
        ktime_t next_event;
        void (*handler)(struct clock_event_device *) = NULL;

        /*
         * First device setup ?
         */
        if (!td->evtdev) {
                /*
                 * If no cpu took the do_timer update, assign it to
                 * this cpu:
                 */
                if (tick_do_timer_cpu == TICK_DO_TIMER_BOOT) {
                        if (!tick_nohz_full_cpu(cpu))
                                tick_do_timer_cpu = cpu;
                        else
                                tick_do_timer_cpu = TICK_DO_TIMER_NONE;
                        tick_next_period = ktime_get();
                        tick_period = ktime_set(0, NSEC_PER_SEC / HZ);
                }

                /*
                 * Startup in periodic mode first.
                 */
                td->mode = TICKDEV_MODE_PERIODIC;
        } else {
                handler = td->evtdev->event_handler;
                next_event = td->evtdev->next_event;
                td->evtdev->event_handler = clockevents_handle_noop;
        }

        td->evtdev = newdev;

        /*
         * When the device is not per cpu, pin the interrupt to the
         * current cpu:
         */
        if (!cpumask_equal(newdev->cpumask, cpumask))
                irq_set_affinity(newdev->irq, cpumask);

        /*
         * When global broadcasting is active, check if the current
         * device is registered as a placeholder for broadcast mode.
         * This allows us to handle this x86 misfeature in a generic
         * way. This function also returns !=0 when we keep the
         * current active broadcast state for this CPU.
         */
        if (tick_device_uses_broadcast(newdev, cpu))
                return;

        if (td->mode == TICKDEV_MODE_PERIODIC)
                tick_setup_periodic(newdev, 0);
        else
                tick_setup_oneshot(newdev, handler, next_event);
}

/* Exchange the current tick device on this CPU for a newly registered one. */
void tick_install_replacement(struct clock_event_device *newdev)
{
        struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
        int cpu = smp_processor_id();

        clockevents_exchange_device(td->evtdev, newdev);
        tick_setup_device(td, newdev, cpu, cpumask_of(cpu));
        if (newdev->features & CLOCK_EVT_FEAT_ONESHOT)
                tick_oneshot_notify();
}

/* Can the new device serve this cpu as a per-cpu tick device? */
static bool tick_check_percpu(struct clock_event_device *curdev,
                              struct clock_event_device *newdev, int cpu)
{
        if (!cpumask_test_cpu(cpu, newdev->cpumask))
                return false;
        if (cpumask_equal(newdev->cpumask, cpumask_of(cpu)))
                return true;
        /* Check if irq affinity can be set */
        if (newdev->irq >= 0 && !irq_can_set_affinity(newdev->irq))
                return false;
        /* Prefer an existing cpu local device */
        if (curdev && cpumask_equal(curdev->cpumask, cpumask_of(cpu)))
                return false;
        return true;
}

/* Is the new device preferable to the current one as tick device? */
static bool tick_check_preferred(struct clock_event_device *curdev,
                                 struct clock_event_device *newdev)
{
        /* Prefer oneshot capable device */
        if (!(newdev->features & CLOCK_EVT_FEAT_ONESHOT)) {
                if (curdev && (curdev->features & CLOCK_EVT_FEAT_ONESHOT))
                        return false;
                if (tick_oneshot_mode_active())
                        return false;
        }

        /*
         * Use the higher rated one, but prefer a CPU local device with a lower
         * rating than a non-CPU local device
         */
        return !curdev ||
                newdev->rating > curdev->rating ||
                !cpumask_equal(curdev->cpumask, newdev->cpumask);
}

/*
 * Check whether the new device is a better fit than curdev. curdev
 * can be NULL !
 */
bool tick_check_replacement(struct clock_event_device *curdev,
                            struct clock_event_device *newdev)
{
        if (!tick_check_percpu(curdev, newdev, smp_processor_id()))
                return false;

        return tick_check_preferred(curdev, newdev);
}

/*
 * Check, if the new registered device should be used. Called with
 * clockevents_lock held and interrupts disabled.
 */
void tick_check_new_device(struct clock_event_device *newdev)
{
        struct clock_event_device *curdev;
        struct tick_device *td;
        int cpu;

        cpu = smp_processor_id();
        if (!cpumask_test_cpu(cpu, newdev->cpumask))
                goto out_bc;

        td = &per_cpu(tick_cpu_device, cpu);
        curdev = td->evtdev;

        /* cpu local device ? */
        if (!tick_check_percpu(curdev, newdev, cpu))
                goto out_bc;

        /* Preference decision */
        if (!tick_check_preferred(curdev, newdev))
                goto out_bc;

        if (!try_module_get(newdev->owner))
                return;

        /*
         * Replace the eventually existing device by the new
         * device. If the current device is the broadcast device, do
         * not give it back to the clockevents layer !
         */
        if (tick_is_broadcast_device(curdev)) {
                clockevents_shutdown(curdev);
                curdev = NULL;
        }
        clockevents_exchange_device(curdev, newdev);
        tick_setup_device(td, newdev, cpu, cpumask_of(cpu));
        if (newdev->features & CLOCK_EVT_FEAT_ONESHOT)
                tick_oneshot_notify();
        return;

out_bc:
        /*
         * Can the new device be used as a broadcast device ?
         */
        tick_install_broadcast_device(newdev);
}

#ifdef CONFIG_HOTPLUG_CPU
/*
 * Transfer the do_timer job away from a dying cpu.
 *
 * Called with interrupts disabled. Not locking required. If
 * tick_do_timer_cpu is owned by this cpu, nothing can change it.
 */
void tick_handover_do_timer(void)
{
        if (tick_do_timer_cpu == smp_processor_id()) {
                int cpu = cpumask_first(cpu_online_mask);

                tick_do_timer_cpu = (cpu < nr_cpu_ids) ? cpu :
                        TICK_DO_TIMER_NONE;
        }
}

/*
 * Shutdown an event device on a given cpu:
 *
 * This is called on a live CPU, when a CPU is dead. So we cannot
 * access the hardware device itself.
 * We just set the mode and remove it from the lists.
 */
void tick_shutdown(unsigned int cpu)
{
        struct tick_device *td = &per_cpu(tick_cpu_device, cpu);
        struct clock_event_device *dev = td->evtdev;

        td->mode = TICKDEV_MODE_PERIODIC;
        if (dev) {
                /*
                 * Prevent that the clock events layer tries to call
                 * the set mode function!
                 */
                clockevents_switch_state(dev, CLOCK_EVT_STATE_DETACHED);
                dev->mode = CLOCK_EVT_MODE_UNUSED;
                clockevents_exchange_device(dev, NULL);
                dev->event_handler = clockevents_handle_noop;
                td->evtdev = NULL;
        }
}
#endif

/**
 * tick_suspend_local - Suspend the local tick device
 *
 * Called from the local cpu for freeze with interrupts disabled.
 *
 * No locks required. Nothing can change the per cpu device.
 */
void tick_suspend_local(void)
{
        struct tick_device *td = this_cpu_ptr(&tick_cpu_device);

        clockevents_shutdown(td->evtdev);
}

/**
 * tick_resume_local - Resume the local tick device
 *
 * Called from the local CPU for unfreeze or XEN resume magic.
 *
 * No locks required. Nothing can change the per cpu device.
 */
void tick_resume_local(void)
{
        struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
        bool broadcast = tick_resume_check_broadcast();

        clockevents_tick_resume(td->evtdev);
        if (!broadcast) {
                if (td->mode == TICKDEV_MODE_PERIODIC)
                        tick_setup_periodic(td->evtdev, 0);
                else
                        tick_resume_oneshot();
        }
}

/**
 * tick_suspend - Suspend the tick and the broadcast device
 *
 * Called from syscore_suspend() via timekeeping_suspend with only one
 * CPU online and interrupts disabled or from tick_unfreeze() under
 * tick_freeze_lock.
 *
 * No locks required. Nothing can change the per cpu device.
 */
void tick_suspend(void)
{
        tick_suspend_local();
        tick_suspend_broadcast();
}

/**
 * tick_resume - Resume the tick and the broadcast device
 *
 * Called from syscore_resume() via timekeeping_resume with only one
 * CPU online and interrupts disabled.
 *
 * No locks required. Nothing can change the per cpu device.
 */
void tick_resume(void)
{
        tick_resume_broadcast();
        tick_resume_local();
}

#ifdef CONFIG_SUSPEND
static DEFINE_RAW_SPINLOCK(tick_freeze_lock);
static unsigned int tick_freeze_depth;

/**
 * tick_freeze - Suspend the local tick and (possibly) timekeeping.
 *
 * Check if this is the last online CPU executing the function and if so,
 * suspend timekeeping. Otherwise suspend the local tick.
 *
 * Call with interrupts disabled. Must be balanced with %tick_unfreeze().
 * Interrupts must not be enabled before the subsequent %tick_unfreeze().
 */
void tick_freeze(void)
{
        raw_spin_lock(&tick_freeze_lock);

        tick_freeze_depth++;
        if (tick_freeze_depth == num_online_cpus()) {
                trace_suspend_resume(TPS("timekeeping_freeze"),
                                     smp_processor_id(), true);
                timekeeping_suspend();
        } else {
                tick_suspend_local();
        }

        raw_spin_unlock(&tick_freeze_lock);
}

/**
 * tick_unfreeze - Resume the local tick and (possibly) timekeeping.
 *
 * Check if this is the first CPU executing the function and if so, resume
 * timekeeping. Otherwise resume the local tick.
 *
 * Call with interrupts disabled. Must be balanced with %tick_freeze().
 * Interrupts must not be enabled after the preceding %tick_freeze().
 */
void tick_unfreeze(void)
{
        raw_spin_lock(&tick_freeze_lock);

        if (tick_freeze_depth == num_online_cpus()) {
                timekeeping_resume();
                trace_suspend_resume(TPS("timekeeping_freeze"),
                                     smp_processor_id(), false);
        } else {
                tick_resume_local();
        }

        tick_freeze_depth--;

        raw_spin_unlock(&tick_freeze_lock);
}
#endif /* CONFIG_SUSPEND */

/**
 * tick_init - initialize the tick control
 */
void __init tick_init(void)
{
        tick_broadcast_init();
        tick_nohz_init();
}
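For context on how the code above gets invoked: tick_check_new_device()
runs whenever a timer driver registers a clock_event_device.  A
hypothetical per-CPU timer registration, with an invented name and
clock rate but real API calls, might look like this sketch:

#include <linux/clockchips.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/smp.h>

static int foo_timer_set_next_event(unsigned long delta,
                                    struct clock_event_device *evt)
{
        /* Program the hardware comparator 'delta' cycles ahead. */
        return 0;
}

static struct clock_event_device foo_clockevent = {
        .name           = "foo-timer",
        .features       = CLOCK_EVT_FEAT_ONESHOT,
        .rating         = 300,
        .set_next_event = foo_timer_set_next_event,
};

static void __init foo_timer_init(void)
{
        foo_clockevent.cpumask = cpumask_of(smp_processor_id());

        /*
         * Derives mult/shift from the 24 MHz clock rate and registers
         * the device; registration ends up in tick_check_new_device()
         * with clockevents_lock held and interrupts disabled.
         */
        clockevents_config_and_register(&foo_clockevent, 24000000,
                                        0xf, 0x7fffffff);
}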