Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler changes from Ingo Molnar:
 "Major changes:

   - Reworked CPU capacity code, for better SMP load balancing on
     systems with asymmetric CPUs. (Vincent Guittot, Morten Rasmussen)

   - Reworked RT task SMP balancing to be push based instead of pull
     based, to reduce latencies on large CPU count systems. (Steven Rostedt)

   - SCHED_DEADLINE support updates and fixes. (Juri Lelli)

   - SCHED_DEADLINE task migration support during CPU hotplug. (Wanpeng Li)

   - x86 mwait-idle optimizations and fixes. (Mike Galbraith, Len Brown)

   - sched/numa improvements. (Rik van Riel)

   - various cleanups"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (28 commits)
  sched/core: Drop debugging leftover trace_printk call
  sched/deadline: Support DL task migration during CPU hotplug
  sched/core: Check for available DL bandwidth in cpuset_cpu_inactive()
  sched/deadline: Always enqueue on previous rq when dl_task_timer() fires
  sched/core: Remove unused argument from init_[rt|dl]_rq()
  sched/deadline: Fix rt runtime corruption when dl fails its global constraints
  sched/deadline: Avoid a superfluous check
  sched: Improve load balancing in the presence of idle CPUs
  sched: Optimize freq invariant accounting
  sched: Move CFS tasks to CPUs with higher capacity
  sched: Add SD_PREFER_SIBLING for SMT level
  sched: Remove unused struct sched_group_capacity::capacity_orig
  sched: Replace capacity_factor by usage
  sched: Calculate CPU's usage statistic and put it into struct sg_lb_stats::group_usage
  sched: Add struct rq::cpu_capacity_orig
  sched: Make scale_rt invariant with frequency
  sched: Make sched entity usage tracking scale-invariant
  sched: Remove frequency scaling from cpu_capacity
  sched: Track group sched_entity usage contributions
  sched: Add sched_avg::utilization_avg_contrib
  ...
commit 49d2953c72
11 changed files with 692 additions and 237 deletions
include/linux/sched.h:

@@ -1123,15 +1123,28 @@ struct load_weight {
 };
 
 struct sched_avg {
+	u64 last_runnable_update;
+	s64 decay_count;
+	/*
+	 * utilization_avg_contrib describes the amount of time that a
+	 * sched_entity is running on a CPU. It is based on running_avg_sum
+	 * and is scaled in the range [0..SCHED_LOAD_SCALE].
+	 * load_avg_contrib described the amount of time that a sched_entity
+	 * is runnable on a rq. It is based on both runnable_avg_sum and the
+	 * weight of the task.
+	 */
+	unsigned long load_avg_contrib, utilization_avg_contrib;
 	/*
 	 * These sums represent an infinite geometric series and so are bound
 	 * above by 1024/(1-y). Thus we only need a u32 to store them for all
 	 * choices of y < 1-2^(-32)*1024.
+	 * running_avg_sum reflects the time that the sched_entity is
+	 * effectively running on the CPU.
+	 * runnable_avg_sum represents the amount of time a sched_entity is on
+	 * a runqueue which includes the running time that is monitored by
+	 * running_avg_sum.
 	 */
-	u32 runnable_avg_sum, runnable_avg_period;
-	u64 last_runnable_update;
-	s64 decay_count;
-	unsigned long load_avg_contrib;
+	u32 runnable_avg_sum, avg_period, running_avg_sum;
 };
 
 #ifdef CONFIG_SCHEDSTATS