For !prefer_idle tasks we want to minimize capacity_orig to bias their
scheduling towards more energy-efficient CPUs. This does not happen in
the current code for boosted tasks, due to the order in which CPUs are
considered (from big CPUs to LITTLE CPUs) and to the shallow-idle-state
and spare-capacity-maximization filters, which are used to select the
best idle backup CPU and the best active CPU candidates.
Let's fix this by enabling the above filters only when comparing CPUs
of the same capacity.
Taking each of the two cases in turn:
1. Selection of a backup idle CPU: non-prefer_idle tasks should prefer
   more energy-efficient CPUs when there are idle CPUs in the system,
   independent of the order imposed by the presence of a boosted margin.
   This is already the behavior for !sysctl_sched_cstate_aware, and it
   should be the behavior when sysctl_sched_cstate_aware is set as well,
   given that we should prefer a more efficient CPU even if it is in a
   deeper idle state.
2. Selection of an active target CPU: there is no reason for boosted
   tasks to benefit from the higher chance of being placed on a big CPU
   that ordering CPUs from bigs to LITTLEs provides.
   The other mechanism in place for boosted tasks (making sure we
   select a CPU that fits the task) is enough for the non-latency-
   sensitive case. Also, by choosing the CPU with maximum spare
   capacity we cover the preference for spreading tasks rather than
   packing them, which improves the chances for tasks to get better
   performance due to reduced preemption. Therefore, prefer more
   energy-efficient CPUs and only consider spare capacity among CPUs
   with equal capacity_orig.
Change-Id: I3b97010e682674420015e771f0717192444a63a2
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Reported-by: Leo Yan <leo.yan@linaro.org>
* IOW, prefer a deep IDLE LITTLE CPU vs a
* shallow idle big CPU.
*/
- if (sysctl_sched_cstate_aware &&
+ if (capacity_orig == target_capacity &&
+ sysctl_sched_cstate_aware &&
best_idle_cstate <= idle_idx)
continue;
*/
/* Favor CPUs with maximum spare capacity */
- if ((capacity_orig - new_util) < target_max_spare_cap)
+ if (capacity_orig == target_capacity &&
+ (capacity_orig - new_util) < target_max_spare_cap)
continue;
target_max_spare_cap = capacity_orig - new_util;