GitHub/LineageOS/android_kernel_motorola_exynos9610.git
Daeyeong Lee [Thu, 14 Jun 2018 06:56:50 +0000 (15:56 +0900)]
sched: ems: Don't check lbt_bring_overutilize when wake balance

Change-Id: I2b3cd086d0a4329270c7b877967897ce4735e5a0
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Youngtae Lee [Thu, 14 Jun 2018 05:47:16 +0000 (14:47 +0900)]
samsung: emc: Fix max_constraints violation bug.

This fixes a bug where the current and real frequency could be higher
than the max frequency.

Change-Id: I3d488b642cea350e6dcc7d84eab9389d34639555
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 14 Jun 2018 05:44:27 +0000 (14:44 +0900)]
cpufreq: acme: Add exynos_cpufreq_get_locked.

It is accompanied by locking to prevent the frequency
from being read during a frequency change.

Change-Id: Iaf163321dff7437ad215b200f10589225a73c4f7
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Park Bumgyu [Thu, 14 Jun 2018 05:42:44 +0000 (14:42 +0900)]
sched: fix wrong declaration of inline extern function.

To fix a build error, remove the inline keyword from the extern-declared function.

Change-Id: Id30ffd2f600b514b98cfe9ebd60d80a5fdc463c3
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
Park Bumgyu [Thu, 14 Jun 2018 04:20:46 +0000 (13:20 +0900)]
sched: ems: support schedtune.boost in wakeup balance.

Change-Id: I18938f89a6cf1372c6be96e0d6c769960cd2918c
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
Park Bumgyu [Thu, 14 Jun 2018 00:43:16 +0000 (09:43 +0900)]
sched: ems: fix return type of task_util.

The type of util_avg is unsigned long. Fix the return type
of task_util to avoid data loss.
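
A minimal stand-alone C sketch of the issue (simplified types, value exaggerated
for illustration, not the kernel code): returning the unsigned long util_avg
through a narrower int return type silently truncates the value.

    #include <stdio.h>

    /* Simplified stand-in for the per-task load-tracking state. */
    struct sched_avg { unsigned long util_avg; };
    struct task { struct sched_avg avg; };

    /* Old shape: narrower return type loses the upper bits on LP64. */
    static int task_util_broken(struct task *p) { return p->avg.util_avg; }

    /* Fixed shape: return type matches the unsigned long source. */
    static unsigned long task_util(struct task *p) { return p->avg.util_avg; }

    int main(void)
    {
        /* Exaggerated value just to make the truncation visible. */
        struct task p = { .avg = { .util_avg = 0x100000400UL } };

        printf("int return    : %d\n", task_util_broken(&p));
        printf("unsigned long : %lu\n", task_util(&p));
        return 0;
    }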

Change-Id: I463b9fa65f018f4d98804df6f3c62fbbb6ff0951
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
Daeyeong Lee [Thu, 14 Jun 2018 01:17:52 +0000 (10:17 +0900)]
sched: ems: ontime: Modify to check whether fit_cpus is empty.

- There is a possibility of trouble when fit_cpus is returned empty.
  To prevent this situation, the ontime_select_fit_cpus function returns
  whether fit_cpus is empty or not.
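
A rough stand-alone sketch of the intent, with a toy mask type and made-up
selection rules (the real ontime_select_fit_cpus signature and policy are not
shown here): the function reports whether anything was selected, so the caller
never consumes an empty fit_cpus.

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPUS 8

    /* Toy CPU mask; the kernel uses struct cpumask and cpumask_empty(). */
    typedef unsigned int cpumask_t;

    /* Illustrative rule: only the four "big" cpus fit a heavy task. */
    static bool select_fit_cpus(unsigned long task_util, cpumask_t *fit_cpus)
    {
        *fit_cpus = 0;
        if (task_util >= 500)                   /* made-up heaviness boundary */
            for (int cpu = 4; cpu < NR_CPUS; cpu++)
                *fit_cpus |= 1u << cpu;

        /* Report emptiness so the caller can fall back instead of
         * blindly using an empty mask. */
        return *fit_cpus != 0;
    }

    int main(void)
    {
        cpumask_t fit;

        printf("heavy: found=%d mask=0x%02x\n", select_fit_cpus(700, &fit), fit);
        printf("light: found=%d mask=0x%02x\n", select_fit_cpus(100, &fit), fit);
        return 0;
    }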

Change-Id: Ibcadee7f1c7dd54e074509712ddb3ea05bfc82ef
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Choonghoon Park [Tue, 12 Jun 2018 10:10:47 +0000 (19:10 +0900)]
cpufreq: eff: add HAFM-TB featuring.

Change-Id: I4844df6e79494f6234b5e92bfd5aaf9be1caa04d

Choonghoon Park [Tue, 12 Jun 2018 09:19:27 +0000 (18:19 +0900)]
hafm/hafm-tb: modify featuring for managing divided files.

Change-Id: I01481cbaf0c3b0b38327dcaec3438a7218d7a2ac

Choonghoon Park [Tue, 12 Jun 2018 09:17:40 +0000 (18:17 +0900)]
hafm/hafm-tb: make interface to choose one among P-state boost solutions.

Change-Id: I3ac5dacf298892e3e75f60263124ff403c13706b

Choonghoon Park [Tue, 12 Jun 2018 06:07:42 +0000 (15:07 +0900)]
hafm: add file exynos-hafm.c.

This file is for featuring hafm.

HIU can trigger HWI requests with a power budget rather than a frequency level;
HIU just delivers the power budget and doesn't change the frequency itself through the CPUFreq driver.
This feature can be enabled by setting CONFIG_EXYNOS_HAFM.

Change-Id: Ieac5f123d41974dd3f1869e0b36d24ac2fd0994a

Choonghoon Park [Tue, 12 Jun 2018 06:00:15 +0000 (15:00 +0900)]
hafm-tb: rename exynos-hiu.c to exynos-hafm-tb.c.

This file is for featuring hafm-tb.

HIU can request HWI DVFS with a power budget.
This feature can be enabled by setting CONFIG_EXYNOS_HAFM_TB.

Change-Id: I88c5466e50756c44a25410d8955cf10a0c220d1c

Choonghoon Park [Mon, 29 Jan 2018 10:08:13 +0000 (19:08 +0900)]
cpufreq: eff: Introduce Exynos FF.

Change-Id: Ic26f61d8776f2d2420ed279449f017c2074145ef

[9820] cpufreq: eff: get target function using cpufreq ready callback

Change-Id: I54615aea3d248d584490271a0c30a66f42a2ba00

[9820] cpufreq: eff: make filtering condition more precise

Filtering conditions
  1) SW request (normal request)
    turbo boost is already activated (cur_freq >= boost_threshold)
    and
    this request could activate turbo boost (req_freq >= boost_threshold)

  2) HWI request
    turbo boost is released (cur_freq < boost_threshold)
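
A small stand-alone sketch of these two filtering rules; the function name and
threshold value are illustrative, not the actual Exynos FF driver code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true when the request matches the filtering conditions above. */
    static bool eff_should_filter(bool hwi_request, unsigned int cur_freq,
                                  unsigned int req_freq, unsigned int boost_threshold)
    {
        if (!hwi_request)
            /* 1) SW request: boost already active and the request keeps it active. */
            return cur_freq >= boost_threshold && req_freq >= boost_threshold;

        /* 2) HWI request: turbo boost is released. */
        return cur_freq < boost_threshold;
    }

    int main(void)
    {
        unsigned int thr = 2600000;     /* kHz, illustrative */

        printf("%d\n", eff_should_filter(false, 2700000, 2800000, thr)); /* filtered */
        printf("%d\n", eff_should_filter(true,  2500000, 2800000, thr)); /* filtered */
        printf("%d\n", eff_should_filter(false, 2000000, 2200000, thr)); /* passes   */
        return 0;
    }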

Change-Id: I5fc21741706de0c0f26d9b4a15c1e8bcad0d1bd6

[9820] cpufreq: eff: clamp frequency SW requests above boost threshold

In case of normal DVFS request (not HWI request),
clamp target value to boost threshold,
if target value > boost threshold.

SW must not request DVFS with frequency above boost threshold.
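
A minimal sketch of that clamp, with illustrative names:

    #include <stdio.h>

    /* SW (non-HWI) requests are clamped to the boost threshold. */
    static unsigned int eff_clamp_sw_request(unsigned int target_freq,
                                             unsigned int boost_threshold)
    {
        return target_freq > boost_threshold ? boost_threshold : target_freq;
    }

    int main(void)
    {
        printf("%u\n", eff_clamp_sw_request(2900000, 2600000)); /* clamped   */
        printf("%u\n", eff_clamp_sw_request(2000000, 2600000)); /* unchanged */
        return 0;
    }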

Change-Id: Ie2cb26e75d2d172f3cfbe02c0a95ca5eb7700c83

Choonghoon Park [Mon, 15 Jan 2018 06:11:54 +0000 (15:11 +0900)]
hiu: Introduce HIU driver.

Change-Id: Ib8c490128ab6dcc36fc7f502ded8c2d5b9eddc41

[9820] hiu: sync up H/W and S/W frequency with EFF

The Exynos FF and HIU drivers work together to sync up the H/W and S/W frequency.

Change-Id: I71689390ca7cf8bea6cfe222b419788c47708ed7

[9820] hiu: update hiu data using cpufreq ready callback

Change-Id: I398899d97541410776b7b780ac72fb87de8ff796

[9820] hiu: modify logging type

Change-Id: I1cc7927255d574462639e6eda4fbd5533eaed890

[9820] hiu: add offset to level for communication with ACPM

Change-Id: Ic37b5c812004f2168c625e6c224e07090a526f6a

[9820] hiu: add field in hiu data for sw power budget limit

When the current frequency is lower than the boost threshold,
HIU doesn't need to request dynamic power budgeting.

SW requests DVFS with a fixed power budget limit.

Change-Id: I2b20d14d39c729552854bd4226e3d862f65c44a4

[9820] hiu: move some functions under helper function category

Change-Id: I5511e5f9a6778cd92dc7fcdfe5ed905e33a45e5c

[9820] hiu: wait for SR1 response when normal DVFS is requested

Change-Id: Ib6676d2b586ec11c208f02cb9bbac77dc1acb281

[9820] hiu: make sr1_check_loop more stable

Change-Id: I9fefe89eb4a631238789e34629882447d6bc4adc

[9820] hiu: give normal DVFS request higher prio than turbo boost

If
1) a normal DVFS request and a turbo boost request both request the tb_threshold DVFS value, and
2) the turbo boost request sees the SR1 write and clears the SR1 write bit,
then the normal DVFS request will be stuck in a loop until another SR1 write comes.

This means that other normal DVFS requests will also be stuck on the mutex
held by the stuck normal DVFS request.

This patch is for solving the problem.

Change-Id: I6671fa1149cdc657560df1bdf3d8ac6929d95ac9

[9820] hiu: enable/disable power constraint and turbo boost using dt

Change-Id: I89724d836be9b9a5475d85981a047a423372c9b6

[9820] hiu: modify API & polling thread for processing DVFS triggered by HIU HW

Change-Id: I11afdf7a32a492e19faf347a43eaa81f9b22949d

[9820] hiu: synchronize SW request and HW request using mutex

Change-Id: I7407a29da0a23422a54a70fa33f7619198faf996

[9820] hiu: do not adjust max frequency

If cpufreq_update_policy is not called,
the policy's max frequency could be mis-set.

Therefore, remove the max frequency update code
in the cpufreq policy callback of HIU.

Change-Id: I5cf7073532bc885b360cb0b47764e824fa958c52

[9820] hiu: do not write SR0 with boost threshold when turbo boosting

Change-Id: I575f61f6994f56d2c2498c0371a27f9e32dd5ea7

[9820] hiu: wait for updating sw structure by hwidvfs

Change-Id: Ie8e1387aeb2ca355679f340b7480eed0f7b1361a

[9820] hiu: do not update DVFSLIMIT unnecessarily

Change-Id: Ic3dabb9ee8ada067827dc597bbd3efc53a6cbac3

[9820] hiu: stop polling when turbo boost is released

Change-Id: I58528ee2a93ad7ec8de091daa9e0554cc8bca05f

[9820] hiu: increase polling term

Change-Id: Icd086d3231f4b1b3f2ae399511d7eb5659b6460c

[9820] hiu: check normal dvfs done in API

Change-Id: Ifec361cd4a146cb633f8b82e469d24b6cda895e6

[9820] hiu: locking when updating limit dvfs

Change-Id: I56aa95b5d14c4d89daff1708c8b79131ac0b403e

[9820] hiu: set boost level increments using dt

Change-Id: I0247aa315a20f634f5479d4ead34645321e96ea4

[9820] hiu: deal with hwi dvfs using hwi_dvfs_flag

Change-Id: Ib50268be24994751cab07c9d460d349509426fc5

[9820] hiu: update boost_max using cpufreq policy

Change-Id: I0b34be80c4a5d652e241f29d6318c23bf9ee032f

[9820] hiu: modify condition to set limit dvfs

Condition:
Only when clipped freq is higher than or equal to boost threshold

Change-Id: I588e4b0d5828f3f59ccf182674fc3aa65c920ef4

[9820] hiu: define polling period using macro

Change-Id: Idd74c36d9875f415e2bcc3f08f25a752f78d483c

[9820] hiu: bind polling thread to little

Change-Id: I54862683a1a9d765fb80d2008e0da7529c9fb23b

[9820] hiu: restore hiu data after CHT cluster exits CPD

Change-Id: I342b6e3243755f6da49374a4da76f02affe46bcc

[9820] hiu: use usleep_range instead of udelay

Change-Id: Id19a14242b2d1b25d13b540104b2da0a5543bc16

[9820] hiu: set limit dvfs with hiu data clipped_freq

Change-Id: I6e289246800d28f0f990b4a95ce1c49d084a0395

[9820] hiu: determine SR0 write in set_freq API

Change-Id: Ic5c46476b9ea6ef0b0a418a42fb2085bc7b796c1

[9820] hiu: refactoring functionality to write on SR0

Change-Id: I42ebceba7946f69fdf4242df8dbbecf5be6732bf

[9820] hiu: find policy using online cpu

Change-Id: I4f3ad4d13f348765399290b61fefa2e5a1ca150d

[9820] hiu: create polling thread when probing

Change-Id: I7a4f22b6f573e50e2eedb52aa9e54490109ecf70

[9820] hiu: lock just before cur_freq could be updated

Change-Id: I8e0d7eb2e67af8b6633c429041c6ae08b1b69e5f

[9820] hiu: force work to run only on big

Change-Id: I9a5230f407722a2293eb78fa8a1362797c0407c2

Choonghoon Park [Mon, 19 Mar 2018 00:42:04 +0000 (09:42 +0900)]
cpufreq: acme: add API for ready callback.

Change-Id: I9b6e9041354be94b2be2a59905af7d4769aaf646

Daeyeong Lee [Thu, 4 May 2017 02:20:02 +0000 (11:20 +0900)]
ocp: Initial patch for OCP handler.

Change-Id: I17290b920385baa232feb2c6501e4ea568e80551
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Move OCP register initialization to EL3

Change-Id: I76c0414fe88b4527737995ce46a6c9ec46b01537
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Change the policy for enabling/disabling the interrupt and OCP controller

- Enable the BPC interrupt only in an OCP situation.
  When there is no OCP situation, the max frequency limit is cleared.
  The BPC interrupt is needed to release the max frequency limit,
  so it is not necessary when the max frequency limit is off.

- Do not hold the OCP controller in the standby state during the OCP interrupt handling interval.
  Even during OCP interrupt handling,
  the OCP controller should prevent the system from going down through the uArch throttle.

Change-Id: I541204681f9d6dedfa5e8a5bf33e43b5de432695
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: MK errata 57068: Modify so that only Meerkat cores access the OCP controller

- According to MK errata 57068, there may be a problem
  when accessing the GCU inside the MK Cluster NONCPU block on the APB bus.
  Therefore, the access to the GCU has been modified
  so that it can be performed only in the MR core.

Change-Id: I2476e0be8151a2f61baf325498e83fb90776dcd7
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: MK errata 57068: Use mrs/msr to access OCP controller instead of ioremap

Change-Id: I7595aced5b53a06cf598cc785d0e24ec906b20cb
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: MK errata 58402: Should not use DPM based function

 - To avoid bug 58402, the BPC interrupt should not be used.
   Therefore, it was decided to use a timer to release the max frequency
   limit imposed by the OCP interrupt.

Change-Id: I86e4328594c898e01ed251e3afce80dbd6377c37
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Combine exynos-ocp.h with exynos-ocp.c

Change-Id: Ifbb40192ac6ffe090d27d6a814e08cbed0275d96
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Add code to change ocp flag to false when ocp is released

Change-Id: Ia66312da1a3cc3897eb9f32def362dd14e02c187
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: s2mps18: Use currentmeter info for determining BPC condition

Change-Id: Ia349ee985338d2d4bed70c4de2fa10b2da871d9a
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Add sysfs attributes to show ocp stats

total_trans : shows how many times the ocp max limit has changed (wrap-around value).
time_in_state : shows how long the ocp max limit has been held at each frequency.
clipped_freq : shows the current ocp max limit level.

Change-Id: I60597ed6b1d080ead0dded0cca62103d7368bc02
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Modify OCP handler to set ocp max limit according to current limit

Previously, the OCP handler set the ocp max limit to down_step below the current frequency.
However, this became entangled with thermal and caused unintended behavior.
So modify the OCP handler to set the ocp max limit according to the current limit,
instead of the current frequency.

Change-Id: I773be8d0186e4549a1165644507a3b6799c82eeb
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: trace: power: Add trace_ocp_max_limit

Change-Id: I2448c962eaeafc0d4e16f2314c9f6c109377fa47
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Change pr_info to pr_debug for debug information

Change-Id: Ia79682f547ce4ac184bd1acb6c5a94ed428375f9
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Align code for readability

Change-Id: I5b65a82f418f6f74a8cab465e19cee9366f0f198
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Add exynos-snapshot log

Change-Id: I21d2017880e48ccdfd6d17e77a2ca1133acc5e09
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Add sysfs node to enable/disable OCP interrupt handling

User can control the ocp handling operation as below:
- echo 1 > /sys/devices/platform/exynos-ocp/ocp/enabled => enable
- echo 0 > /sys/devices/platform/exynos-ocp/ocp/enabled => disable

Change-Id: Ia4128f4b72c0cf68199cb563406fc5f806a6b6b5
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Remove setting current dvfs level to OCP controller

Change-Id: Ie862c51256d6e1f866a26de70f67176a7099d4bc
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: Modify the way to get initial max/min frequency

"policy->min/max" could be changed by thermal already,
when the OCP probe function attempts to read the initial value.
Therefore, modify the OCP probe function to read the values of policy->user_policy.min/max.

Change-Id: I605bfa01271bcfd6df28a1092e930587293e6877
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: s2mps19: change PMIC driver to s2mps19

Change-Id: I9869be76a54ae66d3daeefc2b5c2ed383619818e

[9820] ocp: update IRP value in OCPTOPPWRTHRESH instead of OCPINTCTL

In 9820, the register map is changed; the IRP field in OCPINTCTL is RO.
Therefore, the register for the IRP value update is changed to OCPTOPPWRTHRESH.

Change-Id: If642179607edfc5fbff1d9973fadfef9d97d17a7

[9820] ocp: Use cpu load information for BPC condition

Change-Id: Ie65699ccb649c33f7ca7a704bc50764c002a5171
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
[9820] ocp: modify exynos-ss to debug-snapshot

Change-Id: I95c6fd983a23630d42b5045389fa39fc175dce23

Johnlay Park [Tue, 12 Jun 2018 11:03:25 +0000 (20:03 +0900)]
sched: frt: Add the rt_rq load update.

Change-Id: I676ecfa2aec75c46144f78fc90981ff43c8833c0
Signed-off-by: Johnlay Park <jonglae.park@samsung.com>
Hyeonseong Gil [Tue, 12 Jun 2018 01:51:13 +0000 (10:51 +0900)]
thermal: samsung: Support boost ctrl callback.

Change-Id: I2e50e85b9194456cf406e910ec75a9a5323bfa99
Signed-off-by: Hyeonseong Gil <hs.gil@samsung.com>
Daeyeong Lee [Tue, 12 Jun 2018 06:15:42 +0000 (15:15 +0900)]
sched: ems: ontime: Use get_cpu_mips instead of capacity_orig_of.

- The value of capacity_orig_of can change at runtime.
  The ontime feature needs to use a stable value indicating the performance of the cpu,
  so use get_cpu_mips instead of capacity_orig_of.

Change-Id: If249f6841cc26abce573459d8199004beccdeac8
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Park Bumgyu [Tue, 12 Jun 2018 06:12:22 +0000 (15:12 +0900)]
sched: ems: add function to get cpu mips.

Change-Id: I79918451a93bddf1effe7d8ed6a65a0176886012
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
Park Bumgyu [Tue, 12 Jun 2018 01:23:31 +0000 (10:23 +0900)]
sched: ems: prevent access to plugged out cpu.

Change-Id: Id9e0ac5cb1979cd8d3766f9fb1a7c0874a561e7b
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
Youngtae Lee [Mon, 11 Jun 2018 02:10:49 +0000 (11:10 +0900)]
samsung: emc: Change mode selection alg for 3-cluster.

Remove imbal_heavy_cpus because, if there are just 2 cores in a cluster,
imbal_heavy_cpus prevents the mode from being changed to dual.

Support the ldsum concept.
If the ldsum of the big/mid cluster is lower than a mode's ldsum_thr,
emc selects the mode regardless of the heavy_cpu count. This helps
the mode change to dual more easily.

Support disabling domain_busy.
If domain_busy_ratio is 0, emc doesn't check whether
the domain is busy or not.
For now, busy_ratio is disabled for the big/mid cluster.

Change-Id: Ia510d7da9609a0cce556f6bce30bb7f7218697bc
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 26 Apr 2018 12:27:22 +0000 (21:27 +0900)]
sched: schedutil: remove update_single function.

Change-Id: I870028b8b159501e79730f226e8c46c0c2bff50f
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 21 May 2018 05:03:20 +0000 (14:03 +0900)]
cpupm: change condition for cpuhp_last_cpu mask.

Change-Id: I2a276317d3857db99317f9314e5fe567b7bcc96d
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 21 May 2018 05:04:03 +0000 (14:04 +0900)]
Revert "cpu: Add function to confirm last cpu of cluster".

This reverts commit 0ef067b178779bee49c8cddf5ad9ff07eade0512.

Change-Id: I0defd8ea13a5435cedac4a4901033977e2e67975

Youngtae Lee [Wed, 16 May 2018 11:55:51 +0000 (20:55 +0900)]
samsung: emc: Change max frequency control method.

To fix a bug where the max frequency is not updated
when all cpus of a cluster power down, add pre_update_constraints.

Change-Id: Ie5c408dd5e4d1713868ef07c19c39989c45d1a21
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 16 May 2018 11:59:53 +0000 (20:59 +0900)]
cpu: Add CPUHP_EXYNOS_BOOST_CTRL_PRE event.

Change-Id: I062f4fe5abae7fed4466b70a3e6176a3bf405249
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 16 May 2018 04:42:25 +0000 (13:42 +0900)]
cpufreq: acme: remove exynos_cpufreq_allow_change_max.

Change-Id: Ibe558896a0fdd3c7daf8f844a8bff50bc54348f8
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Tue, 15 May 2018 01:18:40 +0000 (10:18 +0900)]
samsung: emc: change timer add condition.

If req_mode and cur_mode are the same, skip adding the timer.

Change-Id: I80ba564f705c131609e0b96ff99126a4495ae3dd
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 14 May 2018 02:08:23 +0000 (11:08 +0900)]
samsung: emc: change real frequency check function.

Use the acme driver's old frequency or the real cmu value instead of policy->cur.

Change-Id: Ide22cf698d0177bcebdba45d980886124ee3c74f
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 12:06:54 +0000 (21:06 +0900)]
DEBUG: Add trace for cpus_up/down.

Change-Id: Ic19ff47bd706813bafc99f2d3e421d846acb5c42
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 9 May 2018 11:41:46 +0000 (20:41 +0900)]
cpu: Add function to confirm last cpu of cluster.

It shows whether the cpu is the last cpu of fasthp_cpus or not.

Change-Id: Ibf7c6913d78ad32422572759b5576545c85302f5
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 9 May 2018 11:36:07 +0000 (20:36 +0900)]
samsung: cpupm: Change cluster disable condition

1. Reference cluster_mask instead of coregroup_mask because
   coregroup_mask can no longer show h/w cluster information.

2. To support fast_hotplug, add a function to confirm the last cpu in the cluster.
   If many cpus power off at the same time, cpu_online_mask can't guarantee
   the last cpu of the cluster.

Change-Id: Ic08fbdfa6ab981cf48d6c3f5e294f10d1f1a907a
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 4 May 2018 06:48:10 +0000 (15:48 +0900)]
arm64: psci: Add affinity_lv for hotplug

To indicate hotplug with cluster power down,
pass affinity_level to EL3 via PSCI.

Change-Id: I4687c828e26150485e6ff426a815562322587ee9
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 4 May 2018 06:26:16 +0000 (15:26 +0900)]
samsung: cpupm: Add cpuhp_last_cpu mask to indicate last_cpu

This mask shows the cpus that should perform
the cluster power down sequence.

Change-Id: If41ecb630191674dc47995808c383688aa0d8d55
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 4 May 2018 01:26:58 +0000 (10:26 +0900)]
samsung: emc: fix bug that returns a wrong pointer to the list head

Change-Id: I6ebfa125f5c60255dfcfc6cfe10510e273246559
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 2 May 2018 08:48:17 +0000 (17:48 +0900)]
samsung: emc: fix wrong message printed in pwr_check func

Change-Id: If017f31c8ea587bd4aeeaedc40f4244e3779da7c
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 2 May 2018 05:15:02 +0000 (14:15 +0900)]
samsung: cpuhp: Add panic condition when requesting hotplug cpu0 out

Change-Id: I24152b1f32b82c6d413f8a0158f508f08b696679
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 2 May 2018 02:27:23 +0000 (11:27 +0900)]
samsung: emc: set base mode when user_mode is disabled

Change-Id: I12176542ddcba548913d5c0c0d14e81c4b9c9b6e
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Wed, 2 May 2018 01:54:04 +0000 (10:54 +0900)]
cpufreq: acme: get_freq returns cached freq when accessing offline cluster

Change-Id: Id3a3fb04256664644a4a3daf61f2764e0c10462e
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 27 Apr 2018 06:29:48 +0000 (15:29 +0900)]
cpufreq: acme: Add function to check max_constraints of boost domain.

Change-Id: I956942e6ed1ea6e1da8ccd1f747d9cf503b95b2b
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 27 Apr 2018 06:28:58 +0000 (15:28 +0900)]
samsung: emc: Add function to check target_freq is available

Change-Id: Ie957e3d0c8064f5d8b2fa324fb1205d373554cf6
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Fri, 27 Apr 2018 04:49:39 +0000 (13:49 +0900)]
cpu: support cpus_up for fast hotplug in

Change-Id: I5dc7103355eed846fdb839fbf721547055f5297f
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 26 Apr 2018 12:31:48 +0000 (21:31 +0900)]
samsung: emc: change msleep to udelay and expired time for power checking

Change-Id: I274968b7291ba0f23fcd1943e4dd2116ce62b680
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 26 Apr 2018 12:29:58 +0000 (21:29 +0900)]
arm64: smp: change print level of "shutdown"

Change-Id: Ieb68cd68af9c7b98fb2fc0c6399be28766212719
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 26 Apr 2018 05:48:29 +0000 (14:48 +0900)]
samsung: emc: Change priority of change thread

Change-Id: I480a38e996aa94781811c18988d3c1232b5d1327
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Thu, 26 Apr 2018 04:56:00 +0000 (13:56 +0900)]
base: topology: Don't add/remove sysfs for hotplug

To reduce hotplug time, don't add or remove sysfs group

Change-Id: I1b7cab11e9157f5291492d8412e4e815a6cec96c
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Tue, 24 Apr 2018 10:03:37 +0000 (19:03 +0900)]
cpu: Support parallel takedown_cpus

Change-Id: Ia40b4c61adf95de510832bfd0d5fa4db0adda06d
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Tue, 24 Apr 2018 08:47:28 +0000 (17:47 +0900)]
Revert "smp/hotplug: Differentiate the AP-work lockdep class between up and down"

This reverts commit 5f4b55e10645b7371322c800a5ec745cab487a6c.

Change-Id: I841ec3c7ecd6197ae30c52987c8d6e86655d453e

Youngtae Lee [Tue, 24 Apr 2018 08:44:52 +0000 (17:44 +0900)]
Revert "cpu/hotplug: Convert hotplug locking to percpu rwsem"

This reverts commit fc8dffd379ca5620664336eb895a426b42847558.

Change-Id: I842b5ee4c622f1a892a1a292f5a4ad886943320c

Youngtae Lee [Mon, 23 Apr 2018 11:16:21 +0000 (20:16 +0900)]
cpu: change order and method for sched_active/deactive

Change-Id: Ia456ad0febd5c63be8390f74ec6a80c6a60499f4
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 07:29:22 +0000 (16:29 +0900)]
samsung: cpupm: Disable idle during hotplug in-out

Change-Id: I4eaff38545bf584b3867da9078b16bebaf42499d
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 07:23:25 +0000 (16:23 +0900)]
kernel: cpu: support processing parallel cpu hotplug

Change-Id: I0dc6c787a773b7784d7fc0ab566e47a49835b00b
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 04:06:33 +0000 (13:06 +0900)]
sched: schedutil: remove synchronize_rcu for fast hp

Change-Id: I3eebe5d8e12fc8fe5bc0810949c76883cb0564be
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 04:00:19 +0000 (13:00 +0900)]
cpufreq: Add fast_on/offline for fast hotplug

This function performs fast hotplug processing
on fast hp cpus at once. Even if the cluster is off,
it executes only governor stop & start for fast hp.

Change-Id: I288ebd19ee0fbb91f596234eac0fb11fd71573c5
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 03:59:32 +0000 (12:59 +0900)]
samsung: cpuhp: Support cpus_up/down for fast hotplug

Change-Id: I02aa69a55c80e3e1ff43a2eb790b01adee6a78c7
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Youngtae Lee [Mon, 23 Apr 2018 03:57:31 +0000 (12:57 +0900)]
cpu: Support cpus_up/down for fast hotplug

Change-Id: Ib0b3da952426338a4afc7e388c9fd5d6874cdce7
Signed-off-by: Youngtae Lee <yt0729.lee@samsung.com>
Park Bumgyu [Fri, 25 May 2018 05:01:52 +0000 (14:01 +0900)]
sched: ems: introduce task band

Change-Id: Ic3fbe3e80c8033f5c1c77f02cb0eeb6ee04d9630
Signed-off-by: Park Bumgyu <bumgyu.park@samsung.com>
lakkyung.jung [Tue, 24 Apr 2018 14:02:12 +0000 (23:02 +0900)]
sched: fair: Add support to PELT ramp/decay timings

Change-Id: If12dd8b4df211c898667cdb7c8b42d0eba9ac200
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Fri, 4 May 2018 11:16:53 +0000 (20:16 +0900)]
sched: fair/ems: Add schedtune_util_est

Change-Id: I0a0f1723356683829ce709ec750f4f013aa1c75b
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Fri, 4 May 2018 01:20:03 +0000 (10:20 +0900)]
sched: tune: Add utilest interface to schedtune.

Change-Id: I4e5313f7128f5aa599b7214eaf13679d1f9484ef
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Mon, 16 Apr 2018 14:05:00 +0000 (23:05 +0900)]
sched: fair/ems: Add to apply util-est to wake up balance.

Change-Id: Ia3ff1303d3180612308399d0f311d6c278ddefa9
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Mon, 16 Apr 2018 13:22:43 +0000 (22:22 +0900)]
sched/events: Introduce util_est trace events

Change-Id: I22c98bbaa7dda598d31a20b310afbf16d5fb8208
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Mon, 16 Apr 2018 06:46:16 +0000 (15:46 +0900)]
sched/fair: update util_est only on util_avg updates

The estimated utilization of a task is currently updated every time the
task is dequeued. However, to keep overheads under control, PELT signals
are effectively updated at maximum once every 1ms.

Thus, for really short running tasks, it can happen that their util_avg
value has not been updated since their last enqueue.  If such tasks are
also frequently running tasks (e.g. the kind of workload generated by
hackbench) it can also happen that their util_avg is updated only every
few activations.

This means that updating util_est at every dequeue potentially introduces
unnecessary overheads and it's also conceptually wrong if the util_avg
signal has never been updated during a task activation.

Let's introduce a throttling mechanism on task's util_est updates
to sync them with util_avg updates. To make the solution memory
efficient, both in terms of space and load/store operations, we encode a
synchronization flag into the LSB of util_est.enqueued.
This makes util_est an even values only metric, which is still
considered good enough for its purpose.
The synchronization bit is (re)set by __update_load_avg_se() once the
PELT signal of a task has been updated during its last activation.

Such a throttling mechanism allows to keep under control util_est
overheads in the wakeup hot path, thus making it a suitable mechanism
which can be enabled also on high-intensity workload systems.
Thus, this now switches on by default the estimation utilization
scheduler feature.
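
A small stand-alone model of the LSB trick described above (not the kernel
implementation; in the kernel the estimate is written at dequeue and the
synchronization bit is handled around __update_load_avg_se()):

    #include <stdio.h>

    /* The enqueued estimate is kept even-valued and its LSB is used as a
     * "util_avg unchanged since the last estimate" flag. */
    #define UTIL_AVG_UNCHANGED 0x1UL

    struct util_est { unsigned long enqueued; };

    /* Store a new estimate: round it to an even value and set the flag. */
    static void util_est_store(struct util_est *ue, unsigned long est)
    {
        ue->enqueued = (est & ~UTIL_AVG_UNCHANGED) | UTIL_AVG_UNCHANGED;
    }

    /* Called when PELT actually updates util_avg: clear the flag. */
    static void util_avg_updated(struct util_est *ue)
    {
        ue->enqueued &= ~UTIL_AVG_UNCHANGED;
    }

    /* At dequeue: refresh the estimate only if util_avg changed meanwhile. */
    static int util_est_needs_update(const struct util_est *ue)
    {
        return !(ue->enqueued & UTIL_AVG_UNCHANGED);
    }

    int main(void)
    {
        struct util_est ue;

        util_est_store(&ue, 437);
        printf("needs update? %d\n", util_est_needs_update(&ue)); /* 0: skip    */
        util_avg_updated(&ue);
        printf("needs update? %d\n", util_est_needs_update(&ue)); /* 1: refresh */
        return 0;
    }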

Change-Id: Ia548c1fa33ab1e9d20faa0bf7503ebaba5946063
Suggested-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org
lakkyung.jung [Mon, 16 Apr 2018 02:23:13 +0000 (11:23 +0900)]
sched/cpufreq_schedutil: use util_est for OPP selection

- backport util-est from linux-power.git

When schedutil looks at the CPU utilization, the current PELT value for
that CPU is returned straight away. In certain scenarios this can have
undesired side effects and delays on frequency selection.

For example, since the task utilization is decayed at wakeup time, a
long sleeping big task newly enqueued does not add immediately a
significant contribution to the target CPU. This introduces some latency
before schedutil will be able to detect the best frequency required by
that task.

Moreover, the PELT signal build-up time is a function of the current
frequency, because of the scale invariant load tracking support. Thus,
starting from a lower frequency, the utilization build-up time will
increase even more and further delays the selection of the actual
frequency which better serves the task requirements.

In order to reduce this kind of latencies, we integrate the usage
of the CPU's estimated utilization in the sugov_get_util function.
This allows to properly consider the expected utilization of a CPU which,
for example, has just got a big task running after a long sleep period.
Ultimately this allows to select the best frequency to run a task
right after its wake-up.
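
A stand-alone model of the idea (not the actual sugov_get_util() code): the
utilization used for OPP selection is the maximum of the instantaneous PELT
value and the estimated utilization.

    #include <stdio.h>

    #define SCHED_CAPACITY_SCALE 1024UL

    static unsigned long max_ul(unsigned long a, unsigned long b)
    {
        return a > b ? a : b;
    }

    /* A freshly woken big task is not under-served while its PELT signal
     * rebuilds, because the estimate remembers its previous activations. */
    static unsigned long cpu_util_for_freq(unsigned long util_avg, unsigned long util_est)
    {
        unsigned long util = max_ul(util_avg, util_est);

        return util < SCHED_CAPACITY_SCALE ? util : SCHED_CAPACITY_SCALE;
    }

    int main(void)
    {
        /* Long-sleeping big task: PELT decayed to 90, estimate remembers ~700. */
        printf("util used for OPP selection: %lu\n", cpu_util_for_freq(90, 700));
        return 0;
    }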

Change-Id: Ibf98a4be222546733cbd88b9a8f2c8858319dd96
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Paul Turner <pjt@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
lakkyung.jung [Mon, 16 Apr 2018 02:07:57 +0000 (11:07 +0900)]
sched/fair: use util_est in LB and WU paths

When the scheduler looks at the CPU utilization, the current PELT value
for a CPU is returned straight away. In certain scenarios this can have
undesired side effects on task placement.

For example, since the task utilization is decayed at wakeup time, when
a long sleeping big task is enqueued it does not add immediately a
significant contribution to the target CPU.
As a result we generate a race condition where other tasks can be placed
on the same CPU while it is still considered relatively empty.

In order to reduce this kind of race conditions, this patch introduces the
required support to integrate the usage of the CPU's estimated utilization
in the wakeup path, via cpu_util_wake(), as well as in the load-balance
path, via cpu_util() which is used by update_sg_lb_stats().

The estimated utilization of a CPU is defined to be the maximum between
its PELT's utilization and the sum of the estimated utilization (at
previous dequeue time) of all the tasks currently RUNNABLE on that CPU.
This allows to properly represent the spare capacity of a CPU which, for
example, has just got a big task running since a long sleep period.

Change-Id: Iab14de2d509a15974c3176f091c0d5197cdbd081
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Paul Turner <pjt@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
lakkyung.jung [Mon, 16 Apr 2018 01:25:28 +0000 (10:25 +0900)]
sched/fair: add util_est on top of PELT

 - backport util-est from linux-power.git

The util_avg signal computed by PELT is too variable for some use-cases.
For example, a big task waking up after a long sleep period will have its
utilization almost completely decayed. This introduces some latency before
schedutil will be able to pick the best frequency to run a task.

The same issue can affect task placement. Indeed, since the task
utilization is already decayed at wakeup, when the task is enqueued in a
CPU, this can result in a CPU running a big task as being temporarily
represented as being almost empty. This leads to a race condition where
other tasks can be potentially allocated on a CPU which just started to run
a big task which slept for a relatively long period.

Moreover, the PELT utilization of a task can be updated every [ms], thus
making it a continuously changing value for certain longer running
tasks. This means that the instantaneous PELT utilization of a RUNNING
task is not really meaningful to properly support scheduler decisions.

For all these reasons, a more stable signal can do a better job of
representing the expected/estimated utilization of a task/cfs_rq.
Such a signal can be easily created on top of PELT by still using it as
an estimator which produces values to be aggregated on meaningful
events.

This patch adds a simple implementation of util_est, a new signal built on
top of PELT's util_avg where:

    util_est(task) = max(task::util_avg, f(task::util_avg@dequeue))

This allows to remember how big a task has been reported by PELT in its
previous activations via f(task::util_avg@dequeue), which is the new
_task_util_est(struct task_struct*) function added by this patch.

If a task should change its behavior and it runs longer in a new
activation, after a certain time its util_est will just track the
original PELT signal (i.e. task::util_avg).

The estimated utilization of cfs_rq is defined only for root ones.
That's because the only sensible consumer of this signal are the
scheduler and schedutil when looking for the overall CPU utilization
due to FAIR tasks.
For this reason, the estimated utilization of a root cfs_rq is simply
defined as:

    util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued)

where:

    cfs_rq::util_est::enqueued = sum(_task_util_est(task))
                                 for each RUNNABLE task on that root cfs_rq

It's worth noting that the estimated utilization is tracked only for
objects of interests, specifically:
 - Tasks: to better support tasks placement decisions
 - root cfs_rqs: to better support both tasks placement decisions as
                 well as frequencies selection
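
A tiny stand-alone model of the two definitions quoted above, using made-up
numbers:

    #include <stdio.h>

    #define NR_TASKS 3

    static unsigned long max_ul(unsigned long a, unsigned long b)
    {
        return a > b ? a : b;
    }

    int main(void)
    {
        unsigned long task_est[NR_TASKS] = { 300, 150, 120 }; /* per-task estimates */
        unsigned long cfs_rq_util_avg = 420;                  /* decayed PELT value */
        unsigned long enqueued = 0;

        /* cfs_rq::util_est::enqueued = sum of estimates of RUNNABLE tasks */
        for (int i = 0; i < NR_TASKS; i++)
            enqueued += task_est[i];

        /* util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued) */
        printf("util_est(cfs_rq) = %lu\n", max_ul(cfs_rq_util_avg, enqueued));
        return 0;
    }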

Change-Id: Ic5cad5aab372a8a247024e5304e4a55191fe16ea
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Paul Turner <pjt@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
lakkyung.jung [Thu, 12 Jul 2018 03:58:44 +0000 (12:58 +0900)]
arm64: dtsi: Modify lbt ratio to spread task within coregroup.

Change-Id: I9eba9ffc0c9b9f42d9daca3474cd0570d6a45db4
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
lakkyung.jung [Thu, 12 Jul 2018 01:57:44 +0000 (10:57 +0900)]
arm64: dtsi: Modify ontime node structure.

 - Modify to define ontime conditions for each coregroup, not for each step
 - Rename threshold to boundary
 - Remove min-residency-us condition
 - Add coverage-ratio condition

Change-Id: I44e31dd1c68f017c288739699c74973c6d6d2107
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
Daeyeong Lee [Mon, 4 Jun 2018 06:57:36 +0000 (15:57 +0900)]
sched: ems: ontime: Rename ontime threshold to boundary.

Change-Id: I124f16d1cc884884fe0f58de5e871b53da6c1372
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Thu, 31 May 2018 07:41:40 +0000 (16:41 +0900)]
sched: ems: ontime: Clear unnecessary sequence to migrate ontime task.

Change-Id: I29083497168ad57712394d12ba98d1997f5a6cba
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Wed, 23 May 2018 06:10:24 +0000 (15:10 +0900)]
sched: ems: ontime: Change new entity's initial ontime load policy.

Change-Id: I4688cd1fb459ca74092b386356843b37d361b07a
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Fri, 18 May 2018 06:51:50 +0000 (15:51 +0900)]
sched: ems: ontime: Allow to migrate to active core within coverage ratio.

Change-Id: I501963c396772bdd5051e7c69e8d642bcbdfac59
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Mon, 14 May 2018 10:09:11 +0000 (19:09 +0900)]
sched: ems: ontime: Don't allow to down-migrate heaviest task.

Change-Id: I0daf9e82d69438155ce80c33a6a4709523462491
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Mon, 14 May 2018 01:52:03 +0000 (10:52 +0900)]
sched: ems: ontime: Use fit cpus when ontime migration.

Change-Id: Icea69935638628cb8dc41d38a47a9bc4046110b0
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Fri, 18 May 2018 02:01:56 +0000 (11:01 +0900)]
sched: ems: ontime: Use fit cpus when ontime task wake-up.

Change-Id: I143735486cb003fea16d80144bb67ffaeb2bf01e
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Fri, 18 May 2018 01:58:30 +0000 (10:58 +0900)]
sched: ems: ontime: Add API to find fit cpus for heavy task.

Change-Id: I833b0c6997c40eb239836ba54385d3acb782b9ec
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 11:05:16 +0000 (20:05 +0900)]
sched: ems: ontime: Modify message of ontime trace log.

Change-Id: I5560fd905fa03b77e4f609fc7d5d983b0405ac8c
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 10:56:09 +0000 (19:56 +0900)]
sched: ems: ontime: Remove distinction between ontime and normal task.

Change-Id: I343feff2d7db0d97d3813b570193f1ee8e3af93e
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 10:34:41 +0000 (19:34 +0900)]
sched: ems: ontime: Remove min_residency at ontime condition.

Change-Id: I2263bff40f49ff9c9f112aac1db0546330c1447f
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 10:11:05 +0000 (19:11 +0900)]
sched: ems: ontime: Modify sysfs node for new structure.

Change-Id: Iaf7001616dde4f95f126e2021e4efd42b90fcc69
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 08:06:21 +0000 (17:06 +0900)]
sched: ems: ontime: Modify ontime condition structure.

- Modify the ontime condition so that each coregroup has its own
- Use list_head struct instead of ontime_cond *next

Change-Id: I0e81c10af914a82d09ceedaf08975efdfb9f7e3a
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 07:09:07 +0000 (16:09 +0900)]
sched: ems: ontime: Code clean-up for readability.

Change-Id: I93097d20d27b7899526b3569aaf99378ff6e1856
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
Daeyeong Lee [Tue, 8 May 2018 06:18:47 +0000 (15:18 +0900)]
sched: ehmp: Check whether curr_task is prefer_perf when searching heavy task.

Change-Id: Ic9cd388cc173fdca6c43ec9dec0b4db6c16df305
Signed-off-by: Daeyeong Lee <daeyeong.lee@samsung.com>
lakkyung.jung [Thu, 12 Jul 2018 01:46:06 +0000 (10:46 +0900)]
Revert "sched: ems: Resolve prevent issue of ontime"

This reverts commit 7798e78210369f2f81eb4d65cd1bb5e5f301efb8.

Change-Id: I17d22ed1d1d0d347e64f2d33ebc75e4ac31a00f4

Soohyun Kim [Fri, 8 Jun 2018 07:25:00 +0000 (16:25 +0900)]
samsung: acme: skip unnecessary operation when frequency change.

Do nothing when the target frequency is the same as the old frequency.

Change-Id: I9ca59361dd4849f27d2a1828f8dba28060ead5ef
Signed-off-by: Soohyun Kim <soohyuni.kim@samsung.com>
lakkyung.jung [Thu, 12 Jul 2018 01:44:27 +0000 (10:44 +0900)]
arm64: dts: fill up the virtual cluster info

Change-Id: I6bd3b3e43ca4af9de069bdc8e4f2f6b8d22c58c4
Signed-off-by: lakkyung.jung <lakkyung.jung@samsung.com>
Johnlay Park [Fri, 8 Jun 2018 14:45:20 +0000 (23:45 +0900)]
soc: samsung: cpupm: introduce virtual cluster info

to abstract the CPU topology

Change-Id: I4e13b6a45059743ceab143ba95826a35670b5357
Signed-off-by: Johnlay Park <jonglae.park@samsung.com>
Johnlay Park [Fri, 8 Jun 2018 11:41:00 +0000 (20:41 +0900)]
cpufreq: acme: optimize the clock handling when buck off

Change-Id: I661307b7c2045111166bbb269f675c8ba691841b
Signed-off-by: Johnlay Park <jonglae.park@samsung.com>
Davidlohr Bueso [Mon, 2 Apr 2018 16:49:54 +0000 (09:49 -0700)]
sched/rt: Fix rq->clock_update_flags < RQCF_ACT_SKIP warning

[ Upstream commit d29a20645d5e929aa7e8616f28e5d8e1c49263ec ]

While running rt-tests' pi_stress program I got the following splat:

  rq->clock_update_flags < RQCF_ACT_SKIP
  WARNING: CPU: 27 PID: 0 at kernel/sched/sched.h:960 assert_clock_updated.isra.38.part.39+0x13/0x20

  [...]

  <IRQ>
  enqueue_top_rt_rq+0xf4/0x150
  ? cpufreq_dbs_governor_start+0x170/0x170
  sched_rt_rq_enqueue+0x65/0x80
  sched_rt_period_timer+0x156/0x360
  ? sched_rt_rq_enqueue+0x80/0x80
  __hrtimer_run_queues+0xfa/0x260
  hrtimer_interrupt+0xcb/0x220
  smp_apic_timer_interrupt+0x62/0x120
  apic_timer_interrupt+0xf/0x20
  </IRQ>

  [...]

  do_idle+0x183/0x1e0
  cpu_startup_entry+0x5f/0x70
  start_secondary+0x192/0x1d0
  secondary_startup_64+0xa5/0xb0

We can get rid of it by the "traditional" means of adding an
update_rq_clock() call after acquiring the rq->lock in
do_sched_rt_period_timer().

The case for the RT task throttling (which this workload also hits)
can be ignored in that the skip_update call is actually bogus and
quite the contrary (the request bits are removed/reverted).

By setting RQCF_UPDATED we really don't care if the skip is happening
or not and will therefore make the assert_clock_updated() check happy.
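
A stand-alone model of the pattern the fix applies (illustrative user-space
code, not kernel/sched/rt.c): refresh the cached clock immediately after
taking the lock, so later readers never see a stale value.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    /* Toy runqueue with a cached clock and an "is it fresh?" flag, mirroring
     * what assert_clock_updated() checks in the kernel. */
    struct rq {
        pthread_mutex_t lock;
        long long clock_ns;
        int clock_updated;
    };

    static void update_rq_clock(struct rq *rq)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        rq->clock_ns = (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        rq->clock_updated = 1;
    }

    static void enqueue_rt(struct rq *rq)
    {
        if (!rq->clock_updated)
            fprintf(stderr, "WARN: rq clock read before update\n");
        printf("enqueue at %lld ns\n", rq->clock_ns);
    }

    int main(void)
    {
        struct rq rq = { .lock = PTHREAD_MUTEX_INITIALIZER };

        pthread_mutex_lock(&rq.lock);
        update_rq_clock(&rq);   /* the call added right after taking the lock */
        enqueue_rt(&rq);
        pthread_mutex_unlock(&rq.lock);
        return 0;
    }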

Change-Id: I12014f33e599de5a35f39dd742555a2f99403b20
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dave@stgolabs.net
Cc: linux-kernel@vger.kernel.org
Cc: rostedt@goodmis.org
Link: http://lkml.kernel.org/r/20180402164954.16255-1-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joel Fernandes [Thu, 9 Nov 2017 18:52:19 +0000 (10:52 -0800)]
UPSTREAM: sched/fair: Consider RT/IRQ pressure in capacity_spare_wake

capacity_spare_wake in the slow path influences choice of idlest groups,
as we search for groups with maximum spare capacity. In scenarios where
RT pressure is high, a sub optimal group can be chosen and hurt
performance of the task being woken up.
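
A small stand-alone sketch of the idea, with illustrative names and numbers:
spare capacity is measured against the RT/IRQ-adjusted capacity rather than
the original capacity.

    #include <stdio.h>

    static long max0(long v) { return v > 0 ? v : 0; }

    /* Spare capacity for wakeup placement, after subtracting RT/IRQ pressure
     * from the CPU's original capacity (akin to using capacity_of() instead
     * of capacity_orig_of()). */
    static long spare_capacity(unsigned long capacity_orig, unsigned long rt_irq_pressure,
                               unsigned long cfs_util)
    {
        unsigned long capacity = capacity_orig - rt_irq_pressure;

        return max0((long)capacity - (long)cfs_util);
    }

    int main(void)
    {
        /* A CPU loaded with RT work looks far less spare once pressure counts. */
        printf("ignoring RT pressure   : %ld\n", max0(1024 - 200));
        printf("considering RT pressure: %ld\n", spare_capacity(1024, 600, 200));
        return 0;
    }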

Several tests with results are included below to show improvements with
this change.

1) Hackbench on Pixel 2 Android device (4x4 ARM64 Octa core)
------------------------------------------------------------
Here we have RT activity running on big CPU cluster induced with rt-app,
and running hackbench in parallel. The RT tasks are bound to 4 CPUs on
the big cluster (cpu 4,5,6,7) and have 100ms periodicity with
runtime=20ms sleep=80ms.

Hackbench shows big benefit (30%) improvement when number of tasks is 8
and 32: Note: data is completion time in seconds (lower is better).
Number of loops for 8 and 16 tasks is 50000, and for 32 tasks its 20000.
+--------+-----+-------+-------------------+---------------------------+
| groups | fds | tasks | Without Patch     | With Patch                |
+--------+-----+-------+---------+---------+-----------------+---------+
|        |     |       | Mean    | Stdev   | Mean            | Stdev   |
|        |     |       +-------------------+-----------------+---------+
|      1 |   8 |     8 | 1.0534  | 0.13722 | 0.7293 (+30.7%) | 0.02653 |
|      2 |   8 |    16 | 1.6219  | 0.16631 | 1.6391 (-1%)    | 0.24001 |
|      4 |   8 |    32 | 1.2538  | 0.13086 | 1.1080 (+11.6%) | 0.16201 |
+--------+-----+-------+---------+---------+-----------------+---------+

2) Rohit ran barrier.c test (details below) with following improvements:
------------------------------------------------------------------------
This was Rohit's original use case for a patch he posted at [1] however
from his recent tests he showed my patch can replace his slow path
changes [1] and there's no need to selectively scan/skip CPUs in
find_idlest_group_cpu in the slow path to get the improvement he sees.

barrier.c (open_mp code) as a micro-benchmark. It does a number of
iterations and barrier sync at the end of each for loop.

Here barrier,c is running in along with ping on CPU 0 and 1 as:
'ping -l 10000 -q -s 10 -f hostX'

barrier.c can be found at:
http://www.spinics.net/lists/kernel/msg2506955.html

Following are the results for the iterations per second with this
micro-benchmark (higher is better), on a 44 core, 2 socket 88 Threads
Intel x86 machine:
+--------+------------------+---------------------------+
|Threads | Without patch    | With patch                |
|        |                  |                           |
+--------+--------+---------+-----------------+---------+
|        | Mean   | Std Dev | Mean            | Std Dev |
+--------+--------+---------+-----------------+---------+
|1       | 539.36 | 60.16   | 572.54 (+6.15%) | 40.95   |
|2       | 481.01 | 19.32   | 530.64 (+10.32%)| 56.16   |
|4       | 474.78 | 22.28   | 479.46 (+0.99%) | 18.89   |
|8       | 450.06 | 24.91   | 447.82 (-0.50%) | 12.36   |
|16      | 436.99 | 22.57   | 441.88 (+1.12%) | 7.39    |
|32      | 388.28 | 55.59   | 429.4  (+10.59%)| 31.14   |
|64      | 314.62 | 6.33    | 311.81 (-0.89%) | 11.99   |
+--------+--------+---------+-----------------+---------+

3) ping+hackbench test on bare-metal sever (Rohit ran this test)
----------------------------------------------------------------
Here hackbench is running in threaded mode along
with, running ping on CPU 0 and 1 as:
'ping -l 10000 -q -s 10 -f hostX'

This test is running on 2 socket, 20 core and 40 threads Intel x86
machine:
Number of loops is 10000 and runtime is in seconds (Lower is better).

+--------------+-----------------+--------------------------+
|Task Groups   | Without patch   |  With patch              |
|              +-------+---------+----------------+---------+
|(Groups of 40)| Mean  | Std Dev |  Mean          | Std Dev |
+--------------+-------+---------+----------------+---------+
|1             | 0.851 | 0.007   |  0.828 (+2.77%)| 0.032   |
|2             | 1.083 | 0.203   |  1.087 (-0.37%)| 0.246   |
|4             | 1.601 | 0.051   |  1.611 (-0.62%)| 0.055   |
|8             | 2.837 | 0.060   |  2.827 (+0.35%)| 0.031   |
|16            | 5.139 | 0.133   |  5.107 (+0.63%)| 0.085   |
|25            | 7.569 | 0.142   |  7.503 (+0.88%)| 0.143   |
+--------------+-------+---------+----------------+---------+

[1] https://patchwork.kernel.org/patch/9991635/

Matt Fleming also ran cyclictest and several different hackbench tests
on his test machines to sanity-check that the patch doesn't harm any
of his usecases.

Change-Id: I75826f9541ebe2e324fac2454790c5f5dff0d9b6
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Morten Ramussen <morten.rasmussen@arm.com>
Cc: Brendan Jackman <brendan.jackman@arm.com>
Tested-by: Rohit Jain <rohit.k.jain@oracle.com>
Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Rafael J. Wysocki [Wed, 9 May 2018 09:44:56 +0000 (11:44 +0200)]
cpufreq: schedutil: Avoid using invalid next_freq.

commit 97739501f207efe33145b918817f305b822987f8 upstream.

If the next_freq field of struct sugov_policy is set to UINT_MAX,
it shouldn't be used for updating the CPU frequency (this is a
special "invalid" value), but after commit b7eaf1aab9f8 (cpufreq:
schedutil: Avoid reducing frequency of busy CPUs prematurely) it
may be passed as the new frequency to sugov_update_commit() in
sugov_update_single().

Fix that by adding an extra check for the special UINT_MAX value
of next_freq to sugov_update_single().
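
A minimal sketch of the guard, assuming a simplified shape of
sugov_update_single() (not the exact upstream diff):

    #include <limits.h>
    #include <stdio.h>

    #define SUGOV_FREQ_INVALID UINT_MAX     /* "next_freq not yet valid" sentinel */

    struct sugov_policy { unsigned int next_freq; };

    /* When the CPU was recently busy, keep the cached next_freq instead of
     * reducing frequency, but only if that cached value is valid. */
    static unsigned int pick_next_freq(struct sugov_policy *sg, unsigned int next_f, int busy)
    {
        if (busy && next_f < sg->next_freq && sg->next_freq != SUGOV_FREQ_INVALID)
            next_f = sg->next_freq;
        return next_f;
    }

    int main(void)
    {
        struct sugov_policy sg = { .next_freq = SUGOV_FREQ_INVALID };

        printf("%u\n", pick_next_freq(&sg, 1100000, 1));  /* uses the computed freq */
        sg.next_freq = 1700000;
        printf("%u\n", pick_next_freq(&sg, 1100000, 1));  /* keeps the cached freq  */
        return 0;
    }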

Change-Id: I516963d8b0fda677e9d82290c0f3d4dc3cb6e477
Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely)
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Janghyuck Kim [Fri, 20 Jul 2018 04:14:27 +0000 (13:14 +0900)]
media: scaler: remove unnecessary definition

Change-Id: I39a83673bb2b5f9cfc740c97cf32163371102229
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Wed, 20 Jun 2018 04:32:45 +0000 (13:32 +0900)]
[COMMON] media: scaler: support linear P010 format

Change-Id: I0a647422214c8fc8b7f9cf60972265b27365fab6
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Wed, 20 Jun 2018 04:32:12 +0000 (13:32 +0900)]
include: linux: add linear P010 format

Change-Id: I9dbe5a25b615704a6f7c566094850ca42646d3c3
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Wed, 2 May 2018 05:32:19 +0000 (14:32 +0900)]
[COMMON] media: m2m1shot: support IO coherency

IO coherency is supported by checking the dma-coherent property and passing
the IOMMU_CACHE property to ion_iovmm_map().

Change-Id: I5afb0849ebf11030e6c765e160e50dd973f8df79
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Wed, 20 Jun 2018 07:46:08 +0000 (16:46 +0900)]
media: videobuf2-core: add error handling for fence

Vb2-core supports in-fence and out-fence for buffer synchronization.
However, an in-fence might not be signaled due to an unexpected situation,
and the current logic waits for fence signaling indefinitely.

This patch adds a timer when an unsignaled in-fence arrives; the timer
expires if the in-fence callback is not called, which means the in-fence
was not signaled within 1000 ms. In this timeout case the buffer is still
passed to the driver via the buf_queue callback, but it is marked with an
error status.

To support this error handling, the vb2-core logic is changed to use a
workqueue to avoid calling the buf_queue callback in interrupt context.
Change-Id: Icd533c355fa83605e958c8058a676256a6940f14
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Mon, 9 Jul 2018 04:51:23 +0000 (13:51 +0900)]
media: videobuf2-core: fix wrong jump

In error case handling, a wrong jump was detected and fixed.

Change-Id: I76006903406354c69b232fa6eee22034bfae0101
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Fri, 6 Jul 2018 10:10:19 +0000 (19:10 +0900)]
media: scaler: improve alignment check

This patch improves alignment checking so that an error is returned
instead of raising an error interrupt after the H/W operation.

Change-Id: I06df771d24fe5d1626bf4c33b6c1da1e1ecf3ba8
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>
Janghyuck Kim [Tue, 26 Jun 2018 11:43:53 +0000 (20:43 +0900)]
media: scaler: add job scheduling in buf_queue

The buf_queue callback will be called when the fence is signaled.
At that time, the job should be tried, because this buffer
queueing might satisfy the condition to run. The job would never be
triggered without this scheduling.

Change-Id: Iba5a25b1280e60b35f431266d289f1d5b2af5fdb
Signed-off-by: Janghyuck Kim <janghyuck.kim@samsung.com>