+config DISABLE_CPU_SCHED_DOMAIN_BALANCE
+ bool "(EXPERIMENTAL) Disable CPU level scheduler load-balancing"
+ help
+ Disables scheduler load-balancing at CPU sched domain level.
+
+config SCHED_HMP
+ bool "(EXPERIMENTAL) Heterogeneous multiprocessor scheduling"
+ depends on DISABLE_CPU_SCHED_DOMAIN_BALANCE && SCHED_MC && FAIR_GROUP_SCHED && !SCHED_AUTOGROUP
+ help
+ Experimental scheduler optimizations for heterogeneous platforms.
+ Attempts to introspectively select task affinity to optimize power
+ and performance. Basic support for multiple (>2) CPU types is in
+ place, but it has only been tested with two CPU types.
+ There is currently no support for migration of task groups, hence
+ !SCHED_AUTOGROUP. Furthermore, normal load-balancing must be disabled
+ between cpus of different type (DISABLE_CPU_SCHED_DOMAIN_BALANCE).
+
+config SCHED_HMP_PRIO_FILTER
+ bool "(EXPERIMENTAL) Filter HMP migrations by task priority"
+ depends on SCHED_HMP
+ help
+ Enables a task-priority-based HMP migration filter. Any task with
+ a nice value above the threshold will always run on the low-power
+ CPUs, which have less compute capacity.
+
+config SCHED_HMP_PRIO_FILTER_VAL
+ int "NICE priority threshold"
+ default 5
+ depends on SCHED_HMP_PRIO_FILTER
+
+config HMP_FAST_CPU_MASK
+ string "HMP scheduler fast CPU mask"
+ depends on SCHED_HMP
+ help
+ Leave empty to use device tree information.
+ Specify the CPU IDs of the fast CPUs in the system as a list string,
+ e.g. CPUs 0 and 1 should be specified as 0-1.
+
+config HMP_SLOW_CPU_MASK
+ string "HMP scheduler slow CPU mask"
+ depends on SCHED_HMP
+ help
+ Leave empty to use device tree information.
+ Specify the CPU IDs of the slow CPUs in the system as a list string,
+ e.g. CPUs 0 and 1 should be specified as 0-1.
+
+config HMP_VARIABLE_SCALE
+ bool "Allows changing the load tracking scale through sysfs"
+ depends on SCHED_HMP
+ help
+ When turned on, this option exports the thresholds and load average
+ period value for the load tracking patches through sysfs.
+ The values can be modified to change the rate of load accumulation
+ and the thresholds used for HMP migration.
+ load_avg_period_ms is the time in ms it takes a task that starts a
+ busy loop from an idle state (load average ratio of 0) to reach a
+ load average of 0.5.
+ up_threshold and down_threshold are the thresholds for migrating a
+ task to a faster CPU or back to a slower CPU, respectively.
+ The {up,down}_threshold values are divided by 1024 before being
+ compared to the load average.
+ For example, with load_avg_period_ms = 128 and up_threshold = 512,
+ a running task with a load of 0 will be migrated to a bigger CPU
+ after 128ms, because after 128ms its load_avg_ratio is 0.5 and the
+ effective up_threshold is 0.5.
+ This patch has the same behavior as changing the y of the load
+ average computation to
+ (1002/1024)^(LOAD_AVG_PERIOD/load_avg_period_ms)
+ but it removes intermediate overflows in the computation.
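The worked example in the help text above can be checked numerically. The sketch below is an illustration only, not kernel code: the function names and the floating-point decay model are mine, and the kernel's fixed-point PELT implementation differs in detail.

```python
# Hedged illustration of the variable-scale load tracking described
# in the HMP_VARIABLE_SCALE help text. Names are hypothetical.

def decay_factor(load_avg_period_ms):
    # Per-ms decay y chosen so the load average halves every
    # load_avg_period_ms milliseconds: y ** load_avg_period_ms == 0.5.
    return 0.5 ** (1.0 / load_avg_period_ms)

def load_avg_ratio(busy_ms, load_avg_period_ms):
    # Ratio accumulated by a task that starts at 0 load and then
    # runs continuously for busy_ms milliseconds: 1 - y**busy_ms.
    y = decay_factor(load_avg_period_ms)
    return 1.0 - y ** busy_ms

def should_migrate_up(busy_ms, load_avg_period_ms, up_threshold):
    # up_threshold is divided by 1024 before the comparison,
    # as stated in the help text.
    return load_avg_ratio(busy_ms, load_avg_period_ms) >= up_threshold / 1024.0

# load_avg_period_ms = 128, up_threshold = 512: after 128 ms of
# busy-looping the ratio is 0.5, which meets the 512/1024 threshold.
print(should_migrate_up(128, 128, 512))  # True
```

This reproduces the example from the help text: a previously idle task crosses up_threshold = 512 after exactly one load_avg_period_ms of continuous running.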
+
+config MET_SCHED_HMP
+ bool "(EXPERIMENTAL) MET SCHED HMP Info"
+ depends on SCHED_HMP_ENHANCEMENT
+ depends on HMP_TRACER
+ help
+ MET SCHED HMP Info
+
+config HMP_FREQUENCY_INVARIANT_SCALE
+ bool "(EXPERIMENTAL) Frequency-Invariant Tracked Load for HMP"
+ depends on HMP_VARIABLE_SCALE && CPU_FREQ
+ depends on !ARCH_SCALE_INVARIANT_CPU_CAPACITY
+ help
+ Scales the current load contribution in line with the frequency
+ of the CPU that the task was executed on.
+ In this version, we use a simple linear scale derived from the
+ maximum frequency reported by CPUFreq.
+ Restricting tracked load to be scaled by the CPU's frequency
+ represents the consumption of possible compute capacity
+ (rather than consumption of actual instantaneous capacity as
+ normal) and allows the HMP migration's simple threshold
+ migration strategy to interact more predictably with CPUFreq's
+ asynchronous compute capacity changes.
+
+config SCHED_HMP_ENHANCEMENT
+ bool "(EXPERIMENTAL) HMP Enhancement"
+ depends on SCHED_HMP
+ help
+ HMP Enhancement
+
+config HMP_TRACER
+ bool "(EXPERIMENTAL) Profile HMP scheduler"
+ depends on SCHED_HMP_ENHANCEMENT
+ help
+ Profile HMP scheduler
+
+config HMP_DYNAMIC_THRESHOLD
+ bool "(EXPERIMENTAL) Dynamically adjust task migration threshold"
+ depends on SCHED_HMP_ENHANCEMENT
+ help
+ Dynamically adjust task migration threshold according to current system load
+
+config HMP_GLOBAL_BALANCE
+ bool "(EXPERIMENTAL) Enhance HMP global load balance"
+ depends on SCHED_HMP_ENHANCEMENT
+ help
+ Enhance HMP global load balance
+
+config HMP_TASK_ASSIGNMENT
+ bool "(EXPERIMENTAL) Enhance HMP task assignment"
+ depends on SCHED_HMP_ENHANCEMENT
+ help
+ Enhance HMP task assignment
+
+config HMP_DISCARD_CFS_SELECTION_RESULT
+ bool "(EXPERIMENTAL) Discard CFS runqueue selection result"
+ depends on SCHED_HMP_ENHANCEMENT && HMP_TASK_ASSIGNMENT
+ help
+ Discard CFS runqueue selection result even if only one cluster exists
+
+config HMP_PACK_SMALL_TASK
+ bool "(EXPERIMENTAL) Packing Small Tasks"
+ depends on SCHED_HMP_ENHANCEMENT
+ help
+ This option enables Packing Small Tasks
+
+config HMP_PACK_BUDDY_INFO
+ bool "(EXPERIMENTAL) Packing Small Tasks Buddy Information Log"
+ depends on SCHED_HMP_ENHANCEMENT && HMP_PACK_SMALL_TASK
+ help
+ This option enables Packing Small Tasks Buddy Information Log
+
+config HMP_LAZY_BALANCE
+ bool "(EXPERIMENTAL) Lazy Balance"
+ depends on SCHED_HMP_ENHANCEMENT && HMP_PACK_SMALL_TASK
+ help
+ This option enables Lazy Balance
+
+config HMP_POWER_AWARE_CONTROLLER
+ bool "(EXPERIMENTAL) Power-aware Scheduler for b.L MP Controller"
+ depends on SCHED_HMP_ENHANCEMENT && HMP_PACK_SMALL_TASK && HMP_LAZY_BALANCE
+ help
+ Power-aware scheduler for b.L MP controller and status interface
+
+config HEVTASK_INTERFACE
+ bool "task status interface"
+ help
+ This option provides an interface to show task status
+
+config ARCH_SCALE_INVARIANT_CPU_CAPACITY
+ bool "(EXPERIMENTAL) Scale-Invariant CPU Compute Capacity Recording"
+ depends on CPU_FREQ
+ help
+ Provides a new measure of maximum and instantaneous CPU compute
+ capacity, derived from a table of relative compute performance
+ for each core type present in the system. The table is an
+ estimate and specific core performance may be different for
+ any particular workload. The measure includes the relative
+ performance and a linear scale of current to maximum frequency
+ such that at maximum frequency (as expressed in the DTB) the
+ reported compute capacity will be equal to the estimated
+ performance from the table. Values range between 0 and 1023 where
+ 1023 is the highest capacity available in the system.
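The capacity measure described above can be sketched as follows. This is a hedged illustration under my own assumptions: the relative-performance values and function names are invented for the example, whereas a real system would take them from the DTB and platform tables.

```python
# Hedged sketch of the scale-invariant CPU capacity measure described
# in the ARCH_SCALE_INVARIANT_CPU_CAPACITY help text.

SCHED_CAPACITY = 1023  # top of the reported 0..1023 capacity range

# Estimated relative compute performance per core type (hypothetical
# values; real systems derive these from the DTB / platform tables).
relative_perf = {"big": 1.0, "little": 0.4}

def cpu_capacity(core_type, cur_freq_khz, max_freq_khz):
    # Linear scale of current to maximum frequency, weighted by the
    # core type's relative performance. The highest-performance core
    # running at its maximum frequency reports SCHED_CAPACITY (1023).
    freq_scale = cur_freq_khz / max_freq_khz
    return int(SCHED_CAPACITY * relative_perf[core_type] * freq_scale)

print(cpu_capacity("big", 2000000, 2000000))    # 1023
print(cpu_capacity("little", 500000, 1000000))  # 204
```

At maximum frequency the reported capacity equals the table estimate, as the help text specifies; below maximum it falls off linearly with frequency.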
+