Matthias Kaehlcke [Mon, 8 May 2017 22:55:05 +0000 (15:55 -0700)]
jiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smp
commit
7c30f352c852bae2715ad65ac4a38ca9af7d7696 upstream.
jiffies_64 is defined in kernel/time/timer.c with
____cacheline_aligned_in_smp, however this macro is not part of the
declaration of jiffies and jiffies_64 in jiffies.h.
As a result clang generates the following warning:
kernel/time/timer.c:57:26: error: section does not match previous declaration [-Werror,-Wsection]
__visible u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;
^
include/linux/cache.h:39:36: note: expanded from macro '__cacheline_aligned_in_smp'
^
include/linux/cache.h:34:4: note: expanded from macro '__cacheline_aligned'
__section__(".data..cacheline_aligned")))
^
include/linux/jiffies.h:77:12: note: previous attribute is here
extern u64 __jiffy_data jiffies_64;
^
include/linux/jiffies.h:70:38: note: expanded from macro '__jiffy_data'
Link: http://lkml.kernel.org/r/20170403190200.70273-1-mka@chromium.org
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Cc: Grant Grundler <grundler@chromium.org>
Cc: Michael Davidson <md@google.com>
Cc: Greg Hackmann <ghackmann@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Kaehlcke [Wed, 12 Apr 2017 18:20:29 +0000 (11:20 -0700)]
cpumask: Add helper cpumask_available()
commit
f7e30f01a9e221067bb4b579e3cfc25cd2617467 upstream.
With CONFIG_CPUMASK_OFFSTACK=y cpumask_var_t is a struct cpumask
pointer, otherwise a struct cpumask array with a single element.
Some code dealing with cpumasks needs to validate that a cpumask_var_t
is not a NULL pointer when CONFIG_CPUMASK_OFFSTACK=y. This is typically
done by performing the check always, regardless of the underlying type
of cpumask_var_t. This works in both cases, however clang raises a
warning like this when CONFIG_CPUMASK_OFFSTACK=n:
kernel/irq/manage.c:839:28: error: address of array
'desc->irq_common_data.affinity' will always evaluate to 'true'
[-Werror,-Wpointer-bool-conversion]
Add the inline helper cpumask_available() which only performs the
pointer check if CONFIG_CPUMASK_OFFSTACK=y.
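A minimal sketch of what such a helper can look like (stand-in struct and simplified definitions for illustration, not the verbatim kernel code):

#include <stdbool.h>
#include <stddef.h>

struct cpumask { unsigned long bits[4]; };	/* stand-in for the kernel type */

#ifdef CONFIG_CPUMASK_OFFSTACK
typedef struct cpumask *cpumask_var_t;
static inline bool cpumask_available(cpumask_var_t mask)
{
	return mask != NULL;	/* the pointer really can be NULL here */
}
#else
typedef struct cpumask cpumask_var_t[1];
static inline bool cpumask_available(cpumask_var_t mask)
{
	return true;		/* array storage always exists */
}
#endif

Call sites can then test cpumask_available(mask) unconditionally; in the =n case the check folds away instead of tripping the warning above.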
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Cc: Grant Grundler <grundler@chromium.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Greg Hackmann <ghackmann@google.com>
Cc: Michael Davidson <md@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170412182030.83657-1-mka@chromium.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Kaehlcke [Wed, 12 Apr 2017 18:20:30 +0000 (11:20 -0700)]
genirq: Use cpumask_available() for check of cpumask variable
commit
d170fe7dd992b313d4851ae5ab77ee7a51ed8c72 upstream.
This fixes the following clang warning when CONFIG_CPUMASK_OFFSTACK=n:
kernel/irq/manage.c:839:28: error: address of array
'desc->irq_common_data.affinity' will always evaluate to 'true'
[-Werror,-Wpointer-bool-conversion]
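A hedged sketch of the resulting call-site pattern in kernel/irq/manage.c (illustrative, not necessarily the exact upstream hunk):

	/* before: the address of the embedded cpumask is never NULL, hence the warning */
	if (desc->irq_common_data.affinity)
		cpumask_copy(mask, desc->irq_common_data.affinity);

	/* after: the helper performs a real NULL check only when the field is a pointer */
	if (cpumask_available(desc->irq_common_data.affinity))
		cpumask_copy(mask, desc->irq_common_data.affinity);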
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Cc: Grant Grundler <grundler@chromium.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Greg Hackmann <ghackmann@google.com>
Cc: Michael Davidson <md@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20170412182030.83657-2-mka@chromium.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Kaehlcke [Thu, 16 Mar 2017 22:26:52 +0000 (15:26 -0700)]
selinux: Remove unnecessary check of array base in selinux_set_mapping()
commit
342e91578eb6909529bc7095964cd44b9c057c4e upstream.
'perms' will never be NULL since it isn't a plain pointer but an array
of u32 values.
This fixes the following warning when building with clang:
security/selinux/ss/services.c:158:16: error: address of array
'p_in->perms' will always evaluate to 'true'
[-Werror,-Wpointer-bool-conversion]
while (p_in->perms && p_in->perms[k]) {
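Since 'perms' is an array member, its address can never be NULL and the first half of the condition is dead code. A small self-contained illustration of the same pattern (hypothetical struct, not the SELinux code):

#include <stdio.h>

struct mapping { unsigned int perms[32]; };	/* hypothetical stand-in */

static int count_perms(const struct mapping *m)
{
	int k = 0;

	/* A "m->perms &&" guard here would always be true and would trip
	 * -Wpointer-bool-conversion; the zero sentinel alone ends the loop. */
	while (m->perms[k])
		k++;
	return k;
}

int main(void)
{
	struct mapping m = { .perms = { 4, 2, 1 } };	/* remaining entries are zero */

	printf("%d\n", count_perms(&m));	/* prints 3 */
	return 0;
}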
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nick Desaulniers [Thu, 3 Aug 2017 18:03:58 +0000 (11:03 -0700)]
arm64: avoid overflow in VA_START and PAGE_OFFSET
commit
82cd588052815eb4146f9f7c5347ca5e32c56360 upstream.
The bitmask used to define these values produces overflow, as seen by
this compiler warning:
arch/arm64/kernel/head.S:47:8: warning:
integer overflow in preprocessor expression
#elif (PAGE_OFFSET & 0x1fffff) != 0
^~~~~~~~~~~
arch/arm64/include/asm/memory.h:52:46: note:
expanded from macro 'PAGE_OFFSET'
#define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS -
1))
~~~~~~~~~~~~~~~~~~ ^
It would be preferable to use GENMASK_ULL() instead, but it's not set
up to be used from assembly (the UL() macro token pastes UL suffixes
when not included in assembly sources).
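The change reworks the definition so the constant is built by subtraction instead of left-shifting an all-ones value. A hedged, self-contained sketch of that formulation (VA_BITS, UL() and the alignment check are stand-ins mirroring the quoted code, not the verbatim diff):

#define UL(x)		x##UL
#define VA_BITS		48	/* illustrative configuration value */

/* assumed shape of the overflow-free definitions */
#define VA_START	(UL(0xffffffffffffffff) - (UL(1) << VA_BITS) + 1)
#define PAGE_OFFSET	(UL(0xffffffffffffffff) - (UL(1) << (VA_BITS - 1)) + 1)

/* mirrors the head.S sanity check quoted above: the base must be 2 MiB aligned */
_Static_assert((PAGE_OFFSET & 0x1fffff) == 0, "PAGE_OFFSET must be 2 MiB aligned");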
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Yury Norov <ynorov@caviumnetworks.com>
Suggested-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[natechancellor: KIMAGE_VADDR doesn't exist]
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Benedict [Tue, 27 Aug 2019 12:16:30 +0000 (22:16 +1000)]
drivers: acpi: silence pointer bool conversion warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Tue, 27 Aug 2019 12:14:36 +0000 (22:14 +1000)]
mm: silence enum conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Sat, 8 Jun 2019 17:44:13 +0000 (03:44 +1000)]
soc: samsung: cal-if: add volatile keyword to prevent optimization by clang
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Sat, 8 Jun 2019 17:27:42 +0000 (03:27 +1000)]
ion: silence logical-not-parentheses warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 22 Apr 2019 04:39:31 +0000 (14:39 +1000)]
dpu: use usleep_range instead of udelay
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Tue, 26 Mar 2019 08:41:29 +0000 (01:41 -0700)]
mptcp: disable incompatible-pointer-types warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Thu, 25 Apr 2019 15:12:44 +0000 (01:12 +1000)]
cal-if: disable sizeof-pointer-div warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 12:38:12 +0000 (22:38 +1000)]
sec_debug: silence null-dereference warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 11:47:48 +0000 (21:47 +1000)]
decon: merge all into decon_data_path
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 11:32:16 +0000 (21:32 +1000)]
drivers: usb: silence enum-conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 11:28:29 +0000 (21:28 +1000)]
drivers: thermal: silence enum-conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 11:24:45 +0000 (21:24 +1000)]
drivers: fimc-is2: silence enum-conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 11:09:52 +0000 (21:09 +1000)]
drivers: phy: silence enum-conversion warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 22 Apr 2019 05:13:48 +0000 (15:13 +1000)]
s5p_mfc_enc_internal: replace min/max value to fix shift-overflow warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 08:51:33 +0000 (18:51 +1000)]
drivers: modem_v1: silence logical-not-parentheses warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 08:47:12 +0000 (18:47 +1000)]
drivers: modem_v1: silence enum-conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Thu, 25 Apr 2019 15:21:58 +0000 (01:21 +1000)]
drivers: touchscreen: fix strncat-size warnings
S10 sec_ts as reference
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 08:32:11 +0000 (18:32 +1000)]
drivers: vfs8xxx: silence enum-conversion warning
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Mon, 3 Jun 2019 08:21:08 +0000 (18:21 +1000)]
battery_v2: silence enum-conversion warnings
Signed-off-by: Michael Benedict <michaelbt@live.com>
Nick Desaulniers [Mon, 10 Jul 2017 22:47:26 +0000 (15:47 -0700)]
UPSTREAM: mm/zsmalloc.c: fix -Wunneeded-internal-declaration warning
is_first_page() is only called from the macro VM_BUG_ON_PAGE() which is
only compiled in as a runtime check when CONFIG_DEBUG_VM is set,
otherwise is checked at compile time and not actually compiled in.
Fixes the following warning, found with Clang:
mm/zsmalloc.c:472:12: warning: function 'is_first_page' is not needed and will not be emitted [-Wunneeded-internal-declaration]
static int is_first_page(struct page *page)
^
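The warning is easy to reproduce in isolation; a minimal user-space sketch of the same pattern (hypothetical names, not the zsmalloc code): when the checks are compiled out, the assertion macro only references its argument inside sizeof(), so the static function is never emitted and clang flags it.

#ifdef DEBUG_CHECKS
#define BUG_ON_PAGE(cond)	do { if (cond) __builtin_trap(); } while (0)
#else
#define BUG_ON_PAGE(cond)	((void)sizeof(cond))	/* unevaluated, compile-time only */
#endif

static int check_page(int idx)		/* only referenced inside BUG_ON_PAGE() */
{
	return idx == 0;
}

void touch(int idx)
{
	BUG_ON_PAGE(check_page(idx));	/* with DEBUG_CHECKS unset: -Wunneeded-internal-declaration */
}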
Link: http://lkml.kernel.org/r/20170524053859.29059-1-nick.desaulniers@gmail.com
Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
3457f4147675108aa83f9f33c136f06bb9f8518f)
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Matthias Kaehlcke [Fri, 8 Sep 2017 23:13:02 +0000 (16:13 -0700)]
UPSTREAM: mm/zsmalloc.c: change stat type parameter to int
zs_stat_inc/dec/get() uses enum zs_stat_type for the stat type, however
some callers pass an enum fullness_group value. Change the type to int to
reflect the actual use of the functions and get rid of 'enum-conversion'
warnings
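A hedged illustration of the warning class being silenced (hypothetical, heavily simplified signature; the real zs_stat_inc() also takes a size class and a count):

enum zs_stat_type { OBJ_ALLOCATED, OBJ_USED };
enum fullness_group { ZS_EMPTY, ZS_FULL };

static void zs_stat_inc(int type) { (void)type; }	/* was: enum zs_stat_type */

void update_stats(void)
{
	zs_stat_inc(OBJ_USED);	/* always fine */
	zs_stat_inc(ZS_FULL);	/* fine with int; clang's -Wenum-conversion fired
				 * when the parameter was enum zs_stat_type */
}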
Link: http://lkml.kernel.org/r/20170731175000.56538-1-mka@chromium.org
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Doug Anderson <dianders@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
3eb95feac113d8ebad5b7b5189a65efcbd95a749)
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Matthias Kaehlcke [Mon, 30 Oct 2017 18:08:16 +0000 (11:08 -0700)]
BACKPORT: sched/sysctl: Fix attributes of some extern declarations
The definition of sysctl_sched_migration_cost, sysctl_sched_nr_migrate
and sysctl_sched_time_avg includes the attribute const_debug. This
attribute is not part of the extern declaration of these variables in
include/linux/sched/sysctl.h, while it is in kernel/sched/sched.h,
and as a result Clang generates warnings like this:
kernel/sched/sched.h:1618:33: warning: section attribute is specified on redeclared variable [-Wsection]
extern const_debug unsigned int sysctl_sched_time_avg;
^
./include/linux/sched/sysctl.h:42:21: note: previous declaration is here
extern unsigned int sysctl_sched_time_avg;
The header only declares the variables when CONFIG_SCHED_DEBUG is defined,
therefore it is not necessary to duplicate the definition of const_debug.
Instead we can use the attribute __read_mostly, which is the expansion of
const_debug when CONFIG_SCHED_DEBUG=y is set.
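A hedged sketch of the header-side change described above (assumed shape, not the verbatim upstream diff; __read_mostly is shown with its usual kernel expansion so the snippet stands alone):

#define __read_mostly	__attribute__((__section__(".data..read_mostly")))

extern unsigned int sysctl_sched_migration_cost __read_mostly;
extern unsigned int sysctl_sched_nr_migrate __read_mostly;
extern unsigned int sysctl_sched_time_avg __read_mostly;

With the declarations carrying the same section attribute as the const_debug definitions, clang no longer sees a redeclaration with a different section.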
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Cc: Douglas Anderson <dianders@chromium.org>
Cc: Guenter Roeck <groeck@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shile Zhang <shile.zhang@nokia.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171030180816.170850-1-mka@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit
a9903f04e0a4ea522d959c2f287cdf0ab029e324)
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: kdrag0n <dragon@khronodragon.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:57:44 +0000 (21:57 +0300)]
decon_core: fix parentheses-equality warning
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:00:27 +0000 (21:00 +0300)]
ufs-exynos: don't use boolean statement in switch-case
Convert it to int because it's not really a boolean.
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:10:23 +0000 (21:10 +0300)]
vs4l: remove __u[32/16/8] and __s32 declarations
They are already defined. Fixes redefinition warnings.
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:55:01 +0000 (21:55 +0300)]
s5p_mfc_[dec/enc]_ops: fix parentheses-equality warnings
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 17:57:13 +0000 (20:57 +0300)]
ufshcd: fix parentheses-equality warning
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 17:41:38 +0000 (20:41 +0300)]
arm64: include: asm: initialize a few ret(-s)
Fixes -Wsometimes-uninitialized warnings that just fill up the build log.
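A minimal illustration of the warning and the fix (hypothetical function, not one of the affected arm64 headers):

int probe_state(int mode)
{
	int ret = 0;	/* previously uninitialized: -Wsometimes-uninitialized */

	if (mode == 1)
		ret = 42;
	else if (mode == 2)
		ret = -1;
	/* mode == 0 falls through and now returns the initializer */

	return ret;
}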
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:51:01 +0000 (21:51 +0300)]
drv2624: fix shift overflow warning
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Jeongtae Park [Mon, 26 Mar 2018 02:39:24 +0000 (11:39 +0900)]
[COMMON] fimc-is2: fix a typo (SYNNY -> SUNNY)
Change-Id: Ieeb22b673ae6d04e41fdb66d042fff5c70491dac
Signed-off-by: Jeongtae Park <jtp.park@samsung.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:30:06 +0000 (21:30 +0300)]
sec_ts_fn: fix logical-not-parentheses warning
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 18:08:21 +0000 (21:08 +0300)]
[cpu/gpu]_cooling: move get_static_t declaration in thermal.h
To avoid redefinition warnings.
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Yaroslav Furman [Fri, 22 Jun 2018 17:17:52 +0000 (20:17 +0300)]
fs: ecryptfs: fix parentheses-equality warning
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Sodagudi Prasad [Tue, 6 Feb 2018 23:46:51 +0000 (15:46 -0800)]
kbuild: clang: disable unused variable warnings only when constant
commit
0a5f41767444cc3b4fc5573921ab914b4f78baaa upstream.
Currently, GCC disables -Wunused-const-variable, but not
-Wunused-variable, so it warns about unused variables if they are
non-constant.
Clang, on the other hand, does not warn about unused variables at all
regardless of the const qualifier, because -Wno-unused-const-variable is
implied by the stronger option -Wno-unused-variable.
Disable -Wunused-const-variable instead of -Wunused-variable so that
GCC and Clang work in the same way.
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nick Desaulniers [Sat, 7 Oct 2017 20:23:23 +0000 (13:23 -0700)]
kbuild: clang: remove crufty HOSTCFLAGS
commit
df16aaac26e92e97ab7234d3f93c953466adc4b5 upstream.
When compiling with `make CC=clang HOSTCC=clang`, I was seeing warnings
that clang did not recognize -fno-delete-null-pointer-checks for HOSTCC
targets. These were added in commit
61163efae020 ("kbuild: LLVMLinux:
Add Kbuild support for building kernel with Clang").
Clang does not support -fno-delete-null-pointer-checks, so adding it to
HOSTCFLAGS if HOSTCC is clang does not make sense.
It's not clear why the other warnings were disabled, and just for
HOSTCFLAGS, but I can remove them, add -Werror to HOSTCFLAGS and compile
with clang just fine.
Suggested-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
[nc: Adjust context]
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Benedict [Fri, 30 Aug 2019 14:24:16 +0000 (00:24 +1000)]
source: N950F DSH1
Signed-off-by: Michael Benedict <michaelbt@live.com>
Michael Benedict [Fri, 30 Aug 2019 14:15:15 +0000 (00:15 +1000)]
source: G95xF DSH5
Signed-off-by: Michael Benedict <michaelbt@live.com>
Sergey Senozhatsky [Thu, 5 Apr 2018 23:24:43 +0000 (16:24 -0700)]
BACKPORT: zsmalloc: introduce zs_huge_class_size()
Patch series "zsmalloc/zram: drop zram's max_zpage_size", v3.
ZRAM's max_zpage_size is a bad thing. It forces zsmalloc to store
normal objects as huge ones, which results in bigger zsmalloc memory
usage. Drop it and use the actual zsmalloc huge-class value when deciding
if the object is huge or not.
This patch (of 2):
Not every object can share its zspage with other objects, e.g. when
the object is as big as a zspage or nearly as big as a zspage. For such
objects zsmalloc has a so called huge class - every object which belongs
to huge class consumes the entire zspage (which consists of a physical
page). On x86_64, PAGE_SHIFT 12 box, the first non-huge class size is
3264, so starting down from size 3264, objects can share page(-s) and
thus minimize memory wastage.
ZRAM, however, has its own statically defined watermark for huge
objects, namely "3 * PAGE_SIZE / 4 = 3072", and forcibly stores every
object larger than this watermark (3072) as a PAGE_SIZE object, in other
words, to a huge class, while zsmalloc can keep some of those objects in
non-huge classes. This results in increased memory consumption.
zsmalloc knows better whether the object is huge or not. Introduce the
zs_huge_class_size() function, which tells whether the given object can be
stored in one of the non-huge classes. This will let us drop
ZRAM's huge object watermark and fully rely on zsmalloc when we decide
if the object is huge.
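A hedged usage sketch of the new helper (assumed signature taking the pool, per the bracketed note below; the variable names are illustrative):

	size_t huge_class_size = zs_huge_class_size(pool);

	if (comp_len >= huge_class_size)
		comp_len = PAGE_SIZE;	/* the object will occupy a whole zspage anyway */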
[sergey.senozhatsky.work@gmail.com: add pool param to zs_huge_class_size()]
Link: http://lkml.kernel.org/r/20180314081833.1096-2-sergey.senozhatsky@gmail.com
Link: http://lkml.kernel.org/r/20180306070639.7389-2-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
010b495e2fa32353d0ef6aa70a8169e5ef617a15)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
113183619
Change-Id: Ic35f8c1ec75f0b78bf2d83729b6aedd2999f25c8
Minchan Kim [Fri, 23 Nov 2018 06:30:06 +0000 (15:30 +0900)]
zram: close udev startup race condition as default groups
commit
fef912bf860e upstream.
commit
98af4d4df889 upstream.
I got a report from Howard Chen that he saw a zram and sysfs race (i.e.,
the zram block device file is created but sysfs for it isn't yet)
when he tried to create new zram devices via the hotadd knob.
The v4.20 kernel fixes it by [1, 2], but those changes are too large to
merge into -stable, so this patch fixes the problem by registering
default groups using Greg KH's approach [3].
This patch should be applied to every stable tree [3.16+] currently
existing from kernel.org because the problem was introduced at 2.6.37
by [4].
[1]
fef912bf860e, block: genhd: add 'groups' argument to device_add_disk
[2]
98af4d4df889, zram: register default groups with device_add_disk()
[3] http://kroah.com/log/blog/2013/06/26/how-to-create-a-sysfs-file-correctly/
[4]
33863c21e69e9, Staging: zram: Replace ioctls with sysfs interface
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Tested-by: Howard Chen <howardsoc@google.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Peter Kalauskas [Thu, 8 Nov 2018 19:03:13 +0000 (11:03 -0800)]
ANDROID: zram: set comp_len to PAGE_SIZE when page is huge
This bug was introduced when two patches were applied out of order.
* zram: drop max_zpage_size and use zs_huge_class_size()
* zram: mark incompressible page as ZRAM_HUGE
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
119260394
Change-Id: I437d35c8d23c15237ad9c2d5bd7f99d7bff42872
Sergey Senozhatsky [Thu, 5 Apr 2018 23:24:47 +0000 (16:24 -0700)]
BACKPORT: zram: drop max_zpage_size and use zs_huge_class_size()
Remove ZRAM's enforced "huge object" value and use zsmalloc huge-class
watermark instead, which makes more sense.
TEST
- I used a 1G zram device, LZO compression back-end, original
data set size was 444MB. Looking at zsmalloc classes stats the
test ended up to be pretty fair.
BASE ZRAM/ZSMALLOC
=====================
zram mm_stat
498978816 191482495 199831552 0 199831552 15634 0
zsmalloc classes
class size almost_full almost_empty obj_allocated obj_used pages_used pages_per_zspage freeable
...
151 2448 0 0 1240 1240 744 3 0
168 2720 0 0 4200 4200 2800 2 0
190 3072 0 0 10100 10100 7575 3 0
202 3264 0 0 380 380 304 4 0
254 4096 0 0 10620 10620 10620 1 0
Total 7 46 106982 106187 48787 0
PATCHED ZRAM/ZSMALLOC
=====================
zram mm_stat
498978816 182579184 194248704 0 194248704 15628 0
zsmalloc classes
class size almost_full almost_empty obj_allocated obj_used pages_used pages_per_zspage freeable
...
151 2448 0 0 1240 1240 744 3 0
168 2720 0 0 4200 4200 2800 2 0
190 3072 0 0 10100 10100 7575 3 0
202 3264 0 0 7180 7180 5744 4 0
254 4096 0 0 3820 3820 3820 1 0
Total 8 45 106959 106193 47424 0
As we can see, we reduced the number of objects stored in class-4096,
because a huge number of objects which we previously forcibly stored in
class-4096 are now stored in non-huge class-3264. This results in lower
memory consumption:
- zsmalloc now uses 47424 physical pages, which is less than 48787 pages
zsmalloc used before.
- objects that we store in class-3264 share zspages. That's why overall
the number of pages that both class-4096 and class-3264 consumed went
down from 10924 to 9564.
[sergey.senozhatsky.work@gmail.com: add pool param to zs_huge_class_size()]
Link: http://lkml.kernel.org/r/20180314081833.1096-3-sergey.senozhatsky@gmail.com
Link: http://lkml.kernel.org/r/20180306070639.7389-3-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
60f5921a9a4f126e081318bd6bb2bc2798b7bba8)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
113183619
Change-Id: I1d3ede25543e99a24802ad03f68995f33aaf79b5
Peter Kalauskas [Wed, 22 Aug 2018 04:54:02 +0000 (21:54 -0700)]
UPSTREAM: drivers/block/zram/zram_drv.c: fix bug storing backing_dev
The call to strlcpy in backing_dev_store is incorrect. It should take
the size of the destination buffer instead of the size of the source
buffer. Additionally, ignore the newline character (\n) when reading
the new file_name buffer. This makes it possible to set the backing_dev
as follows:
echo /dev/sdX > /sys/block/zram0/backing_dev
The reason it worked before was the fact that strlcpy() copies 'len - 1'
bytes, which is strlen(buf) - 1 in our case, so it accidentally didn't
copy the trailing new line symbol. Which also means that "echo -n
/dev/sdX" most likely was broken.
Signed-off-by: Peter Kalauskas <peskal@google.com>
Link: http://lkml.kernel.org/r/20180813061623.GC64836@rodete-desktop-imager.corp.google.com
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org> [4.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
c8bd134a4bddafe5917d163eea73873932c15e83)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I0a0d602b61169ae9adc8f89914ce4e30cc10e191
Minchan Kim [Fri, 8 Jun 2018 00:05:49 +0000 (17:05 -0700)]
BACKPORT: zram: introduce zram memory tracking
zRam as swap is useful for small memory device. However, swap means
those pages on zram are mostly cold pages due to VM's LRU algorithm.
Especially, once init data for application are touched for launching,
they tend to be not accessed any more and finally swapped out. zRAM can
store such cold pages as compressed form but it's pointless to keep in
memory. Better idea is app developers free them directly rather than
remaining them on heap.
This patch tell us last access time of each block of zram via "cat
/sys/kernel/debug/zram/zram0/block_state".
The output is as follows,
300 75.033841 .wh
301 63.806904 s..
302 63.806919 ..h
First column is zram's block index and the 3rd one represents the symbol
(s: same page, w: written page to backing store, h: huge page) of the
block state. The second column represents the time (in seconds, with usec
resolution) the block was last accessed. So the above example means the
300th block was last accessed at 75.033841 seconds and it was huge, so it
was written to the backing store.
Admin can leverage this information to catch cold|incompressible pages
of process with *pagemap* once part of heaps are swapped out.
I used the feature a few years ago to find memory hoggers in userspace
to notify them what memory they have wasted without touch for a long
time. With it, they could reduce unnecessary memory space. However, at
that time, I hacked up zram for the feature but now I need the feature
again so I decided it would be better to upstream rather than keeping it
alone. I hope I submit the userspace tool to use the feature soon.
[akpm@linux-foundation.org: fix i386 printk warning]
[minchan@kernel.org: use ktime_get_boottime() instead of sched_clock()]
Link: http://lkml.kernel.org/r/20180420063525.GA253739@rodete-desktop-imager.corp.google.com
[akpm@linux-foundation.org: documentation tweak]
[akpm@linux-foundation.org: fix i386 printk warning]
[minchan@kernel.org: fix compile warning]
Link: http://lkml.kernel.org/r/20180508104849.GA8209@rodete-desktop-imager.corp.google.com
[rdunlap@infradead.org: fix printk formats]
Link: http://lkml.kernel.org/r/3652ccb1-96ef-0b0b-05d1-f661d7733dcc@infradead.org
Link: http://lkml.kernel.org/r/20180416090946.63057-5-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
c0265342bff4fcaa2cdf13f4596244c18d4a7ae5)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I932447d33d1b6af78ae6463b494006c725e5e38c
Minchan Kim [Fri, 8 Jun 2018 00:05:45 +0000 (17:05 -0700)]
BACKPORT: zram: record accessed second
zRam as swap is useful for small memory device. However, swap means
those pages on zram are mostly cold pages due to VM's LRU algorithm.
Especially, once init data for application are touched for launching,
they tend to be not accessed any more and finally swapped out. zRAM can
store such cold pages as compressed form but it's pointless to keep in
memory. Better idea is app developers free them directly rather than
remaining them on heap.
This patch records the last access time of each block of zram so that,
with the upcoming zram memory tracking, it can help userspace developers
reduce memory footprint.
Link: http://lkml.kernel.org/r/20180416090946.63057-4-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
d7eac6b6e1838ef1a1400df4ec55daa34bbc855e)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I5b217d3cd4da57e548196658e0824d65a0cad631
Minchan Kim [Fri, 8 Jun 2018 00:05:42 +0000 (17:05 -0700)]
BACKPORT: zram: mark incompressible page as ZRAM_HUGE
Mark incompressible pages so that we can investigate who owns them once
the page is swapped out, using the upcoming zram memory tracker feature.
With it, we could prevent such pages from being swapped out by using
mlock. Otherwise we might remove them.
This patch exposes new stat for huge pages via mm_stat.
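A hedged sketch of the marking step (assumed shape based on the description; the flag and counter names follow what this patch introduces):

	if (comp_len == PAGE_SIZE) {		/* stored uncompressed */
		zram_set_flag(zram, index, ZRAM_HUGE);
		atomic64_inc(&zram->stats.huge_pages);	/* exposed via mm_stat */
	}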
Link: http://lkml.kernel.org/r/20180416090946.63057-3-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
89e85bce4b02edb7408aebf69d5d1a6692a05f4f)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: If1b7b2d6ea6672a575ffc3d70c2c8b58ecafd0d7
Minchan Kim [Fri, 8 Jun 2018 00:05:39 +0000 (17:05 -0700)]
UPSTREAM: zram: correct flag name of ZRAM_ACCESS
Patch series "zram memory tracking", v5.
zRam as swap is useful for small memory device. However, swap means
those pages on zram are mostly cold pages due to VM's LRU algorithm.
Especially, once init data for application are touched for launching,
they tend to be not accessed any more and finally swapped out. zRAM can
store such cold pages as compressed form but it's pointless to keep in
memory. As well, it's pointless to store incompressible pages to zram
so better idea is app developers manages them directly like free or
mlock rather than remaining them on heap.
This patch provides a debugfs /sys/kernel/debug/zram/zram0/block_state
to represent each block's state so admin can investigate what memory is
cold|incompressible|same page with using pagemap once the pages are
swapped out.
The output is as follows:
300 75.033841 .wh
301 63.806904 s..
302 63.806919 ..h
First column is zram's block index and the 3rd one represents the symbol
(s: same page, w: written page to backing store, h: huge page) of the
block state. The second column represents the time (in seconds, with usec
resolution) the block was last accessed. So the above example means the
300th block was last accessed at 75.033841 seconds and it was huge, so it
was written to the backing store.
This patch (of 4):
ZRAM_ACCESS is used for locking a slot of zram, so correct the name. It
is also not a common flag indicating the status of the block, so move its
declaration above the other flags. Lastly, let's move the function to
the top of the source file so it can be used easily without a forward
declaration.
Link: http://lkml.kernel.org/r/20180416090946.63057-2-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
c4d6c4cc7bfd5ecc18548420b7fb9440cf8416ae)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I037a22a739fb4005918eb668d10e8be354a1524f
Bart Van Assche [Wed, 28 Feb 2018 18:15:30 +0000 (10:15 -0800)]
UPSTREAM: zram: Delete gendisk before cleaning up the request queue
Remove the disk, partition and bdi sysfs attributes before cleaning up
the request queue associated with the disk.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit
392db38058eb47250a9d0cc737af37e78a7e443d)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ifbcb6e03fee764054dc9a371c00b95547e4de745
Colin Ian King [Thu, 16 Nov 2017 01:37:08 +0000 (17:37 -0800)]
UPSTREAM: drivers/block/zram/zram_drv.c: make zram_page_end_io() static
zram_page_end_io() is local to the source and does not need to be in
global scope, so make it static.
Cleans up sparse warning:
symbol 'zram_page_end_io' was not declared. Should it be static?
Link: http://lkml.kernel.org/r/20171016173336.20320-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
384bc41fc064bd8b12b7081aa3e81d26f3407045)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ie0f250e580bc1dd16e963b5dbe5bdc429fb4cd65
Minchan Kim [Thu, 16 Nov 2017 01:32:56 +0000 (17:32 -0800)]
BACKPORT: zram: set BDI_CAP_STABLE_WRITES once
With fast swap storage, the platform wants to use swap more aggressively
and swap-in is crucial to application latency.
The rw_page() based synchronous devices like zram, pmem and btt are such
fast storage. When I profile swapin performance with zram lz4
decompress test, S/W overhead is more than 70%. Maybe, it would be
bigger in nvdimm.
This patchset reduces swap-in latency by skipping swapcache if the swap
device is a synchronous device like a rw_page() based device.
It enhances by 45% my swapin test (5G sequential swapin, no readahead)
from 2.41sec to 1.64sec.
This patch (of 4):
Commit
19b7ccf8651d ("block: get rid of blk_integrity_revalidate()")
fixed a weird thing (i.e., reset BDI_CAP_STABLE_WRITES flag
unconditionally whenever revalidate_disk() is called) so zram doesn't need
to reset the flag any more when revalidating the bdev. Instead, set the
flag just once when the zram device is created.
It shouldn't change any behavior.
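A hedged sketch of "set the flag just once" (assumed placement in the device-creation path; the exact backing_dev_info access differs between kernel versions):

	/* in zram_add(), after the gendisk and its queue are set up */
	zram->disk->queue->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;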
Link: http://lkml.kernel.org/r/1505886205-9671-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Hugh Dickins <hughd@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
e447a0151f7ce8dd884fea48279274bd64434c29)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: If41edc4871ed470f050bbf4d51a24fe5c0e18738
Minchan Kim [Tue, 3 Oct 2017 23:15:19 +0000 (16:15 -0700)]
UPSTREAM: zram: fix null dereference of handle
In testing I found the handle passed to zs_map_object in __zram_bvec_read
is NULL, so the kernel oopses in pin_object().
The reason is that there is no routine to check whether the slot has been
freed after getting the slot's lock. This patch fixes it.
[minchan@kernel.org: v2]
Link: http://lkml.kernel.org/r/1505887347-10881-1-git-send-email-minchan@kernel.org
Link: http://lkml.kernel.org/r/1505788488-26723-1-git-send-email-minchan@kernel.org
Fixes:
1f7319c74275 ("zram: partial IO refactoring")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
ae94264ed4b0cf7cd887947650db4c69acb62072)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I0ff4a8c2f1fcd0ee39511985809b58bf94b2d44c
Minchan Kim [Wed, 6 Sep 2017 23:20:10 +0000 (16:20 -0700)]
UPSTREAM: zram: add config and doc file for writeback feature
This patch adds documentation and a Kconfig entry for the writeback feature.
Link: http://lkml.kernel.org/r/1498459987-24562-10-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
5a47074f0279421778f97b1b1e75686696a5f42a)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I9ec2230739a6468a4481a90a9c9f966badf9ac48
Minchan Kim [Wed, 6 Sep 2017 23:20:07 +0000 (16:20 -0700)]
BACKPORT: zram: read page from backing device
This patch enables read IO from the backing device. For the feature, it
implements two IO read functions to transfer data from backing storage:
one asynchronous and one synchronous.
The reason synchronous IO is needed is the partial-write case, which must
complete the read IO before overwriting the partial data.
We could make the partial IO case asynchronous too, but at the moment I
don't feel like adding more complexity to support such a rare use case,
so let's keep it simple.
[xieyisheng1@huawei.com: read_from_bdev_async(): return 1 to avoid call page_endio() in zram_rw_page()]
Link: http://lkml.kernel.org/r/1502707447-6944-1-git-send-email-xieyisheng1@huawei.com
Link: http://lkml.kernel.org/r/1498459987-24562-9-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
8e654f8fbff52ac483fb69957222853d7e2fc588)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ia82f5fc4697aacc723a336e4dad4e7bc56a1bdb9
Minchan Kim [Wed, 6 Sep 2017 23:20:03 +0000 (16:20 -0700)]
BACKPORT: zram: write incompressible pages to backing device
This patch enables write IO to transfer data to the backing device. For
that, it implements a write_to_bdev function which creates a new bio and
chains it with the parent bio to make the parent bio asynchronous.
For rw_page, which doesn't have a parent bio, it submits its own bio and
handles IO completion via zram_page_end_io.
Also, this patch defines a new flag, ZRAM_WB, to mark written pages for
later read IO.
[xieyisheng1@huawei.com: fix typo in comment]
Link: http://lkml.kernel.org/r/1502707447-6944-2-git-send-email-xieyisheng1@huawei.com
Link: http://lkml.kernel.org/r/1498459987-24562-8-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
db8ffbd4e7634cc537c8d32e73e7ce0f06248645)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ie675efd6c3ec04a151443f1cd0bf798d4847710f
Minchan Kim [Wed, 6 Sep 2017 23:20:00 +0000 (16:20 -0700)]
BACKPORT: zram: identify asynchronous IO's return value
For upcoming asynchronous IO like writeback, zram_rw_page should be aware
of whether the requested IO was completed, was submitted successfully, or
resulted in an error.
To that end, zram_bvec_rw has three return values.
-errno: returns error number
0: IO request is done synchronously
1: IO request is issued successfully.
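A hedged sketch of how a caller such as zram_rw_page can act on the three return values (assumed signatures and control flow, not the verbatim upstream code):

	ret = zram_bvec_rw(zram, &bv, index, offset, is_write);
	if (ret < 0)
		return ret;			/* -errno: the request failed */
	if (ret == 0)
		page_endio(page, is_write, 0);	/* done synchronously */
	/* ret == 1: submitted; the bio's end_io callback completes the page later */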
Link: http://lkml.kernel.org/r/1498459987-24562-7-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
ae85a8075c5b025b9d503554ddc480a346a24536)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Id6e764b3eacfebdca2f46050648a49fc5f276b2c
Minchan Kim [Wed, 6 Sep 2017 23:19:57 +0000 (16:19 -0700)]
BACKPORT: zram: add free space management in backing device
With a backing device, zram needs to manage the backing device's free
space.
This patch adds bitmap logic to manage free space, which is very naive.
However, it should be simple enough considering how infrequently
incompressible pages occur in zram.
Link: http://lkml.kernel.org/r/1498459987-24562-6-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
1363d4662a0d28dfdb81ef426c88c9a8dbf7c338)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I37dc98b40bfddceb9eb6d989ca30683dbf89210c
Minchan Kim [Wed, 6 Sep 2017 23:19:54 +0000 (16:19 -0700)]
UPSTREAM: zram: add interface to specify backing device
For the writeback feature, the user should set up a backing device before
zram starts working.
This patch enables the interface via /sys/block/zramX/backing_dev.
Currently, it supports block devices only, but it could be enhanced to
support files as well.
Link: http://lkml.kernel.org/r/1498459987-24562-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
013bf95a83ec760a2afc37fabd6bf13a9cdae205)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I4bbf12ed7496d476bddd574e756bac5c8a838089
Minchan Kim [Wed, 6 Sep 2017 23:19:50 +0000 (16:19 -0700)]
UPSTREAM: zram: rename zram_decompress_page to __zram_bvec_read
zram_decompress_page naming is not proper because it doesn't decompress
if the page was a dedup hit or was stored uncompressed.
Use a more abstract term that is consistent with the write path function
__zram_bvec_write.
Link: http://lkml.kernel.org/r/1498459987-24562-4-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
693dc1ce25b8c8fa33f930d47cd8f926eeb90812)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ia7c948f4b78601458b7ebc23ab345d4bc0a8d4a8
Minchan Kim [Wed, 6 Sep 2017 23:19:47 +0000 (16:19 -0700)]
UPSTREAM: zram: inline zram_compress
zram_compress does several things: compress, entry alloc and check
limitation. I did it that way just for readability but it hurts
modularization. :(
So this patch removes the zram_compress function and inlines it in
__zram_bvec_write for upcoming patches.
Link: http://lkml.kernel.org/r/1498459987-24562-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Juneho Choi <juno.choi@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
97ec7c8bd5d029b2c3e40355c1204197094e9ba1)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ibb37d77168edd0b01d0b9820e431c73cc3c2ff20
Minchan Kim [Wed, 6 Sep 2017 23:19:44 +0000 (16:19 -0700)]
UPSTREAM: zram: clean up duplicated codes in __zram_bvec_write
Patch series "writeback incompressible pages to storage", v1.
zRam is useful for memory saving with compressible pages but sometimes
the workload can change and the system ends up with lots of
incompressible pages, which is very harmful for zram.
This patch series supports a writeback feature for zram so the admin can
set up a block device and, with it, zram can save memory by writing out
incompressible pages (1/4 comp ratio) once it finds them, instead of
keeping them in memory.
[1-3] is just clean up and [4-8] is step-by-step feature enablement.
[4-8] is logically not bisectable (i.e., logical unit separation);
although I tried to keep each step compiling without breakage, I think it
is better to review it this way.
This patch (of 9):
__zram_bvec_write has some duplicated logic for zram metadata handling of
same_page|compressed_page. This patch aims to clean it up without a
behavior change.
[xieyisheng1@huawei.com: fix compr_data_size stat]
Link: http://lkml.kernel.org/r/1502707447-6944-1-git-send-email-xieyisheng1@huawei.com
Link: http://lkml.kernel.org/r/1496019048-27016-1-git-send-email-minchan@kernel.org
Link: http://lkml.kernel.org/r/1498459987-24562-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Juneho Choi <juno.choi@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
4ebbe7f7fc99260afd51759e35dbfdd6010dc697)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I3fa150c869a66ff289712b956924ecb361864a2e
Sergey Senozhatsky [Thu, 16 Nov 2017 01:33:49 +0000 (17:33 -0800)]
UPSTREAM: zram: add zstd to the supported algorithms list
Add ZSTD to the list of supported compression algorithms.
ZRAM fio perf test:
LZO DEFLATE ZSTD
WRITE: (2180MB/s) (77.2MB/s) (1429MB/s)
WRITE: (1617MB/s) (77.7MB/s) (1202MB/s)
READ: (426MB/s) (595MB/s) (1181MB/s)
READ: (422MB/s) (572MB/s) (1020MB/s)
READ: (318MB/s) (67.8MB/s) (563MB/s)
WRITE: (318MB/s) (67.9MB/s) (564MB/s)
READ: (336MB/s) (68.3MB/s) (583MB/s)
WRITE: (335MB/s) (68.2MB/s) (582MB/s)
WRITE: (3441MB/s) (152MB/s) (2141MB/s)
WRITE: (2507MB/s) (147MB/s) (1888MB/s)
READ: (801MB/s) (1146MB/s) (1890MB/s)
READ: (767MB/s) (1096MB/s) (2073MB/s)
READ: (621MB/s) (126MB/s) (1009MB/s)
WRITE: (621MB/s) (126MB/s) (1009MB/s)
READ: (656MB/s) (125MB/s) (1075MB/s)
WRITE: (657MB/s) (126MB/s) (1077MB/s)
WRITE: (4772MB/s) (225MB/s) (3394MB/s)
WRITE: (3905MB/s) (211MB/s) (2939MB/s)
READ: (1216MB/s) (1608MB/s) (3218MB/s)
READ: (1159MB/s) (1431MB/s) (2981MB/s)
READ: (906MB/s) (156MB/s) (1457MB/s)
WRITE: (907MB/s) (156MB/s) (1458MB/s)
READ: (953MB/s) (158MB/s) (1595MB/s)
WRITE: (952MB/s) (157MB/s) (1593MB/s)
WRITE: (6036MB/s) (265MB/s) (4469MB/s)
WRITE: (5059MB/s) (263MB/s) (3951MB/s)
READ: (1618MB/s) (2066MB/s) (4276MB/s)
READ: (1573MB/s) (1942MB/s) (3830MB/s)
READ: (1202MB/s) (227MB/s) (1971MB/s)
WRITE: (1200MB/s) (227MB/s) (1968MB/s)
READ: (1265MB/s) (226MB/s) (2116MB/s)
WRITE: (1264MB/s) (226MB/s) (2114MB/s)
WRITE: (5339MB/s) (233MB/s) (3781MB/s)
WRITE: (4298MB/s) (234MB/s) (3276MB/s)
READ: (1626MB/s) (2048MB/s) (4081MB/s)
READ: (1567MB/s) (1929MB/s) (3758MB/s)
READ: (1174MB/s) (205MB/s) (1747MB/s)
WRITE: (1173MB/s) (204MB/s) (1746MB/s)
READ: (1214MB/s) (208MB/s) (1890MB/s)
WRITE: (1215MB/s) (208MB/s) (1892MB/s)
WRITE: (5666MB/s) (270MB/s) (4338MB/s)
WRITE: (4828MB/s) (267MB/s) (3772MB/s)
READ: (1803MB/s) (2058MB/s) (4946MB/s)
READ: (1805MB/s) (2156MB/s) (4711MB/s)
READ: (1334MB/s) (235MB/s) (2135MB/s)
WRITE: (1335MB/s) (235MB/s) (2137MB/s)
READ: (1364MB/s) (236MB/s) (2268MB/s)
WRITE: (1365MB/s) (237MB/s) (2270MB/s)
WRITE: (5474MB/s) (270MB/s) (4300MB/s)
WRITE: (4666MB/s) (266MB/s) (3817MB/s)
READ: (2022MB/s) (2319MB/s) (5472MB/s)
READ: (1924MB/s) (2260MB/s) (5031MB/s)
READ: (1369MB/s) (242MB/s) (2153MB/s)
WRITE: (1370MB/s) (242MB/s) (2155MB/s)
READ: (1499MB/s) (246MB/s) (2310MB/s)
WRITE: (1497MB/s) (246MB/s) (2307MB/s)
WRITE: (5558MB/s) (273MB/s) (4439MB/s)
WRITE: (4763MB/s) (271MB/s) (3918MB/s)
READ: (2201MB/s) (2599MB/s) (6062MB/s)
READ: (2105MB/s) (2463MB/s) (5413MB/s)
READ: (1490MB/s) (252MB/s) (2238MB/s)
WRITE: (1488MB/s) (252MB/s) (2236MB/s)
READ: (1566MB/s) (254MB/s) (2434MB/s)
WRITE: (1568MB/s) (254MB/s) (2437MB/s)
WRITE: (5120MB/s) (264MB/s) (4035MB/s)
WRITE: (4531MB/s) (267MB/s) (3740MB/s)
READ: (1940MB/s) (2258MB/s) (4986MB/s)
READ: (2024MB/s) (2387MB/s) (4871MB/s)
READ: (1343MB/s) (246MB/s) (2038MB/s)
WRITE: (1342MB/s) (246MB/s) (2037MB/s)
READ: (1553MB/s) (238MB/s) (2243MB/s)
WRITE: (1552MB/s) (238MB/s) (2242MB/s)
WRITE: (5345MB/s) (271MB/s) (3988MB/s)
WRITE: (4750MB/s) (254MB/s) (3668MB/s)
READ: (1876MB/s) (2363MB/s) (5150MB/s)
READ: (1990MB/s) (2256MB/s) (5080MB/s)
READ: (1355MB/s) (250MB/s) (2019MB/s)
WRITE: (1356MB/s) (251MB/s) (2020MB/s)
READ: (1490MB/s) (252MB/s) (2202MB/s)
WRITE: (1488MB/s) (252MB/s) (2199MB/s)
jobs1 perfstat
instructions 52,065,555,710 ( 0.79) 855,731,114,587 ( 2.64) 54,280,709,944 ( 1.40)
branches 14,020,427,116 ( 725.847) 101,733,449,582 (1074.521) 11,170,591,067 ( 992.869)
branch-misses 22,626,174 ( 0.16%) 274,197,885 ( 0.27%) 25,915,805 ( 0.23%)
jobs2 perfstat
instructions 103,633,110,402 ( 0.75) 1,710,822,100,914 ( 2.59) 107,879,874,104 ( 1.28)
branches 27,931,237,282 ( 679.203) 203,298,267,479 (1037.326) 22,185,350,842 ( 884.427)
branch-misses 46,103,811 ( 0.17%) 533,747,204 ( 0.26%) 49,682,483 ( 0.22%)
jobs3 perfstat
instructions 154,857,283,657 ( 0.76) 2,565,748,974,197 ( 2.57) 161,515,435,813 ( 1.31)
branches 41,759,490,355 ( 670.529) 304,905,605,277 ( 978.765) 33,215,805,907 ( 888.003)
branch-misses 74,263,293 ( 0.18%) 759,746,240 ( 0.25%) 76,841,196 ( 0.23%)
jobs4 perfstat
instructions 206,215,849,076 ( 0.75) 3,420,169,460,897 ( 2.60) 215,003,061,664 ( 1.31)
branches 55,632,141,739 ( 666.501) 406,394,977,433 ( 927.241) 44,214,322,251 ( 883.532)
branch-misses 102,287,788 ( 0.18%) 1,098,617,314 ( 0.27%) 103,891,040 ( 0.23%)
jobs5 perfstat
instructions 258,711,315,588 ( 0.67) 4,275,657,533,244 ( 2.23) 269,332,235,685 ( 1.08)
branches 69,802,821,166 ( 588.823) 507,996,211,252 ( 797.036) 55,450,846,129 ( 735.095)
branch-misses 129,217,214 ( 0.19%) 1,243,284,991 ( 0.24%) 173,512,278 ( 0.31%)
jobs6 perfstat
instructions 312,796,166,008 ( 0.61) 5,133,896,344,660 ( 2.02) 323,658,769,588 ( 1.04)
branches 84,372,488,583 ( 520.541) 610,310,494,402 ( 697.642) 66,683,292,992 ( 693.939)
branch-misses 159,438,978 ( 0.19%) 1,396,368,563 ( 0.23%) 174,406,934 ( 0.26%)
jobs7 perfstat
instructions 363,211,372,930 ( 0.56) 5,988,205,600,879 ( 1.75) 377,824,674,156 ( 0.93)
branches 98,057,013,765 ( 463.117) 711,841,255,974 ( 598.762) 77,879,009,954 ( 600.443)
branch-misses 199,513,153 ( 0.20%) 1,507,651,077 ( 0.21%) 248,203,369 ( 0.32%)
jobs8 perfstat
instructions 413,960,354,615 ( 0.52) 6,842,918,558,378 ( 1.45) 431,938,486,581 ( 0.83)
branches 111,812,574,884 ( 414.224) 813,299,084,518 ( 491.173) 89,062,699,827 ( 517.795)
branch-misses 233,584,845 ( 0.21%) 1,531,593,921 ( 0.19%) 286,818,489 ( 0.32%)
jobs9 perfstat
instructions 465,976,220,300 ( 0.53) 7,698,467,237,372 ( 1.47) 486,352,600,321 ( 0.84)
branches 125,931,456,162 ( 424.063) 915,207,005,715 ( 498.192) 100,370,404,090 ( 517.439)
branch-misses 256,992,445 ( 0.20%) 1,782,809,816 ( 0.19%) 345,239,380 ( 0.34%)
jobs10 perfstat
instructions 517,406,372,715 ( 0.53) 8,553,527,312,900 ( 1.48) 540,732,653,094 ( 0.84)
branches 139,839,780,676 ( 427.732) 1,016,737,699,389 ( 503.172) 111,696,557,638 ( 516.750)
branch-misses 259,595,561 ( 0.19%) 1,952,570,279 ( 0.19%) 357,818,661 ( 0.32%)
seconds elapsed 20.630411534 96.084546565 12.743373571
seconds elapsed 22.292627625 100.984155001 14.407413560
seconds elapsed 22.396016966 110.344880848 14.032201392
seconds elapsed 22.517330949 113.351459170 14.243074935
seconds elapsed 28.548305104 156.515193765 19.159286861
seconds elapsed 30.453538116 164.559937678 19.362492717
seconds elapsed 33.467108086 188.486827481 21.492612173
seconds elapsed 35.617727591 209.602677783 23.256422492
seconds elapsed 42.584239509 243.959902566 28.458540338
seconds elapsed 47.683632526 269.635248851 31.542404137
Overall, ZSTD has slower WRITE, but much faster READ (perhaps
a static compression buffer used during the test helped ZSTD a
lot), which results in faster test results.
Memory consumption (zram mm_stat file):
zram LZO mm_stat
mm_stat (jobs1):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs2):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs3):
2147483648 23068672 33558528 0 33562624 0 0
mm_stat (jobs4):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs5):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs6):
2147483648 23068672 33558528 0 33562624 0 0
mm_stat (jobs7):
2147483648 23068672 33558528 0 33566720 0 0
mm_stat (jobs8):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs9):
2147483648 23068672 33558528 0 33558528 0 0
mm_stat (jobs10):
2147483648 23068672 33558528 0 33562624 0 0
zram DEFLATE mm_stat
mm_stat (jobs1):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs2):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs3):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs4):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs5):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs6):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs7):
2147483648 16252928 25178112 0 25190400 0 0
mm_stat (jobs8):
2147483648 16252928 25178112 0 25190400 0 0
mm_stat (jobs9):
2147483648 16252928 25178112 0 25178112 0 0
mm_stat (jobs10):
2147483648 16252928 25178112 0 25178112 0 0
zram ZSTD mm_stat
mm_stat (jobs1):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs2):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs3):
2147483648 11010048 16781312 0 16785408 0 0
mm_stat (jobs4):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs5):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs6):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs7):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs8):
2147483648 11010048 16781312 0 16781312 0 0
mm_stat (jobs9):
2147483648 11010048 16781312 0 16785408 0 0
mm_stat (jobs10):
2147483648 11010048 16781312 0 16781312 0 0
==================================================================================
Official benchmarks [1]:
Compressor name Ratio Compression Decompress.
zstd 1.1.3 -1 2.877 430 MB/s 1110 MB/s
zlib 1.2.8 -1 2.743 110 MB/s 400 MB/s
brotli 0.5.2 -0 2.708 400 MB/s 430 MB/s
quicklz 1.5.0 -1 2.238 550 MB/s 710 MB/s
lzo1x 2.09 -1 2.108 650 MB/s 830 MB/s
lz4 1.7.5 2.101 720 MB/s 3600 MB/s
snappy 1.1.3 2.091 500 MB/s 1650 MB/s
lzf 3.6 -1 2.077 400 MB/s 860 MB/s
Minchan said:
: I did test with my sample data and compared zstd with deflate. zstd's
: compress ratio is lower a little bit but compression speed is much faster
: 3 times more and decompress speed is too 2 times more. With different
: data, it is different but overall, zstd would be better for speed at the
: cost of a little lower compress ratio(about 5%) so I believe it's worth to
: replace deflate.
[1] https://github.com/facebook/zstd
Link: http://lkml.kernel.org/r/20170912050005.3247-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Tested-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
5ef3a8b12556d7fcba81edc74e9d85b029615ae0)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ieb6239dab92f560fa654d9cc29b1e266f2e44050
Nick Terrell [Thu, 10 Aug 2017 02:35:53 +0000 (19:35 -0700)]
UPSTREAM: lib: Add zstd modules
Add zstd compression and decompression kernel modules.
zstd offers a wide variety of compression speed and quality trade-offs.
It can compress at speeds approaching lz4, and quality approaching lzma.
zstd decompresses at speeds more than twice as fast as zlib, and
decompression speed remains roughly the same across all compression levels.
The code was ported from the upstream zstd source repository. The
`linux/zstd.h` header was modified to match linux kernel style.
The cross-platform and allocation code was stripped out. Instead zstd
requires the caller to pass a preallocated workspace. The source files
were clang-formatted [1] to match the Linux Kernel style as much as
possible. Otherwise, the code was unmodified. We would like to avoid
as much further manual modification to the source code as possible, so it
will be easier to keep the kernel zstd up to date.
I benchmarked zstd compression as a special character device. I ran zstd
and zlib compression at several levels, as well as performing no
compression, which measure the time spent copying the data to kernel space.
Data is passed to the compressor 4096 B at a time. The benchmark file is
located in the upstream zstd source repository under
`contrib/linux-kernel/zstd_compress_test.c` [2].
I ran the benchmarks on an Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
16 GB of RAM, and an SSD. I benchmarked using `silesia.tar` [3], which is
211,988,480 B large. Run the following commands for the benchmark:
sudo modprobe zstd_compress_test
sudo mknod zstd_compress_test c 245 0
sudo cp silesia.tar zstd_compress_test
The time is reported by the time of the userland `cp`.
The MB/s is computed with
211,988,480 B / time(buffer size, method)
which includes the time to copy from userland.
The Adjusted MB/s is computed with
211,988,480 B / (time(buffer size, method) - time(buffer size, none)).
The memory reported is the amount of memory the compressor requests.
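For example, plugging the zstd -1 row of the table below into the adjusted
formula gives 211,988,480 B / (1.044 s - 0.100 s) = 224.56 MB/s, which
matches the Adj MB/s column.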
| Method   | Size (B)  | Time (s) | Ratio | MB/s    | Adj MB/s | Mem (MB) |
|----------|-----------|----------|-------|---------|----------|----------|
| none     | 211988480 | 0.100    | 1     | 2119.88 | -        | -        |
| zstd -1  | 73645762  | 1.044    | 2.878 | 203.05  | 224.56   | 1.23     |
| zstd -3  | 66988878  | 1.761    | 3.165 | 120.38  | 127.63   | 2.47     |
| zstd -5  | 65001259  | 2.563    | 3.261 | 82.71   | 86.07    | 2.86     |
| zstd -10 | 60165346  | 13.242   | 3.523 | 16.01   | 16.13    | 13.22    |
| zstd -15 | 58009756  | 47.601   | 3.654 | 4.45    | 4.46     | 21.61    |
| zstd -19 | 54014593  | 102.835  | 3.925 | 2.06    | 2.06     | 60.15    |
| zlib -1  | 77260026  | 2.895    | 2.744 | 73.23   | 75.85    | 0.27     |
| zlib -3  | 72972206  | 4.116    | 2.905 | 51.50   | 52.79    | 0.27     |
| zlib -6  | 68190360  | 9.633    | 3.109 | 22.01   | 22.24    | 0.27     |
| zlib -9  | 67613382  | 22.554   | 3.135 | 9.40    | 9.44     | 0.27     |
I benchmarked zstd decompression using the same method on the same machine.
The benchmark file is located in the upstream zstd repo under
`contrib/linux-kernel/zstd_decompress_test.c` [4]. The memory reported is
the amount of memory required to decompress data compressed with the given
compression level. If you know the maximum size of your input, you can
reduce the memory usage of decompression irrespective of the compression
level.
| Method | Time (s) | MB/s | Adjusted MB/s | Memory (MB) |
|----------|----------|---------|---------------|-------------|
| none | 0.025 | 8479.54 | - | - |
| zstd -1 | 0.358 | 592.15 | 636.60 | 0.84 |
| zstd -3 | 0.396 | 535.32 | 571.40 | 1.46 |
| zstd -5 | 0.396 | 535.32 | 571.40 | 1.46 |
| zstd -10 | 0.374 | 566.81 | 607.42 | 2.51 |
| zstd -15 | 0.379 | 559.34 | 598.84 | 4.61 |
| zstd -19 | 0.412 | 514.54 | 547.77 | 8.80 |
| zlib -1 | 0.940 | 225.52 | 231.68 | 0.04 |
| zlib -3 | 0.883 | 240.08 | 247.07 | 0.04 |
| zlib -6 | 0.844 | 251.17 | 258.84 | 0.04 |
| zlib -9 | 0.837 | 253.27 | 287.64 | 0.04 |
Tested in userland using the test-suite in the zstd repo under
`contrib/linux-kernel/test/UserlandTest.cpp` [5] by mocking the kernel
functions. Fuzz tested using libfuzzer [6] with the fuzz harnesses under
`contrib/linux-kernel/test/{RoundTripCrash.c,DecompressCrash.c}` [7] [8]
with ASAN, UBSAN, and MSAN. Additionally, it was tested while testing the
BtrFS and SquashFS patches coming next.
[1] https://clang.llvm.org/docs/ClangFormat.html
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/zstd_compress_test.c
[3] http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia
[4] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/zstd_decompress_test.c
[5] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/UserlandTest.cpp
[6] http://llvm.org/docs/LibFuzzer.html
[7] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/RoundTripCrash.c
[8] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/DecompressCrash.c
zstd source repository: https://github.com/facebook/zstd
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit
73f3d1b48f5069d46ba48aa28c2898dc93185560)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I47b9d43a8065b2b5a1362f8458065f0811cf70b9
Nick Terrell [Fri, 4 Aug 2017 20:19:17 +0000 (13:19 -0700)]
UPSTREAM: lib: Add xxhash module
Adds xxhash kernel module with xxh32 and xxh64 hashes. xxhash is an
extremely fast non-cryptographic hash algorithm for checksumming.
The zstd compression and decompression modules added in the next patch
require xxhash. I extracted it out from zstd since it is useful on its
own. I copied the code from the upstream XXHash source repository and
translated it into kernel style. I ran benchmarks and tests in the kernel
and tests in userland.
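As a usage illustration, a minimal sketch of the one-shot interface added
here (assuming the xxh32()/xxh64() helpers in <linux/xxhash.h>; the
function below is hypothetical, not part of the patch):

#include <linux/xxhash.h>
#include <linux/string.h>
#include <linux/printk.h>

/* Sketch: one-shot hashing of a buffer with both variants. */
static void example_xxhash(void)
{
        static const char buf[] = "hello, xxhash";

        uint32_t h32 = xxh32(buf, strlen(buf), 0);
        uint64_t h64 = xxh64(buf, strlen(buf), 0);

        pr_info("xxh32=%08x xxh64=%016llx\n", h32, (unsigned long long)h64);
}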
I benchmarked xxhash as a special character device. I ran in four modes,
no-op, xxh32, xxh64, and crc32. The no-op mode simply copies the data to
kernel space and ignores it. The xxh32, xxh64, and crc32 modes compute
hashes on the copied data. I also ran it with four different buffer sizes.
The benchmark file is located in the upstream zstd source repository under
`contrib/linux-kernel/xxhash_test.c` [1].
I ran the benchmarks on an Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM.
The VM is running on a MacBook Pro with a 3.1 GHz Intel Core i7 processor,
16 GB of RAM, and an SSD. I benchmarked using the file `filesystem.squashfs`
from `ubuntu-16.10-desktop-amd64.iso`, which is 1,536,217,088 B large.
Run the following commands for the benchmark:
modprobe xxhash_test
mknod xxhash_test c 245 0
time cp filesystem.squashfs xxhash_test
The time is reported by the time of the userland `cp`.
The GB/s is computed with
1,536,217,088 B / time(buffer size, hash)
which includes the time to copy from userland.
The Adjusted GB/s is computed with
1,536,217,088 B / (time(buffer size, hash) - time(buffer size, none)).
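For example, for the 4096 B buffer with xxh64 in the table below:
1,536,217,088 B / (0.500 s - 0.380 s) = 12.80 GB/s, matching the
Adjusted GB/s column.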
| Buffer Size (B) | Hash | Time (s) | GB/s | Adjusted GB/s |
|-----------------|-------|----------|------|---------------|
| 1024 | none | 0.408 | 3.77 | - |
| 1024 | xxh32 | 0.649 | 2.37 | 6.37 |
| 1024 | xxh64 | 0.542 | 2.83 | 11.46 |
| 1024 | crc32 | 1.290 | 1.19 | 1.74 |
| 4096 | none | 0.380 | 4.04 | - |
| 4096 | xxh32 | 0.645 | 2.38 | 5.79 |
| 4096 | xxh64 | 0.500 | 3.07 | 12.80 |
| 4096 | crc32 | 1.168 | 1.32 | 1.95 |
| 8192 | none | 0.351 | 4.38 | - |
| 8192 | xxh32 | 0.614 | 2.50 | 5.84 |
| 8192 | xxh64 | 0.464 | 3.31 | 13.60 |
| 8192 | crc32 | 1.163 | 1.32 | 1.89 |
| 16384 | none | 0.346 | 4.43 | - |
| 16384 | xxh32 | 0.590 | 2.60 | 6.30 |
| 16384 | xxh64 | 0.466 | 3.30 | 12.80 |
| 16384 | crc32 | 1.183 | 1.30 | 1.84 |
Tested in userland using the test-suite in the zstd repo under
`contrib/linux-kernel/test/XXHashUserlandTest.cpp` [2] by mocking the
kernel functions. A line in each branch of every function in `xxhash.c`
was commented out to ensure that the test-suite fails. Additionally
tested while testing zstd and with SMHasher [3].
[1] https://phabricator.intern.facebook.com/P57526246
[2] https://github.com/facebook/zstd/blob/dev/contrib/linux-kernel/test/XXHashUserlandTest.cpp
[3] https://github.com/aappleby/smhasher
zstd source repository: https://github.com/facebook/zstd
XXHash source repository: https://github.com/cyan4973/xxhash
Signed-off-by: Nick Terrell <terrelln@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit
5d2405227a9eaea48e8cc95756a06d407b11f141)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I4b63e96457f17cf455591e8f35058dacd7aa9004
Matthias Kaehlcke [Thu, 10 Aug 2017 22:24:29 +0000 (15:24 -0700)]
UPSTREAM: zram: rework copy of compressor name in comp_algorithm_store()
comp_algorithm_store() passes the size of the source buffer to strlcpy()
instead of the destination buffer size. Make it explicit that the two
buffers have the same size and use strcpy() instead of strlcpy(). The
latter can be done safely since the function ensures that the string in
the source buffer is terminated.
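A minimal sketch of the resulting pattern (names are hypothetical, not the
literal zram code; the key assumption is that both buffers have the same
size and the source is already NUL-terminated):

#include <linux/crypto.h>
#include <linux/string.h>

/* Sketch: destination and source are equally sized and the source is
 * guaranteed to be NUL-terminated, so a plain strcpy() is safe and
 * states the intent, unlike passing the *source* size to strlcpy(). */
static void copy_compressor_name(char dst[CRYPTO_MAX_ALG_NAME],
                                 const char src[CRYPTO_MAX_ALG_NAME])
{
        strcpy(dst, src);
}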
Link: http://lkml.kernel.org/r/20170803163350.45245-1-mka@chromium.org
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
f357e345eef7863da037e0243f2d3df4ba6df986)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ic9667b215ce5e0717bc6829d65e43e9b79602362
Arvind Yadav [Mon, 10 Jul 2017 22:50:15 +0000 (15:50 -0700)]
UPSTREAM: zram: constify attribute_group structures.
attribute_groups are not supposed to change at runtime. All functions
working with attribute_groups provided by <linux/sysfs.h> work with
const attribute_group. So mark the non-const structs as const.
File size before:
text data bss dec hex filename
8293 841 4 9138 23b2 drivers/block/zram/zram_drv.o
File size After adding 'const':
text data bss dec hex filename
8357 777 4 9138 23b2 drivers/block/zram/zram_drv.o
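The change boils down to marking the group const, roughly like this (a
sketch, not the literal diff; the attribute list entries are elided):

#include <linux/sysfs.h>

/* Sketch: the group is never modified at runtime, so marking it const
 * lets it move out of .data into read-only data. */
static struct attribute *zram_disk_attrs[] = {
        /* ... individual &dev_attr_foo.attr entries ... */
        NULL,
};

static const struct attribute_group zram_disk_attr_group = {
        .attrs = zram_disk_attrs,
};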
Link: http://lkml.kernel.org/r/65680c1c4d85818f7094cbfa31c91bf28185ba1b.1499061182.git.arvind.yadav.cs@gmail.com
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
bc1bb362334ebc4c65dd4301f10fb70902b3db7d)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ic0765dea8c2fadb18623605ba48748a9b33df3fa
Minchan Kim [Thu, 6 Jul 2017 22:37:12 +0000 (15:37 -0700)]
UPSTREAM: zram: count same page write as page_stored
Regardless of whether it is a same-element page or not, it is surely a
write and is stored to zram, so we should increase the pages_stored stat.
Otherwise, users can see a zero value via mm_stat although they write a
lot of pages to zram.
Link: http://lkml.kernel.org/r/1494834068-27004-1-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
51f9f82c855d65ef14c2af10e0d2c86ec332a182)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I006d80df413a0fe0fd7dd58e535c6a2c03ab2c9d
Sangwoo Park [Wed, 3 May 2017 21:55:56 +0000 (14:55 -0700)]
UPSTREAM: zram: reduce load operation in page_same_filled
In the page_same_filled function, each element in the page is compared
with the value at the next index. The current comparison routine
compares the (i)th and (i+1)th values of the page.
In this case, two load operations occur for each comparison. But if we
store the first value of the page in a 'val' variable and use it to
compare with the others, the load operations are reduced. This reduces
the load operations per page by up to 64 times.
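A minimal sketch of the single-load comparison described above (not the
literal zram code, but the same technique):

#include <linux/types.h>
#include <asm/page.h>

/* Sketch: detect a page whose every machine word equals the first one.
 * 'val' stays in a register, so each loop iteration costs one load
 * instead of the two loads needed to compare page[i] with page[i+1]. */
static bool page_same_filled(void *ptr, unsigned long *element)
{
        unsigned long *page = ptr;
        unsigned long val = page[0];
        unsigned int pos;

        for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
                if (val != page[pos])
                        return false;
        }

        *element = val;
        return true;
}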
Link: http://lkml.kernel.org/r/1488428104-7257-1-git-send-email-sangwoo2.park@lge.com
Signed-off-by: Sangwoo Park <sangwoo2.park@lge.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
f0fe9984656604ea8effd5ff82709ff8ce1f954b)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I6b58b583e83139eee9f0540da12850c43510cb8e
Minchan Kim [Wed, 3 May 2017 21:55:53 +0000 (14:55 -0700)]
UPSTREAM: zram: use zram_free_page instead of open-coded
zram_free_page already handles the NULL handle case and same-element
pages, so use it to reduce the probability of errors. (Actually, I made
a mistake when I handled the same-element page feature.)
Link: http://lkml.kernel.org/r/1492052365-16169-7-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
302128dce142d780417aa548bfd7ef4dfb89fa80)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ie38c52dfb1959377936b7cd9158ad1b5a02219bd
Minchan Kim [Wed, 3 May 2017 21:55:50 +0000 (14:55 -0700)]
UPSTREAM: zram: introduce zram data accessor
With the element field, I sometimes confused handle and element access.
It might be my fault, but I think it's time to introduce accessors to
prevent such mistakes in the future. This patch is just a clean-up
patch, so it shouldn't change any behavior.
Link: http://lkml.kernel.org/r/1492052365-16169-6-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
643ae61d0f41c48aa7179921fe15ba4b4d8ddfec)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I3916d5561ab9fb2917455cac74bee431fbe84b5d
Minchan Kim [Wed, 3 May 2017 21:55:47 +0000 (14:55 -0700)]
UPSTREAM: zram: remove zram_meta structure
It's redundant now. Instead, remove it and use zram structure directly.
Link: http://lkml.kernel.org/r/1492052365-16169-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
beb6602cf87abee547b2692031185111f625153a)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I720a282710b97fd75c156305fd505d4497b89e4c
Minchan Kim [Wed, 3 May 2017 21:55:44 +0000 (14:55 -0700)]
UPSTREAM: zram: use zram_slot_lock instead of raw bit_spin_lock op
As part of this clean-up phase, I want to use a zram wrapper function
to lock table access, which is more consistent with zram's other
functions.
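A sketch of the wrapper pair (types and the ZRAM_ACCESS bit are simplified
here; this is not necessarily the exact driver code):

#include <linux/types.h>
#include <linux/bit_spinlock.h>

enum { ZRAM_ACCESS = 0 };       /* lock bit inside the slot's flags word */

struct zram_table_entry { unsigned long value; };
struct zram { struct zram_table_entry *table; };

/* Sketch: hide the raw bit_spin_lock behind zram-named helpers so all
 * table accesses go through one consistent locking interface. */
static void zram_slot_lock(struct zram *zram, u32 index)
{
        bit_spin_lock(ZRAM_ACCESS, &zram->table[index].value);
}

static void zram_slot_unlock(struct zram *zram, u32 index)
{
        bit_spin_unlock(ZRAM_ACCESS, &zram->table[index].value);
}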
Link: http://lkml.kernel.org/r/1492052365-16169-4-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
86c49814d449ebc51c7d455ac8e3d17b9fa702eb)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I6afee89dce63dff6d759c78e25926814fc016107
Minchan Kim [Wed, 3 May 2017 21:55:41 +0000 (14:55 -0700)]
BACKPORT: zram: partial IO refactoring
For architectures where PAGE_SIZE > 4K, zram has supported partial IO.
However, the mixed code for handling normal/partial IO is too messy and
error-prone to modify the IO handler functions for upcoming features,
so this patch aims at cleaning up zram's IO handling functions.
Link: http://lkml.kernel.org/r/1492052365-16169-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
1f7319c7427503abe2d365683588827b80f5714e)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I0f023d646d17f8156130cd0507b65f2223768adf
Minchan Kim [Wed, 3 May 2017 21:55:38 +0000 (14:55 -0700)]
BACKPORT: zram: handle multiple pages attached bio's bvec
Patch series "zram clean up", v2.
This patchset aims to clean up zram.
[1] clean up multiple pages' bvec handling.
[2] clean up partial IO handling
[3-6] clean up zram by using accessors and removing a pointless structure.
With [2-6] applied, we can save a few hundred bytes as well as hugely
enhance readability.
x86: 708 byte save
add/remove: 1/1 grow/shrink: 0/11 up/down: 478/-1186 (-708)
function old new delta
zram_special_page_read - 478 +478
zram_reset_device 317 314 -3
mem_used_max_store 131 128 -3
compact_store 96 93 -3
mm_stat_show 203 197 -6
zram_add 719 712 -7
zram_slot_free_notify 229 214 -15
zram_make_request 819 803 -16
zram_meta_free 128 111 -17
zram_free_page 180 151 -29
disksize_store 432 361 -71
zram_decompress_page.isra 504 - -504
zram_bvec_rw 2592 2080 -512
Total: Before=25350773, After=25350065, chg -0.00%
ppc64: 231 byte save
add/remove: 2/0 grow/shrink: 1/9 up/down: 681/-912 (-231)
function old new delta
zram_special_page_read - 480 +480
zram_slot_lock - 200 +200
vermagic 39 40 +1
mm_stat_show 256 248 -8
zram_meta_free 200 184 -16
zram_add 944 912 -32
zram_free_page 348 308 -40
disksize_store 572 492 -80
zram_decompress_page 664 564 -100
zram_slot_free_notify 292 160 -132
zram_make_request 1132 1000 -132
zram_bvec_rw 2768 2396 -372
Total: Before=17565825, After=17565594, chg -0.00%
This patch (of 6):
Johannes Thumshirn reported that the system panics when using an NVMe
over Fabrics loopback target with zram.
The reason is that zram expects each bvec in a bio to contain a single
page, but nvme can attach a huge bulk of pages to the bio's bvec, so
zram's index arithmetic can be wrong and the resulting out-of-bounds
access makes the system panic.
[1] in mainline solved the problem by limiting max_sectors with
SECTORS_PER_PAGE, but it makes zram slow because the bio has to be
split into single pages, so this patch makes zram aware of multiple
pages in a bvec so it can solve the problem without any regression
(ie, bio split).
[1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
bounds accesses
Link: http://lkml.kernel.org/r/20170413134057.GA27499@bbox
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
e86942c7b6c1e1dd5e539f3bf3cfb63799163048)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ibedc9e8163fc16a0c2569e8c3e33dd81bb325ee5
Minchan Kim [Thu, 13 Apr 2017 21:56:35 +0000 (14:56 -0700)]
UPSTREAM: zram: fix operator precedence to get offset
In zram_rw_page, the logic to get the offset is wrong due to operator
precedence (i.e., "<<" binds tighter than "&"). With a wrong offset,
zram can corrupt the user's data. This patch fixes it.
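Concretely, the problem has this shape (a sketch with local definitions,
not the literal zram_rw_page code):

#include <linux/types.h>
#include <asm/page.h>

#define SECTOR_SHIFT            9
#define SECTORS_PER_PAGE        (PAGE_SIZE >> SECTOR_SHIFT)

/* Buggy shape: "<<" binds tighter than "&", so this is parsed as
 * sector & ((SECTORS_PER_PAGE - 1) << SECTOR_SHIFT) -- the mask itself
 * is shifted, producing a wrong offset. */
static u32 offset_buggy(sector_t sector)
{
        return sector & (SECTORS_PER_PAGE - 1) << SECTOR_SHIFT;
}

/* Fixed shape: mask first to get the sector index within the page,
 * then shift to turn it into a byte offset. */
static u32 offset_fixed(sector_t sector)
{
        return (sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
}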
Fixes: 8c7f01025 ("zram: implement rw_page operation of zram")
Link: http://lkml.kernel.org/r/1492042622-12074-1-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
4ca82dabc9fbf7bc5322aa54d802cb3cb7b125c5)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I6abb2aef381463976aea1fa8e7f5ca07367190e9
zhouxianrong [Fri, 24 Feb 2017 22:59:27 +0000 (14:59 -0800)]
BACKPORT: zram: extend zero pages to same element pages
The idea is that, without doing more calculations, we extend zero pages
to same-element pages for zram. A zero page is a special case of a
same-element page with a zero element.
1. the test is done under Android 7.0
2. start up many applications in a loop
3. sample the zero pages, same-element pages (non-zero element)
and total pages in the function page_zero_filled
the results are listed below:
          ZERO          SAME          TOTAL
          36214         17842         598196

          ZERO/TOTAL    SAME/TOTAL    (ZERO+SAME)/TOTAL    ZERO/SAME
AVERAGE   0.060631909   0.024990816   0.085622726          2.663825038
STDEV     0.00674612    0.005887625   0.009707034          2.115881328
MAX       0.069698422   0.030046087   0.094975336          7.56043956
MIN       0.03959586    0.007332205   0.056055193          1.928985507
From the above data, the benefit is about 2.5% and up to 3% of the
total swapped-out pages.
The drawback of the patch is that when we recover a page from a
non-zero element, the operations are inefficient for partial reads.
This patch extends zero_page to same_page, so if any user has been
monitoring zero_pages, they may be surprised that the number increases,
but it's not harmful, I believe.
[minchan@kernel.org: do not free same element pages in zram_meta_free]
Link: http://lkml.kernel.org/r/20170207065741.GA2567@bbox
Link: http://lkml.kernel.org/r/1483692145-75357-1-git-send-email-zhouxianrong@huawei.com
Link: http://lkml.kernel.org/r/1486307804-27903-1-git-send-email-minchan@kernel.org
Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
8e19d540d107ee897eb9a874844060c94e2376c0)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I92ebb07a6ad96be82443d6e0c0d4f25cbe936915
Minchan Kim [Fri, 24 Feb 2017 22:56:47 +0000 (14:56 -0800)]
BACKPORT: zram: remove waitqueue for IO done
zram_reset_device() waits for ongoing writepage pages to be completed
via the zram->refcount logic. However, it's pointless because before
the reset we prevent further opening of zram via zram->claim and flush
all pending IO via fsync_bdev, so there should be no pending IO at
zram_reset_device().
So let's remove that code, which is even broken due to the lack of a
wake_up elsewhere.
Link: http://lkml.kernel.org/r/1485145031-11661-1-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
a09759acaacf6cf738e1bc6c66d41485c87fd371)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I97170fb576be7baae63f82334af0dd5e91b16763
Sergey Senozhatsky [Wed, 22 Feb 2017 23:46:45 +0000 (15:46 -0800)]
UPSTREAM: zram: remove obsolete sysfs attrs
We had a deprecated_attr_warn() warning for 2 years and now the time has
come and we finally can do the cleanup.
The plan was as follows:
: per-stat sysfs attributes are considered to be deprecated.
: The basic strategy is:
: -- the existing RW nodes will be downgraded to WO nodes (in linux 4.11)
: -- deprecated RO sysfs nodes will eventually be removed (in linux 4.11)
:
: The list of deprecated attributes can be found here:
: Documentation/ABI/obsolete/sysfs-block-zram
:
: Basically, every attribute that has its own read accessible sysfs
: node (e.g. num_reads) *AND* is accessible via one of the stat files
: (zram<id>/stat or zram<id>/io_stat or zram<id>/mm_stat) is considered
: to be deprecated.
The patch also removes `obsolete/sysfs-block-zram', clean ups
`testing/sysfs-block-zram' and tweaks zram.txt files.
Link: http://lkml.kernel.org/r/20170118035838.11090-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
c87d1655c29500b459fb135258a93f8309ada9c7)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Idd86259230d6a4bf0feeee53b5b69f3f3df774d4
Minchan Kim [Wed, 11 Jan 2017 00:58:21 +0000 (16:58 -0800)]
UPSTREAM: zram: support BDI_CAP_STABLE_WRITES
zram has used the per-cpu stream feature since v4.7. It aims to
increase the cache hit ratio of the scratch buffer used for compression.
The downside of that approach is that zram has to ask for memory space
for the compressed page in per-cpu context, which requires a stricter
gfp flag that can fail. If so, it retries allocating memory space
outside of the per-cpu context so it can get memory this time,
compresses the data again and copies it to that memory space.
In this scenario, zram assumes the data should never change, but that
is not true without stable page support. So, if the data is changed
under us, zram can overrun the buffer so that the zsmalloc free object
chain is broken and the system crashes, as in
https://bugzilla.suse.com/show_bug.cgi?id=997574
This patch adds BDI_CAP_STABLE_WRITES to zram to declare "I am a block
device needing *stable writes*".
Fixes: da9556a2367c ("zram: user per-cpu compression streams")
Link: http://lkml.kernel.org/r/1482366980-3782-4-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Hyeoncheol Lee <cheol.lee@lge.com>
Cc: <yjay.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: <stable@vger.kernel.org> [4.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
b09ab054b69b07077bd3292f67e777861ac796e5)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I3134a5882c1939792ffa71b8f31f7ab642a0e9a3
Minchan Kim [Wed, 11 Jan 2017 00:58:18 +0000 (16:58 -0800)]
UPSTREAM: zram: revalidate disk under init_lock
Commit b4c5c60920e3 ("zram: avoid lockdep splat by revalidate_disk")
moved the revalidate_disk call out of init_lock to avoid a lockdep
false-positive splat. However, commit 08eee69fcf6b ("zram: remove
init_lock in zram_make_request") removed init_lock from the IO path, so
there is no longer any worry about a lockdep splat. So, let's restore it.
This patch is needed to set BDI_CAP_STABLE_WRITES atomically in next
patch.
Fixes: da9556a2367c ("zram: user per-cpu compression streams")
Link: http://lkml.kernel.org/r/1482366980-3782-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Hyeoncheol Lee <cheol.lee@lge.com>
Cc: <yjay.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: <stable@vger.kernel.org> [4.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
e7ccfc4ccb703e0f033bd4617580039898e912dd)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Iebb6f694e46a797f8ce34029857c01c0c71086c7
Minchan Kim [Wed, 11 Jan 2017 00:58:15 +0000 (16:58 -0800)]
BACKPORT: mm: support anonymous stable page
During development of zram-swap asynchronous writeback, I found strange
corruption of a compressed page, resulting in:
Modules linked in: zram(E)
CPU: 3 PID: 1520 Comm: zramd-1 Tainted: G            E   4.8.0-mm1-00320-ge0d4894c9c38-dirty #3274
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
task: ffff88007620b840 task.stack: ffff880078090000
RIP: set_freeobj.part.43+0x1c/0x1f
RSP: 0018:ffff880078093ca8  EFLAGS: 00010246
RAX: 0000000000000018 RBX: ffff880076798d88 RCX: ffffffff81c408c8
RDX: 0000000000000018 RSI: 0000000000000000 RDI: 0000000000000246
RBP: ffff880078093cb0 R08: 0000000000000000 R09: 0000000000000000
R10: ffff88005bc43030 R11: 0000000000001df3 R12: ffff880076798d88
R13: 000000000005bc43 R14: ffff88007819d1b8 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff88007e380000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc934048f20 CR3: 0000000077b01000 CR4: 00000000000406e0
Call Trace:
obj_malloc+0x22b/0x260
zs_malloc+0x1e4/0x580
zram_bvec_rw+0x4cd/0x830 [zram]
page_requests_rw+0x9c/0x130 [zram]
zram_thread+0xe6/0x173 [zram]
kthread+0xca/0xe0
ret_from_fork+0x25/0x30
Investigation reveals that stable pages currently don't support
anonymous pages. IOW, reuse_swap_page can reuse the page without
waiting for writeback completion, so it can overwrite a page zram is
compressing.
Unfortunately, zram has used the per-cpu stream feature since v4.7.
It aims to increase the cache hit ratio of the scratch buffer used for
compression. The downside of that approach is that zram has to ask for
memory space for the compressed page in per-cpu context, which requires
a stricter gfp flag that can fail. If so, it retries allocating memory
space outside of the per-cpu context so it can get memory this time,
compresses the data again and copies it to that memory space.
In this scenario, zram assumes the data should never change, but that
is not true unless stable pages are supported. So, if the data is
changed under us, zram can overrun the buffer because the second
compression size could be bigger than the one we got in the previous
trial, and it blindly copies the bigger object into the smaller buffer,
which is a buffer overrun. The overrun breaks the zsmalloc free object
chaining, so the system crashes as shown above.
I think the following is the same problem:
https://bugzilla.suse.com/show_bug.cgi?id=997574
Unfortunately, reuse_swap_page should be atomic, so we cannot wait on
writeback there; the approach in this patch is to simply return false
if we find the page needs a stable page. Although it increases the
memory footprint temporarily, it happens rarely and the memory should
be reclaimed easily when it does happen. Also, it is better than
waiting for IO completion, which is a critical path for application
latency.
Fixes: da9556a2367c ("zram: user per-cpu compression streams")
Link: http://lkml.kernel.org/r/20161120233015.GA14113@bbox
Link: http://lkml.kernel.org/r/1482366980-3782-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Hyeoncheol Lee <cheol.lee@lge.com>
Cc: <yjay.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: <stable@vger.kernel.org> [4.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
f05714293a591038304ddae7cb0dd747bb3786cc)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I0fa5012aff9daf614b2d1d04f35b86ff7043ff21
Minchan Kim [Tue, 26 Jul 2016 22:23:34 +0000 (15:23 -0700)]
UPSTREAM: zram: use __GFP_MOVABLE for memory allocation
Zsmalloc is ready for page migration so zram can use __GFP_MOVABLE from
now on.
I did a test to see how it helps to create higher-order pages. The
test scenario is as follows:
KVM guest, 1G memory, ext4-formatted zram block device,
for i in `seq 1 8`;
do
dd if=/dev/vda1 of=mnt/test$i.txt bs=128M count=1 &
done
wait `pidof dd`
for i in `seq 1 2 8`;
do
rm -rf mnt/test$i.txt
done
fstrim -v mnt
echo "init"
cat /proc/buddyinfo
echo "compaction"
echo 1 > /proc/sys/vm/compact_memory
cat /proc/buddyinfo
old:
init
Node 0, zone DMA 208 120 51 41 11 0 0 0 0 0 0
Node 0, zone DMA32 16380 13777 9184 3805 789 54 3 0 0 0 0
compaction
Node 0, zone DMA 132 82 40 39 16 2 1 0 0 0 0
Node 0, zone DMA32 5219 5526 4969 3455 1831 677 139 15 0 0 0
new:
init
Node 0, zone DMA 379 115 97 19 2 0 0 0 0 0 0
Node 0, zone DMA32 18891 16774 10862 3947 637 21 0 0 0 0 0
compaction
Node 0, zone DMA 214 66 87 29 10 3 0 0 0 0 0
Node 0, zone DMA32 1612 3139 3154 2469 1745 990 384 94 7 0 0
As you can see, compaction made so many high-order pages. Yay!
Link: http://lkml.kernel.org/r/1464736881-24886-13-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
9bc482d3460501ac809457af26b46b72cd7dc212)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I5d7f6eaa4c2d8d3f4da30fc2bd21f4db1be95e50
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:59 +0000 (15:22 -0700)]
UPSTREAM: zram: drop gfp_t from zcomp_strm_alloc()
We now allocate streams from the CPU_UP hot-plug path; there are no
context-dependent stream allocations anymore and we can schedule from
zcomp_strm_alloc(). Use GFP_KERNEL directly and drop the gfp_t parameter.
Link: http://lkml.kernel.org/r/20160531122017.2878-9-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
16d37725a042cc66f9ee95889dd40e734264508e)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: If09c4a97f3d3e45ad578d2b1d64b26f65617774d
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:56 +0000 (15:22 -0700)]
UPSTREAM: zram: add more compression algorithms
Add "deflate", "lz4hc", "842" algorithms to the list of known
compression backends. The real availability of those algorithms,
however, depends on the corresponding CONFIG_CRYPTO_FOO config options.
[sergey.senozhatsky@gmail.com: zram-add-more-compression-algorithms-v3]
Link: http://lkml.kernel.org/r/20160604024902.11778-7-sergey.senozhatsky@gmail.com
Link: http://lkml.kernel.org/r/20160531122017.2878-8-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
eb9f56d82547db407779967a2251ea28969245b0)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ie46c7676363ef13c559b45dab4968e2cc48a6cbe
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:54 +0000 (15:22 -0700)]
UPSTREAM: zram: delete custom lzo/lz4
Remove lzo/lz4 backends, we use crypto API now.
[sergey.senozhatsky@gmail.com: zram-delete-custom-lzo-lz4-v3]
Link: http://lkml.kernel.org/r/20160604024902.11778-6-sergey.senozhatsky@gmail.com
Link: http://lkml.kernel.org/r/20160531122017.2878-7-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
ce1ed9f98e888aa220fb09da2e2bcfcfba218a27)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ic2aa300a1a66b61740da73833dab252dc0d4b74a
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:51 +0000 (15:22 -0700)]
UPSTREAM: zram: cosmetic: cleanup documentation
zram documentation is a mix of different styles: spaces, tabs, tabs +
spaces, etc. Clean it up.
Link: http://lkml.kernel.org/r/20160531122017.2878-6-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
69a30a8d2ac17c8080cf6ebfc91149fd6c2648b3)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ib71a1933e3da12b3a9f29b805a458cdc9815c36b
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:48 +0000 (15:22 -0700)]
UPSTREAM: zram: use crypto api to check alg availability
There is no way to get a string with all the crypto comp algorithms
supported by the crypto comp engine, so we need to maintain our own
backends list. At the same time we additionally need to use
crypto_has_comp() to make sure that the user has requested a compression
algorithm that is recognized by the crypto comp engine. Relying on
/proc/crypto is not an option here, because it does not show
not-yet-inserted compression modules.
Example:
modprobe zram
cat /proc/crypto | grep -i lz4
modprobe lz4
cat /proc/crypto | grep -i lz4
name : lz4
driver : lz4-generic
module : lz4
So the user can't tell from the /proc/crypto output whether lz4 is
really supported, unless someone or something has loaded it.
This patch also adds crypto_has_comp() to zcomp_available_show(). We
store all the compression algorithm names in zcomp's `backends' array,
regardless of the CONFIG_CRYPTO_FOO configuration, but show only those
that are also supported by the crypto engine. This helps the user know
the exact list of compression algorithms that can be used.
Example:
module lz4 is not loaded yet, but is supported by the crypto
engine. /proc/crypto has no information on this module, while
zram's `comp_algorithm' lists it:
cat /proc/crypto | grep -i lz4
cat /sys/block/zram0/comp_algorithm
[lzo] lz4 deflate lz4hc 842
We still use the `backends' array to determine if the requested
compression backend is known to the crypto API. This array, however,
may not contain some entries, therefore as the last step we call the
crypto_has_comp() function, which attempts to load the requested
compression algorithm module to determine if the crypto API supports
it. The advantage of this method is that we now permit the usage of
out-of-tree crypto compression modules (implementing S/W or H/W
compression).
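A sketch of the availability check described here; crypto_has_comp() is
the standard crypto API helper, while the backend list and loop below are
illustrative rather than the exact zcomp code:

#include <linux/crypto.h>
#include <linux/printk.h>

static const char * const backends[] = {
        "lzo", "lz4", "deflate", "lz4hc", "842", NULL,
};

/* Sketch: show a backend only if the crypto engine can actually provide
 * it; crypto_has_comp() may load the algorithm module on demand. */
static void show_available_backends(void)
{
        int i;

        for (i = 0; backends[i]; i++)
                if (crypto_has_comp(backends[i], 0, 0))
                        pr_info("%s\n", backends[i]);
}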
[sergey.senozhatsky@gmail.com: zram-use-crypto-api-to-check-alg-availability-v3]
Link: http://lkml.kernel.org/r/20160604024902.11778-4-sergey.senozhatsky@gmail.com
Link: http://lkml.kernel.org/r/20160531122017.2878-5-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
415403be37e204632b17bdb6857890fe5a220cea)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I7c823238329bd6e5180386507d16123228804cc5
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:45 +0000 (15:22 -0700)]
BACKPORT: zram: switch to crypto compress API
We don't have an idle zstreams list anymore and our write path now works
absolutely differently, preventing preemption during compression. This
removes possibilities of read paths preempting writes at wrong places
(which could badly affect the performance of both paths) and at the same
time opens the door for a move from custom LZO/LZ4 compression backends
implementation to a more generic one, using crypto compress API.
Joonsoo Kim [1] attempted to do this a while ago, but faced with the
need of introducing a new crypto API interface. The root cause was the
fact that crypto API compression algorithms require a compression stream
structure (in zram terminology) for both compression and decompression
ops, while in reality only several of compression algorithms really need
it. This resulted in a concept of context-less crypto API compression
backends [2]. Both write and read paths, though, would have been
executed with the preemption enabled, which in the worst case could have
resulted in a decreased worst-case performance, e.g. consider the
following case:
CPU0
zram_write()
spin_lock()
take the last idle stream
spin_unlock()
<< preempted >>
zram_read()
spin_lock()
no idle streams
spin_unlock()
schedule()
resuming zram_write compression()
but it took me some time to realize that, and it took even longer to
evolve zram and to make it ready for crypto API. The key turned out to be
-- drop the idle streams list entirely. Without the idle streams list we
are free to use compression algorithms that require compression stream for
decompression (read), because streams are now placed in per-cpu data and
each write path has to disable preemption for compression op, almost
completely eliminating the aforementioned case (technically, we still have
a small chance, because write path has a fast and a slow paths and the
slow path is executed with the preemption enabled; but the frequency of
failed fast path is too low).
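For reference, the generic crypto compress API that zcomp moves to looks
roughly like this (a sketch of the standard crypto API calls, not the
zcomp code itself; the function name is hypothetical):

#include <linux/crypto.h>
#include <linux/err.h>

/* Sketch: one compression round trip through the crypto comp API. */
static int crypto_comp_example(const u8 *src, unsigned int slen,
                               u8 *dst, unsigned int dlen)
{
        struct crypto_comp *tfm = crypto_alloc_comp("lzo", 0, 0);
        int ret;

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        /* dlen is in/out: on success it holds the compressed length. */
        ret = crypto_comp_compress(tfm, src, slen, dst, &dlen);

        crypto_free_comp(tfm);
        return ret;
}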
TEST
====
- 4 CPUs, x86_64 system
- 3G zram, lzo
- fio tests: read, randread, write, randwrite, rw, randrw
test script [3] command:
ZRAM_SIZE=3G LOG_SUFFIX=XXXX FIO_LOOPS=5 ./zram-fio-test.sh
BASE PATCHED
jobs1
READ: 2527.2MB/s 2482.7MB/s
READ: 2102.7MB/s 2045.0MB/s
WRITE: 1284.3MB/s 1324.3MB/s
WRITE: 1080.7MB/s 1101.9MB/s
READ: 430125KB/s 437498KB/s
WRITE: 430538KB/s 437919KB/s
READ: 399593KB/s 403987KB/s
WRITE: 399910KB/s 404308KB/s
jobs2
READ: 8133.5MB/s 7854.8MB/s
READ: 7086.6MB/s 6912.8MB/s
WRITE: 3177.2MB/s 3298.3MB/s
WRITE: 2810.2MB/s 2871.4MB/s
READ: 1017.6MB/s 1023.4MB/s
WRITE: 1018.2MB/s 1023.1MB/s
READ: 977836KB/s 984205KB/s
WRITE: 979435KB/s 985814KB/s
jobs3
READ: 13557MB/s 13391MB/s
READ: 11876MB/s 11752MB/s
WRITE: 4641.5MB/s 4682.1MB/s
WRITE: 4164.9MB/s 4179.3MB/s
READ: 1453.8MB/s 1455.1MB/s
WRITE: 1455.1MB/s 1458.2MB/s
READ: 1387.7MB/s 1395.7MB/s
WRITE: 1386.1MB/s 1394.9MB/s
jobs4
READ: 20271MB/s 20078MB/s
READ: 18033MB/s 17928MB/s
WRITE: 6176.8MB/s 6180.5MB/s
WRITE: 5686.3MB/s 5705.3MB/s
READ: 2009.4MB/s 2006.7MB/s
WRITE: 2007.5MB/s 2004.9MB/s
READ: 1929.7MB/s 1935.6MB/s
WRITE: 1926.8MB/s 1932.6MB/s
jobs5
READ: 18823MB/s 19024MB/s
READ: 18968MB/s 19071MB/s
WRITE: 6191.6MB/s 6372.1MB/s
WRITE: 5818.7MB/s 5787.1MB/s
READ: 2011.7MB/s 1981.3MB/s
WRITE: 2011.4MB/s 1980.1MB/s
READ: 1949.3MB/s 1935.7MB/s
WRITE: 1940.4MB/s 1926.1MB/s
jobs6
READ: 21870MB/s 21715MB/s
READ: 19957MB/s 19879MB/s
WRITE: 6528.4MB/s 6537.6MB/s
WRITE: 6098.9MB/s 6073.6MB/s
READ: 2048.6MB/s 2049.9MB/s
WRITE: 2041.7MB/s 2042.9MB/s
READ: 2013.4MB/s 1990.4MB/s
WRITE: 2009.4MB/s 1986.5MB/s
jobs7
READ: 21359MB/s 21124MB/s
READ: 19746MB/s 19293MB/s
WRITE: 6660.4MB/s 6518.8MB/s
WRITE: 6211.6MB/s 6193.1MB/s
READ: 2089.7MB/s 2080.6MB/s
WRITE: 2085.8MB/s 2076.5MB/s
READ: 2041.2MB/s 2052.5MB/s
WRITE: 2037.5MB/s 2048.8MB/s
jobs8
READ: 20477MB/s 19974MB/s
READ: 18922MB/s 18576MB/s
WRITE: 6851.9MB/s 6788.3MB/s
WRITE: 6407.7MB/s 6347.5MB/s
READ: 2134.8MB/s 2136.1MB/s
WRITE: 2132.8MB/s 2134.4MB/s
READ: 2074.2MB/s 2069.6MB/s
WRITE: 2087.3MB/s 2082.4MB/s
jobs9
READ: 19797MB/s 19994MB/s
READ: 18806MB/s 18581MB/s
WRITE: 6878.7MB/s 6822.7MB/s
WRITE: 6456.8MB/s 6447.2MB/s
READ: 2141.1MB/s 2154.7MB/s
WRITE: 2144.4MB/s 2157.3MB/s
READ: 2084.1MB/s 2085.1MB/s
WRITE: 2091.5MB/s 2092.5MB/s
jobs10
READ: 19794MB/s 19784MB/s
READ: 18794MB/s 18745MB/s
WRITE: 6984.4MB/s 6676.3MB/s
WRITE: 6532.3MB/s 6342.7MB/s
READ: 2150.6MB/s 2155.4MB/s
WRITE: 2156.8MB/s 2161.5MB/s
READ: 2106.4MB/s 2095.6MB/s
WRITE: 2109.7MB/s 2098.4MB/s
BASE PATCHED
jobs1 perfstat
stalled-cycles-frontend 102,480,595,419 ( 41.53%) 114,508,864,804 ( 46.92%)
stalled-cycles-backend 51,941,417,832 ( 21.05%) 46,836,112,388 ( 19.19%)
instructions 283,612,054,215 ( 1.15) 283,918,134,959 ( 1.16)
branches 56,372,560,385 ( 724.923) 56,449,814,753 ( 733.766)
branch-misses 374,826,000 ( 0.66%) 326,935,859 ( 0.58%)
jobs2 perfstat
stalled-cycles-frontend 155,142,745,777 ( 40.99%) 164,170,979,198 ( 43.82%)
stalled-cycles-backend 70,813,866,387 ( 18.71%) 66,456,858,165 ( 17.74%)
instructions 463,436,648,173 ( 1.22) 464,221,890,191 ( 1.24)
branches 91,088,733,902 ( 760.088) 91,278,144,546 ( 769.133)
branch-misses 504,460,363 ( 0.55%) 394,033,842 ( 0.43%)
jobs3 perfstat
stalled-cycles-frontend 201,300,397,212 ( 39.84%) 223,969,902,257 ( 44.44%)
stalled-cycles-backend 87,712,593,974 ( 17.36%) 81,618,888,712 ( 16.19%)
instructions 642,869,545,023 ( 1.27) 644,677,354,132 ( 1.28)
branches 125,724,560,594 ( 690.682) 126,133,159,521 ( 694.542)
branch-misses 527,941,798 ( 0.42%) 444,782,220 ( 0.35%)
jobs4 perfstat
stalled-cycles-frontend 246,701,197,429 ( 38.12%) 280,076,030,886 ( 43.29%)
stalled-cycles-backend 119,050,341,112 ( 18.40%) 110,955,641,671 ( 17.15%)
instructions 822,716,962,127 ( 1.27) 825,536,969,320 ( 1.28)
branches 160,590,028,545 ( 688.614) 161,152,996,915 ( 691.068)
branch-misses 650,295,287 ( 0.40%) 550,229,113 ( 0.34%)
jobs5 perfstat
stalled-cycles-frontend 298,958,462,516 ( 38.30%) 344,852,200,358 ( 44.16%)
stalled-cycles-backend 137,558,742,122 ( 17.62%) 129,465,067,102 ( 16.58%)
instructions 1,005,714,688,752 ( 1.29) 1,007,657,999,432 ( 1.29)
branches 195,988,773,962 ( 697.730) 196,446,873,984 ( 700.319)
branch-misses 695,818,940 ( 0.36%) 624,823,263 ( 0.32%)
jobs6 perfstat
stalled-cycles-frontend 334,497,602,856 ( 36.71%) 387,590,419,779 ( 42.38%)
stalled-cycles-backend 163,539,365,335 ( 17.95%) 152,640,193,639 ( 16.69%)
instructions 1,184,738,177,851 ( 1.30) 1,187,396,281,677 ( 1.30)
branches 230,592,915,640 ( 702.902) 231,253,802,882 ( 702.356)
branch-misses 747,934,786 ( 0.32%) 643,902,424 ( 0.28%)
jobs7 perfstat
stalled-cycles-frontend 396,724,684,187 ( 37.71%) 460,705,858,952 ( 43.84%)
stalled-cycles-backend 188,096,616,496 ( 17.88%) 175,785,787,036 ( 16.73%)
instructions 1,364,041,136,608 ( 1.30) 1,366,689,075,112 ( 1.30)
branches 265,253,096,936 ( 700.078) 265,890,524,883 ( 702.839)
branch-misses 784,991,589 ( 0.30%) 729,196,689 ( 0.27%)
jobs8 perfstat
stalled-cycles-frontend 440,248,299,870 ( 36.92%) 509,554,793,816 ( 42.46%)
stalled-cycles-backend 222,575,930,616 ( 18.67%) 213,401,248,432 ( 17.78%)
instructions 1,542,262,045,114 ( 1.29) 1,545,233,932,257 ( 1.29)
branches 299,775,178,439 ( 697.666) 300,528,458,505 ( 694.769)
branch-misses 847,496,084 ( 0.28%) 748,794,308 ( 0.25%)
jobs9 perfstat
stalled-cycles-frontend 506,269,882,480 ( 37.86%) 592,798,032,820 ( 44.43%)
stalled-cycles-backend 253,192,498,861 ( 18.93%) 233,727,666,185 ( 17.52%)
instructions 1,721,985,080,913 ( 1.29) 1,724,666,236,005 ( 1.29)
branches 334,517,360,255 ( 694.134) 335,199,758,164 ( 697.131)
branch-misses 873,496,730 ( 0.26%) 815,379,236 ( 0.24%)
jobs10 perfstat
stalled-cycles-frontend 549,063,363,749 ( 37.18%) 651,302,376,662 ( 43.61%)
stalled-cycles-backend 281,680,986,810 ( 19.07%) 277,005,235,582 ( 18.55%)
instructions 1,901,859,271,180 ( 1.29) 1,906,311,064,230 ( 1.28)
branches 369,398,536,153 ( 694.004) 370,527,696,358 ( 688.409)
branch-misses 967,929,335 ( 0.26%) 890,125,056 ( 0.24%)
                         BASE                 PATCHED
seconds elapsed          79.421641008         78.735285546
seconds elapsed          61.471246133         60.869085949
seconds elapsed          62.317058173         62.224188495
seconds elapsed          60.030739363         60.081102518
seconds elapsed          74.070398362         74.317582865
seconds elapsed          84.985953007         85.414364176
seconds elapsed          97.724553255         98.173311344
seconds elapsed          109.488066758        110.268399318
seconds elapsed          122.768189405        122.967164498
seconds elapsed          135.130035105        136.934770801
On my other system (8 x86_64 CPUs, short version of test results):
                         BASE                 PATCHED
seconds elapsed          19.518065994         19.806320662
seconds elapsed          15.172772749         15.594718291
seconds elapsed          13.820925970         13.821708564
seconds elapsed          13.293097816         14.585206405
seconds elapsed          16.207284118         16.064431606
seconds elapsed          17.958376158         17.771825767
seconds elapsed          19.478009164         19.602961508
seconds elapsed          21.347152811         21.352318709
seconds elapsed          24.478121126         24.171088735
seconds elapsed          26.865057442         26.767327618
So performance-wise the numbers are quite similar.
Also update zcomp interface to be more aligned with the crypto API.
[1] http://marc.info/?l=linux-kernel&m=144480832108927&w=2
[2] http://marc.info/?l=linux-kernel&m=145379613507518&w=2
[3] https://github.com/sergey-senozhatsky/zram-perf-test
Link: http://lkml.kernel.org/r/20160531122017.2878-3-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
ebaf9ab56d9d5f350969bd1ea8f47234623c9684)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ia0c362b7419de59e6c6ea81c37f99ef1d22c2b4b
Sergey Senozhatsky [Tue, 26 Jul 2016 22:22:42 +0000 (15:22 -0700)]
UPSTREAM: zram: rename zstrm find-release functions
This started as 'add zlib support' work, but after some thinking I saw
no blockers for a bigger change -- a switch to the crypto API.
We don't have an idle zstreams list anymore and our write path now works
absolutely differently, preventing preemption during compression. This
removes possibilities of read paths preempting writes at wrong places
and opens the door for a move from custom LZO/LZ4 compression backends
implementation to a more generic one, using crypto compress API.
This patch set also eliminates the need of a new context-less crypto API
interface, which was quite hard to sell, so we can move along faster.
benchmarks:
(x86_64, 4GB, zram-perf script)
perf reported run-time fio (max jobs=3). I performed fio test with the
increasing number of parallel jobs (max to 3) on a 3G zram device, using
`static' data and the following crypto comp algorithms:
842, deflate, lz4, lz4hc, lzo
the output was:
- test running time (which can tell us which algorithms perform faster)
and
- zram mm_stat (which tells the compressed memory size, max used memory, etc).
It's just for information. For example, LZ4HC has twice the running
time of LZO, but the compressed memory size is 23592960 vs 34603008
bytes.
test-fio-zram-842
197.907655282 seconds time elapsed
201.623142884 seconds time elapsed
226.854291345 seconds time elapsed
test-fio-zram-DEFLATE
253.259516155 seconds time elapsed
258.148563401 seconds time elapsed
290.251909365 seconds time elapsed
test-fio-zram-LZ4
27.022598717 seconds time elapsed
29.580522717 seconds time elapsed
33.293463430 seconds time elapsed
test-fio-zram-LZ4HC
56.393954615 seconds time elapsed
74.904659747 seconds time elapsed
101.940998564 seconds time elapsed
test-fio-zram-LZO
28.155948075 seconds time elapsed
30.390036330 seconds time elapsed
34.455773159 seconds time elapsed
zram mm_stat-s (max fio jobs=3)
test-fio-zram-842
mm_stat (jobs1):
3221225472 673185792 690266112 0 690266112 0 0
mm_stat (jobs2):
3221225472 673185792 690266112 0 690266112 0 0
mm_stat (jobs3):
3221225472 673185792 690266112 0 690266112 0 0
test-fio-zram-DEFLATE
mm_stat (jobs1):
3221225472 24379392 37761024 0 37761024 0 0
mm_stat (jobs2):
3221225472 24379392 37761024 0 37761024 0 0
mm_stat (jobs3):
3221225472 24379392 37761024 0 37761024 0 0
test-fio-zram-LZ4
mm_stat (jobs1):
3221225472 23592960 37761024 0 37761024 0 0
mm_stat (jobs2):
3221225472 23592960 37761024 0 37761024 0 0
mm_stat (jobs3):
3221225472 23592960 37761024 0 37761024 0 0
test-fio-zram-LZ4HC
mm_stat (jobs1):
3221225472 23592960 37761024 0 37761024 0 0
mm_stat (jobs2):
3221225472 23592960 37761024 0 37761024 0 0
mm_stat (jobs3):
3221225472 23592960 37761024 0 37761024 0 0
test-fio-zram-LZO
mm_stat (jobs1):
3221225472 34603008 50335744 0 50335744 0 0
mm_stat (jobs2):
3221225472 34603008 50335744 0 50335744 0 0
mm_stat (jobs3):
3221225472 34603008 50335744 0 50339840 0 0
This patch (of 8):
We don't perform any zstream idle list lookup anymore, so
zcomp_strm_find()/zcomp_strm_release() names are not representative.
Rename to zcomp_stream_get()/zcomp_stream_put().
Link: http://lkml.kernel.org/r/20160531122017.2878-2-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
2aea8493d326bdf15446768333e1d2c91b040b5c)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I2f4c9e215bca73ba5adb1354aaec6e32e420920d
Sergey Senozhatsky [Sat, 21 May 2016 00:00:02 +0000 (17:00 -0700)]
UPSTREAM: zram: introduce per-device debug_stat sysfs node
debug_stat sysfs is read-only and represents various debugging data that
zram developers may need. This file is not meant to be used by anyone
else: its content is not documented and will change any time w/o any
notice. Therefore, the output of debug_stat file contains a version
string. To avoid any confusion, we will increase the version number
every time we modify the output.
At the moment this file exports only one value -- the number of
re-compressions, IOW, the number of times the compression fast path has
failed. This stat is temporary and will be useful in case any per-cpu
compression stream regressions are reported.
Link: http://lkml.kernel.org/r/20160513230834.GB26763@bbox
Link: http://lkml.kernel.org/r/20160511134553.12655-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
623e47fc64f8de480b322b7ed68855f97137e2a5)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: Ie0ef61db7aa0b2c713de1d8bf48e8a545b4276e9
Sergey Senozhatsky [Fri, 20 May 2016 23:59:59 +0000 (16:59 -0700)]
UPSTREAM: zram: remove max_comp_streams internals
Remove the internal part of max_comp_streams interface, since we
switched to per-cpu streams. We will keep RW max_comp_streams attr
around, because:
a) we may (silently) switch back to idle compression streams list and
don't want to disturb user space
b) max_comp_streams attr must wait for the next 'lay off cycle'; we
give user space 2 years to adjust before we remove/downgrade the attr,
and there are already several attrs scheduled for removal in 4.11, so
it's too late for max_comp_streams.
This slightly changes user-visible behaviour:
- First, reading from the max_comp_streams file will now always return
the number of online CPUs.
- Second, writing to max_comp_streams will not take any effect.
Link: http://lkml.kernel.org/r/20160503165546.25201-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit
43209ea2d17aae1540d4e28274e36404f72702f2)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug:
112488418
Change-Id: I1902e741b4d3b83c5bd0d66bf1bae021dbfe2056
Sergey Senozhatsky [Fri, 20 May 2016 23:59:51 +0000 (16:59 -0700)]
UPSTREAM: zram: user per-cpu compression streams
Remove the idle streams list and keep compression streams in per-cpu
data. This removes two contended spin_lock()/spin_unlock() calls from
the write path and also prevents a write OP from being preempted while
holding the compression stream, which can cause slowdowns.
For instance, let's assume that we have N CPUs and N-2
max_comp_streams. TASK1 owns the last idle stream, and TASK2-TASK3 come
in with write requests:
TASK1 TASK2 TASK3
zram_bvec_write()
spin_lock
find stream
spin_unlock
compress
<<preempted>> zram_bvec_write()
spin_lock
find stream
spin_unlock
no_stream
schedule
zram_bvec_write()
spin_lock
find_stream
spin_unlock
no_stream
schedule
spin_lock
release stream
spin_unlock
wake up TASK2
Not only will TASK2 and TASK3 not get the stream, TASK1 will be
preempted in the middle of its operation, while we would prefer it to
finish compression and release the stream.
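With per-cpu streams the get/put pair reduces to pinning the caller to the
local CPU, roughly as below (a sketch with simplified types; the helper
names follow the later rename in this series and are not necessarily the
exact zcomp code):

#include <linux/percpu.h>

struct zcomp_strm { void *buffer; };

struct zcomp {
        struct zcomp_strm * __percpu *stream;
};

/* Sketch: grab the local CPU's stream; preemption stays disabled until
 * the stream is put back, so the compression op cannot be preempted. */
static struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
{
        return *get_cpu_ptr(comp->stream);
}

static void zcomp_stream_put(struct zcomp *comp)
{
        put_cpu_ptr(comp->stream);
}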
Test environment: x86_64, 4 CPU box, 3G zram, lzo
The following fio tests were executed:
read, randread, write, randwrite, rw, randrw
with the increasing number of jobs from 1 to 10.
4 streams 8 streams per-cpu
===========================================================
jobs1
READ: 2520.1MB/s 2566.5MB/s 2491.5MB/s
READ: 2102.7MB/s 2104.2MB/s 2091.3MB/s
WRITE: 1355.1MB/s 1320.2MB/s 1378.9MB/s
WRITE: 1103.5MB/s 1097.2MB/s 1122.5MB/s
READ: 434013KB/s 435153KB/s 439961KB/s
WRITE: 433969KB/s 435109KB/s 439917KB/s
READ: 403166KB/s 405139KB/s 403373KB/s
WRITE: 403223KB/s 405197KB/s 403430KB/s
jobs2
READ: 7958.6MB/s 8105.6MB/s 8073.7MB/s
READ: 6864.9MB/s 6989.8MB/s 7021.8MB/s
WRITE: 2438.1MB/s 2346.9MB/s 3400.2MB/s
WRITE: 1994.2MB/s 1990.3MB/s 2941.2MB/s
READ: 981504KB/s 973906KB/s 1018.8MB/s
WRITE: 981659KB/s 974060KB/s 1018.1MB/s
READ: 937021KB/s 938976KB/s 987250KB/s
WRITE: 934878KB/s 936830KB/s 984993KB/s
jobs3
READ: 13280MB/s 13553MB/s 13553MB/s
READ: 11534MB/s 11785MB/s 11755MB/s
WRITE: 3456.9MB/s 3469.9MB/s 4810.3MB/s
WRITE: 3029.6MB/s 3031.6MB/s 4264.8MB/s
READ: 1363.8MB/s 1362.6MB/s 1448.9MB/s
WRITE: 1361.9MB/s 1360.7MB/s 1446.9MB/s
READ: 1309.4MB/s 1310.6MB/s 1397.5MB/s
WRITE: 1307.4MB/s 1308.5MB/s 1395.3MB/s
jobs4
READ: 20244MB/s 20177MB/s 20344MB/s
READ: 17886MB/s 17913MB/s 17835MB/s
WRITE: 4071.6MB/s 4046.1MB/s 6370.2MB/s
WRITE: 3608.9MB/s 3576.3MB/s 5785.4MB/s
READ: 1824.3MB/s 1821.6MB/s 1997.5MB/s
WRITE: 1819.8MB/s 1817.4MB/s 1992.5MB/s
READ: 1765.7MB/s 1768.3MB/s 1937.3MB/s
WRITE: 1767.5MB/s 1769.1MB/s 1939.2MB/s
jobs5
READ: 18663MB/s 18986MB/s 18823MB/s
READ: 16659MB/s 16605MB/s 16954MB/s
WRITE: 3912.4MB/s 3888.7MB/s 6126.9MB/s
WRITE: 3506.4MB/s 3442.5MB/s 5519.3MB/s
READ: 1798.2MB/s 1746.5MB/s 1935.8MB/s
WRITE: 1792.7MB/s 1740.7MB/s 1929.1MB/s
READ: 1727.6MB/s 1658.2MB/s 1917.3MB/s
WRITE: 1726.5MB/s 1657.2MB/s 1916.6MB/s
jobs6
READ: 21017MB/s 20922MB/s 21162MB/s
READ: 19022MB/s 19140MB/s 18770MB/s
WRITE: 3968.2MB/s 4037.7MB/s 6620.8MB/s
WRITE: 3643.5MB/s 3590.2MB/s 6027.5MB/s
READ: 1871.8MB/s 1880.5MB/s 2049.9MB/s
WRITE: 1867.8MB/s 1877.2MB/s 2046.2MB/s
READ: 1755.8MB/s 1710.3MB/s 1964.7MB/s
WRITE: 1750.5MB/s 1705.9MB/s 1958.8MB/s
jobs7
READ: 21103MB/s 20677MB/s 21482MB/s
READ: 18522MB/s 18379MB/s 19443MB/s
WRITE: 4022.5MB/s 4067.4MB/s 6755.9MB/s
WRITE: 3691.7MB/s 3695.5MB/s 5925.6MB/s
READ: 1841.5MB/s 1933.9MB/s 2090.5MB/s
WRITE: 1842.7MB/s 1935.3MB/s 2091.9MB/s
READ: 1832.4MB/s 1856.4MB/s 1971.5MB/s
WRITE: 1822.3MB/s 1846.2MB/s 1960.6MB/s
jobs8
READ: 20463MB/s 20194MB/s 20862MB/s
READ: 18178MB/s 17978MB/s 18299MB/s
WRITE: 4085.9MB/s 4060.2MB/s 7023.8MB/s
WRITE: 3776.3MB/s 3737.9MB/s 6278.2MB/s
READ: 1957.6MB/s 1944.4MB/s 2109.5MB/s
WRITE: 1959.2MB/s 1946.2MB/s 2111.4MB/s
READ: 1900.6MB/s 1885.7MB/s 2082.1MB/s
WRITE: 1896.2MB/s 1881.4MB/s 2078.3MB/s
jobs9
READ: 19692MB/s 19734MB/s 19334MB/s
READ: 17678MB/s 18249MB/s 17666MB/s
WRITE: 4004.7MB/s 4064.8MB/s 6990.7MB/s
WRITE: 3724.7MB/s 3772.1MB/s 6193.6MB/s
READ: 1953.7MB/s 1967.3MB/s 2105.6MB/s
WRITE: 1953.4MB/s 1966.7MB/s 2104.1MB/s
READ: 1860.4MB/s 1897.4MB/s 2068.5MB/s
WRITE: 1858.9MB/s 1895.9MB/s 2066.8MB/s
jobs10
READ: 19730MB/s 19579MB/s 19492MB/s
READ: 18028MB/s 18018MB/s 18221MB/s
WRITE: 4027.3MB/s 4090.6MB/s 7020.1MB/s
WRITE: 3810.5MB/s 3846.8MB/s 6426.8MB/s
READ: 1956.1MB/s 1994.6MB/s 2145.2MB/s
WRITE: 1955.9MB/s 1993.5MB/s 2144.8MB/s
READ: 1852.8MB/s 1911.6MB/s 2075.8MB/s
WRITE: 1855.7MB/s 1914.6MB/s 2078.1MB/s
perf stat
4 streams 8 streams per-cpu
====================================================================================================================
jobs1
stalled-cycles-frontend 23,174,811,209 ( 38.21%) 23,220,254,188 ( 38.25%) 23,061,406,918 ( 38.34%)
stalled-cycles-backend 11,514,174,638 ( 18.98%) 11,696,722,657 ( 19.27%) 11,370,852,810 ( 18.90%)
instructions 73,925,005,782 ( 1.22) 73,903,177,632 ( 1.22) 73,507,201,037 ( 1.22)
branches 14,455,124,835 ( 756.063) 14,455,184,779 ( 755.281) 14,378,599,509 ( 758.546)
branch-misses 69,801,336 ( 0.48%) 80,225,529 ( 0.55%) 72,044,726 ( 0.50%)
jobs2
stalled-cycles-frontend 49,912,741,782 ( 46.11%) 50,101,189,290 ( 45.95%) 32,874,195,633 ( 35.11%)
stalled-cycles-backend 27,080,366,230 ( 25.02%) 27,949,970,232 ( 25.63%) 16,461,222,706 ( 17.58%)
instructions 122,831,629,690 ( 1.13) 122,919,846,419 ( 1.13) 121,924,786,775 ( 1.30)
branches 23,725,889,239 ( 692.663) 23,733,547,140 ( 688.062) 23,553,950,311 ( 794.794)
branch-misses 90,733,041 ( 0.38%) 96,320,895 ( 0.41%) 84,561,092 ( 0.36%)
jobs3
stalled-cycles-frontend 66,437,834,608 ( 45.58%) 63,534,923,344 ( 43.69%) 42,101,478,505 ( 33.19%)
stalled-cycles-backend 34,940,799,661 ( 23.97%) 34,774,043,148 ( 23.91%) 21,163,324,388 ( 16.68%)
instructions 171,692,121,862 ( 1.18) 171,775,373,044 ( 1.18) 170,353,542,261 ( 1.34)
branches 32,968,962,622 ( 628.723) 32,987,739,894 ( 630.512) 32,729,463,918 ( 717.027)
branch-misses 111,522,732 ( 0.34%) 110,472,894 ( 0.33%) 99,791,291 ( 0.30%)
jobs4
stalled-cycles-frontend 98,741,701,675 ( 49.72%) 94,797,349,965 ( 47.59%) 54,535,655,381 ( 33.53%)
stalled-cycles-backend 54,642,609,615 ( 27.51%) 55,233,554,408 ( 27.73%) 27,882,323,541 ( 17.14%)
instructions 220,884,807,851 ( 1.11) 220,930,887,273 ( 1.11) 218,926,845,851 ( 1.35)
branches 42,354,518,180 ( 592.105) 42,362,770,587 ( 590.452) 41,955,552,870 ( 716.154)
branch-misses 138,093,449 ( 0.33%) 131,295,286 ( 0.31%) 121,794,771 ( 0.29%)
jobs5
stalled-cycles-frontend 116,219,747,212 ( 48.14%) 110,310,397,012 ( 46.29%) 66,373,082,723 ( 33.70%)
stalled-cycles-backend 66,325,434,776 ( 27.48%) 64,157,087,914 ( 26.92%) 32,999,097,299 ( 16.76%)
instructions 270,615,008,466 ( 1.12) 270,546,409,525 ( 1.14) 268,439,910,948 ( 1.36)
branches 51,834,046,557 ( 599.108) 51,811,867,722 ( 608.883) 51,412,576,077 ( 729.213)
branch-misses 158,197,086 ( 0.31%) 142,639,805 ( 0.28%) 133,425,455 ( 0.26%)
jobs6
stalled-cycles-frontend 138,009,414,492 ( 48.23%) 139,063,571,254 ( 48.80%) 75,278,568,278 ( 32.80%)
stalled-cycles-backend 79,211,949,650 ( 27.68%) 79,077,241,028 ( 27.75%) 37,735,797,899 ( 16.44%)
instructions 319,763,993,731 ( 1.12) 319,937,782,834 ( 1.12) 316,663,600,784 ( 1.38)
branches 61,219,433,294 ( 595.056) 61,250,355,540 ( 598.215) 60,523,446,617 ( 733.706)
branch-misses 169,257,123 ( 0.28%) 154,898,028 ( 0.25%) 141,180,587 ( 0.23%)
jobs7
stalled-cycles-frontend 162,974,812,119 ( 49.20%) 159,290,061,987 ( 48.43%) 88,046,641,169 ( 33.21%)
stalled-cycles-backend 92,223,151,661 ( 27.84%) 91,667,904,406 ( 27.87%) 44,068,454,971 ( 16.62%)
instructions 369,516,432,430 ( 1.12) 369,361,799,063 ( 1.12) 365,290,380,661 ( 1.38)
branches 70,795,673,950 ( 594.220) 70,743,136,124 ( 597.876) 69,803,996,038 ( 732.822)
branch-misses 181,708,327 ( 0.26%) 165,767,821 ( 0.23%) 150,109,797 ( 0.22%)
jobs8
stalled-cycles-frontend 185,000,017,027 ( 49.30%) 182,334,345,473 ( 48.37%) 99,980,147,041 ( 33.26%)
stalled-cycles-backend 105,753,516,186 ( 28.18%) 107,937,830,322 ( 28.63%) 51,404,177,181 ( 17.10%)
instructions 418,153,161,055 ( 1.11) 418,308,565,828 ( 1.11) 413,653,475,581 ( 1.38)
branches 80,035,882,398 ( 592.296) 80,063,204,510 ( 589.843) 79,024,105,589 ( 730.530)
branch-misses 199,764,528 ( 0.25%) 177,936,926 ( 0.22%) 160,525,449 ( 0.20%)
jobs9
stalled-cycles-frontend 210,941,799,094 ( 49.63%) 204,714,679,254 ( 48.55%) 114,251,113,756 ( 33.96%)
stalled-cycles-backend 122,640,849,067 ( 28.85%) 122,188,553,256 ( 28.98%) 58,360,041,127 ( 17.35%)
instructions 468,151,025,415 ( 1.10) 467,354,869,323 ( 1.11) 462,665,165,216 ( 1.38)
branches 89,657,067,510 ( 585.628) 89,411,550,407 ( 588.990) 88,360,523,943 ( 730.151)
branch-misses 218,292,301 ( 0.24%) 191,701,247 ( 0.21%) 178,535,678 ( 0.20%)
jobs10
stalled-cycles-frontend 233,595,958,008 ( 49.81%) 227,540,615,689 ( 49.11%) 160,341,979,938 ( 43.07%)
stalled-cycles-backend 136,153,676,021 ( 29.03%) 133,635,240,742 ( 28.84%) 65,909,135,465 ( 17.70%)
instructions 517,001,168,497 ( 1.10) 516,210,976,158 ( 1.11) 511,374,038,613 ( 1.37)
branches 98,911,641,329 ( 585.796) 98,700,069,712 ( 591.583) 97,646,761,028 ( 728.712)
branch-misses 232,341,823 ( 0.23%) 199,256,308 ( 0.20%) 183,135,268 ( 0.19%)
Per-cpu streams tend to cause significantly fewer stalled cycles,
execute fewer branches and hit fewer branch-misses.
perf stat reported execution time
4 streams 8 streams per-cpu
====================================================================
jobs1
seconds elapsed        20.909073870      20.875670495      20.817838540
jobs2
seconds elapsed        18.529488399      18.720566469      16.356103108
jobs3
seconds elapsed        18.991159531      18.991340812      16.766216066
jobs4
seconds elapsed        19.560643828      19.551323547      16.246621715
jobs5
seconds elapsed        24.746498464      25.221646740      20.696112444
jobs6
seconds elapsed        28.258181828      28.289765505      22.885688857
jobs7
seconds elapsed        32.632490241      31.909125381      26.272753738
jobs8
seconds elapsed        35.651403851      36.027596308      29.108024711
jobs9
seconds elapsed        40.569362365      40.024227989      32.898204012
jobs10
seconds elapsed        44.673112304      43.874898137      35.632952191
Please see
Link: http://marc.info/?l=linux-kernel&m=146166970727530
Link: http://marc.info/?l=linux-kernel&m=146174716719650
for more test results (under low memory conditions).
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit da9556a2367cf2261ab4d3e100693c82fb1ddb26)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug: 112488418
Change-Id: I1af1a466f0ac3f74f9c36f06685111ccef0f4ec4
Sergey Senozhatsky [Fri, 20 May 2016 23:59:48 +0000 (16:59 -0700)]
BACKPORT: zsmalloc: require GFP in zs_malloc()
Pass GFP flags to zs_malloc() instead of using a fixed mask supplied to
zs_create_pool(), so we can be more flexible; more importantly, we need
this to switch zram to per-cpu compression streams -- zram will try to
allocate a handle with preemption disabled in a fast path and switch to
a slow path (using a different gfp mask) if the fast one has failed.
Apart from that, this also aligns the zs_malloc() interface with
zspool/zbud.
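A schematic fragment of the fast/slow path this enables on the zram
side (names such as meta->mem_pool, zram->comp, clen and the
compress_again label are illustrative; zcomp_stream_put() is the
per-cpu release helper sketched earlier):
	/* Sketch only: fast path allocates without sleeping while the
	 * per-cpu stream is held; on failure, drop the stream and retry
	 * with a gfp mask that may sleep, then compress again with a
	 * freshly acquired stream. */
	handle = zs_malloc(meta->mem_pool, clen,
			   __GFP_KSWAPD_RECLAIM | __GFP_NOWARN | __GFP_HIGHMEM);
	if (!handle) {
		zcomp_stream_put(zram->comp);	/* re-enable preemption */
		handle = zs_malloc(meta->mem_pool, clen,
				   GFP_NOIO | __GFP_HIGHMEM);	/* may sleep */
		if (handle)
			goto compress_again;
		return -ENOMEM;
	}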
[sergey.senozhatsky@gmail.com: pass GFP flags to zs_malloc() instead of using a fixed mask]
Link: http://lkml.kernel.org/r/20160429150942.GA637@swordfish
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit d0d8da2dc49dfdfe1d788eaf4d55eb5d4964d926)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug: 112488418
Change-Id: I31276c9351be21a4ed588681b332e98142b76526
Sergey Senozhatsky [Thu, 14 Jan 2016 23:22:35 +0000 (15:22 -0800)]
UPSTREAM: zram/zcomp: do not zero out zcomp private pages
Do not __GFP_ZERO allocated zcomp ->private pages. We keep allocated
streams around and use them for read/write requests, so we supply a
zeroed out ->private to the compression algorithm as a scratch buffer
only once -- the first time we use that stream. For the rest of the IO
requests served by this stream, ->private usually contains some
temporary data from the previous requests.
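Roughly, the change amounts to dropping the zeroing from the backend's
per-stream allocation (the lzo backend is shown here purely as an
illustration; the exact before/after allocators differ per backend):
	/* Sketch only: allocate the per-stream scratch buffer without
	 * zeroing -- later requests simply overwrite whatever the
	 * previous request left in ->private. */
	static void *lzo_create(gfp_t flags)
	{
		/* was: kzalloc(LZO1X_MEM_COMPRESS, flags); */
		return kmalloc(LZO1X_MEM_COMPRESS, flags);
	}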
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit e02d238c9852a91b30da9ea32ce36d1416cdc683)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug: 112488418
Change-Id: I911832da703f596998a4139d6033ef1564848c9e
Minchan Kim [Thu, 14 Jan 2016 23:22:32 +0000 (15:22 -0800)]
UPSTREAM: zram: pass gfp from zcomp frontend to backend
Each zcomp backend uses its own gfp flags, but that is pointless
because the context in which they can be called is driven by the upper
layer (i.e. the zcomp frontend). Moreover, the zcomp frontend can call
them in different contexts: in one context (i.e. the zram init part)
allocation should be guaranteed to succeed, while in another (i.e.
further stream allocation for accelerating I/O speed) it is merely
optional. So let's pass gfp down from the driver (i.e. the zcomp
frontend), following normal MM convention.
[sergey.senozhatsky@gmail.com: add missing __vmalloc zero and highmem gfps]
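Schematically (the lzo backend is shown as an illustration; the real
callbacks live under drivers/block/zram/), the backend ->create hook
now receives the gfp mask from the frontend instead of hard-coding one:
	/* Sketch only: the caller (zcomp frontend) decides the allocation
	 * context; the backend just forwards the gfp flags it is given,
	 * falling back to vmalloc for large or fragmented allocations. */
	static void *lzo_create(gfp_t flags)
	{
		void *ret;

		ret = kmalloc(LZO1X_MEM_COMPRESS, flags);
		if (!ret)
			ret = __vmalloc(LZO1X_MEM_COMPRESS,
					flags | __GFP_ZERO | __GFP_HIGHMEM,
					PAGE_KERNEL);
		return ret;
	}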
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 75d8947a36d0c9aedd69118d1f14bf424005c7c2)
Signed-off-by: Peter Kalauskas <peskal@google.com>
Bug: 112488418
Change-Id: I572d0565de5aff94ebe0782eba9d34f9c9862060
Stricted [Wed, 28 Aug 2019 15:26:26 +0000 (15:26 +0000)]
gud: fix mobicore initialization
* backported from s9
Change-Id: I48476e899495490ded64a9e173e3daa3c4cdafa0
Stricted [Mon, 26 Aug 2019 18:03:34 +0000 (18:03 +0000)]
video: mdnie: fix lux node permissions