By default we set DCACHE_REFERENCED and I_REFERENCED on any dentry or
inode we create. This is problematic because it means it takes two
trips through the LRU for any of these objects to be reclaimed,
regardless of their actual lifetime. With enough pressure from these
caches we can easily evict our working set from page cache with
single-use objects. So instead only set *REFERENCED if the object has
already been added to the LRU list. That means it has been touched
again since the access that first put it on the LRU, and so is more
likely to be worth keeping in cache.
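For context, the "two trips" come from the second-chance logic in the
LRU walkers (dentry_lru_isolate()/inode_lru_isolate()): the first time
the shrinker sees a referenced object it only clears the flag and
rotates the object back onto the list, so it can only be freed on a
later pass. The following userspace toy model (not kernel code, just an
illustration of that pattern) shows why an object born with the
REFERENCED bit set survives a full shrink pass even if it is never used
again:

/*
 * Toy model of the two-pass behaviour, not the real shrinker code.
 * An object created with the REFERENCED bit already set survives one
 * full pass over the LRU before it can be reclaimed, even if it is
 * never touched again.
 */
#include <stdbool.h>
#include <stdio.h>

struct obj {
	bool referenced;
	bool reclaimed;
};

/* One shrinker pass over a single object: referenced objects get a
 * second chance and are "rotated" instead of freed. */
static void shrink_pass(struct obj *o)
{
	if (o->reclaimed)
		return;
	if (o->referenced) {
		o->referenced = false;	/* clear flag, keep the object */
		return;
	}
	o->reclaimed = true;		/* actually free it */
}

int main(void)
{
	struct obj old_way = { .referenced = true };	/* *REFERENCED at creation */
	struct obj new_way = { .referenced = false };	/* behaviour after this patch */

	for (int pass = 1; pass <= 2; pass++) {
		shrink_pass(&old_way);
		shrink_pass(&new_way);
		printf("pass %d: old=%s new=%s\n", pass,
		       old_way.reclaimed ? "reclaimed" : "kept",
		       new_way.reclaimed ? "reclaimed" : "kept");
	}
	return 0;
}

With the old behaviour the single-use object is only reclaimed on the
second pass; with the patch it can go on the first.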
To illustrate this issue I wrote the scripts at
https://github.com/josefbacik/debug-scripts/tree/master/cache-pressure
and ran them on my test box: a single-socket 4-core CPU with 16GiB of
RAM, testing on an Intel 2TiB NVMe drive. The cache-pressure.sh script
creates a new file system and writes two 6.5GiB files so that 13GiB of
the 16GiB of RAM is taken up by page cache. It then runs a test program
that reads these two files in a loop and tracks how many bytes it has
to read from disk on each pass; on an ideal system with no pressure it
would read 0 bytes indefinitely. The script also starts an fs_mark job
that creates a ton of 0-length files, putting pressure on the system
with slab-only allocations. On exit the script prints how many bytes
the read-file program had to read in total.
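The measurement loop amounts to something like the sketch below. This
is not the actual read-file program from the repository above, just an
illustrative stand-in that assumes the read_bytes counter in
/proc/self/io is a reasonable proxy for data that had to come back from
storage rather than from page cache:

/*
 * Illustrative sketch only -- not the actual read-file program from
 * the cache-pressure scripts. It re-reads a file in a loop and samples
 * "read_bytes" from /proc/self/io around each pass; that counter only
 * grows when data has to come from storage, so it approximates how
 * much of the file fell out of page cache between passes.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static long long storage_read_bytes(void)
{
	char line[128];
	long long val = 0;
	FILE *f = fopen("/proc/self/io", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "read_bytes: %lld", &val) == 1)
			break;
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	static char buf[1 << 20];
	long long total = 0;
	ssize_t n;

	if (argc < 2)
		return 1;

	for (int pass = 0; pass < 100; pass++) {
		long long before, after;
		int fd = open(argv[1], O_RDONLY);

		if (fd < 0)
			return 1;
		before = storage_read_bytes();
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			;	/* cycle the file through page cache */
		after = storage_read_bytes();
		close(fd);
		total += after - before;
	}
	printf("%s: total read during loops\n%lld\n", argv[1], total);
	return 0;
}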
The results are as follows:
Without patch:
/mnt/btrfs-test/reads/file1: total read during loops
27262988288
/mnt/btrfs-test/reads/file2: total read during loops
27262976000
With patch:
/mnt/btrfs-test/reads/file2: total read during loops
18640457728
/mnt/btrfs-test/reads/file1: total read during loops
9565376512
This patch results in a roughly 50% reduction in the number of pages
evicted from our working set.
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
{
if (unlikely(!(dentry->d_flags & DCACHE_LRU_LIST)))
d_lru_add(dentry);
+ else if (unlikely(!(dentry->d_flags & DCACHE_REFERENCED)))
+ dentry->d_flags |= DCACHE_REFERENCED;
}
/**
goto kill_it;
}
- if (!(dentry->d_flags & DCACHE_REFERENCED))
- dentry->d_flags |= DCACHE_REFERENCED;
dentry_lru_add(dentry);
dentry->d_lockref.count--;
{
if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru))
this_cpu_inc(nr_unused);
+ else
+ inode->i_state |= I_REFERENCED;
}
/*
drop = generic_drop_inode(inode);
if (!drop && (sb->s_flags & MS_ACTIVE)) {
- inode->i_state |= I_REFERENCED;
inode_add_lru(inode);
spin_unlock(&inode->i_lock);
return;