[Devel,RHEL7,COMMIT] mm: Count list_lru_one::nr_items lockless

Submitted by Konstantin Khorenko on Aug. 31, 2017, 3:25 p.m.


Message ID 201708311525.v7VFPKCp018341@finist_ce7.work
Series "Make count list_lru_one::nr_items lockless"

Commit Message

The commit is pushed to "branch-rh7-3.10.0-514.26.1.vz7.35.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-514.26.1.vz7.35.5
commit 6fd774dbf6fd05eca8cfa192753bf35dac694368
Author: Kirill Tkhai <ktkhai@virtuozzo.com>
Date:   Thu Aug 31 18:25:20 2017 +0300

    mm: Count list_lru_one::nr_items lockless
    During slab reclaim for a memcg, shrink_slab() iterates
    over all registered shrinkers in the system and tries to count
    and consume objects belonging to the cgroup. Under memory
    pressure this behaves badly: I observe high system time, with
    many processes spending time in list_lru_count_one():
    0,50%  nixstatsagent  [kernel.vmlinux]  [k] _raw_spin_lock                [k] _raw_spin_lock
    0,26%  nixstatsagent  [kernel.vmlinux]  [k] shrink_slab                   [k] shrink_slab
    0,23%  nixstatsagent  [kernel.vmlinux]  [k] super_cache_count             [k] super_cache_count
    0,15%  nixstatsagent  [kernel.vmlinux]  [k] __list_lru_count_one.isra.2   [k] _raw_spin_lock
    0,15%  nixstatsagent  [kernel.vmlinux]  [k] list_lru_count_one            [k] __list_lru_count_one.isra.2
    0,94%  mysqld         [kernel.vmlinux]  [k] _raw_spin_lock                [k] _raw_spin_lock
    0,57%  mysqld         [kernel.vmlinux]  [k] shrink_slab                   [k] shrink_slab
    0,51%  mysqld         [kernel.vmlinux]  [k] super_cache_count             [k] super_cache_count
    0,32%  mysqld         [kernel.vmlinux]  [k] __list_lru_count_one.isra.2   [k] _raw_spin_lock
    0,32%  mysqld         [kernel.vmlinux]  [k] list_lru_count_one            [k] __list_lru_count_one.isra.2
    0,73%  sshd           [kernel.vmlinux]  [k] _raw_spin_lock                [k] _raw_spin_lock
    0,35%  sshd           [kernel.vmlinux]  [k] shrink_slab                   [k] shrink_slab
    0,32%  sshd           [kernel.vmlinux]  [k] super_cache_count             [k] super_cache_count
    0,21%  sshd           [kernel.vmlinux]  [k] __list_lru_count_one.isra.2   [k] _raw_spin_lock
    0,21%  sshd           [kernel.vmlinux]  [k] list_lru_count_one            [k] __list_lru_count_one.isra.2
    This patch aims to make super_cache_count() more efficient. It
    makes __list_lru_count_one() count nr_items locklessly, to
    minimize the overhead introduced by the locking operation and
    to make parallel reclaims more scalable.
    The lock is no longer taken in shrinker::count_objects();
    it is taken only for the real shrink, by the thread that
    performs it.
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
 mm/list_lru.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


diff --git a/mm/list_lru.c b/mm/list_lru.c
index b166eff..5adc6621 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -160,10 +160,10 @@  static unsigned long __list_lru_count_one(struct list_lru *lru,
 	struct list_lru_one *l;
 	unsigned long count;
-	spin_lock(&nlru->lock);
+	rcu_read_lock();
 	l = list_lru_from_memcg_idx(nlru, memcg_idx);
 	count = l->nr_items;
-	spin_unlock(&nlru->lock);
+	rcu_read_unlock();
 	return count;