[RHEL7,COMMIT] ms/mm, memcg: assign memcg-aware shrinkers bitmap to memcg

Submitted by Konstantin Khorenko on Sept. 5, 2018, 9:37 a.m.


Message ID 201809050937.w859bADL010044@finist_ce7.work
State New
Series "Port "Improve shrink_slab() scalability" patchset"

Commit Message

The commit is pushed to "branch-rh7-3.10.0-862.11.6.vz7.71.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-862.11.6.vz7.71.8
------>
commit 1a040d20f7608bd5cb185029981cffb850c0d16c
Author: Kirill Tkhai <ktkhai@virtuozzo.com>
Date:   Wed Sep 5 12:37:10 2018 +0300

    ms/mm, memcg: assign memcg-aware shrinkers bitmap to memcg
    
    ms commit 0a4465d34028
    
    Imagine a big node with many cpus, memory cgroups and containers.  Say
    we have 200 containers, each with 10 mounts and 10 cgroups.  No
    container's tasks ever touch another container's mounts.  If there is
    intensive page writing and global reclaim happens, a writing task has
    to iterate over all memcgs to shrink slab before it is able to get to
    shrink_page_list().
    
    Iterating over all the memcg slabs is very expensive: the task has to
    visit 200 * 10 = 2000 shrinkers for every memcg, and since there are
    2000 memcgs, that totals 2000 * 2000 = 4000000 calls.
    
    So, the shrinker makes 4 million do_shrink_slab() calls just to try to
    isolate SWAP_CLUSTER_MAX pages in one of the actively writing memcgs
    via shrink_page_list().  I've observed a node spending almost 100% of
    its time in the kernel, making useless iterations over already shrunk
    slabs.
    
    This patch adds a bitmap of memcg-aware shrinkers to every memcg.  The
    size of the bitmap depends on bitmap_nr_ids, and over the memcg's
    lifetime it is kept large enough to fit bitmap_nr_ids shrinkers.  Every
    bit in the map corresponds to one shrinker id.
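
    For illustration only (a sketch, not code from this patch; "pn" stands
    in for the per-node info the patch adds shrinker_map to): a set bit
    means "this shrinker may have objects charged to this memcg on this
    node", and the plain <linux/bitops.h> helpers operate on map[] directly:

        struct memcg_shrinker_map *map;
        bool worth_calling;

        rcu_read_lock();
        map = rcu_dereference(pn->shrinker_map);
        set_bit(shrinker->id, map->map);        /* mark: objects charged */
        worth_calling = test_bit(shrinker->id, map->map);
        rcu_read_unlock();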
    
    Next patches will keep a bit set only while the corresponding memcg
    really has objects charged for that shrinker.  This will allow
    shrink_slab() to improve its performance significantly.  See the last
    patch for the numbers.
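
    As a rough sketch of where the series ends up (see the "iterate only
    over charged shrinkers during memcg shrink_slab()" patch below; this
    mirrors the final upstream loop, though the backported signatures may
    differ), the memcg shrink path then visits only the set bits instead
    of every registered shrinker:

        for_each_set_bit(i, map->map, shrinker_nr_max) {
                struct shrinker *shrinker;

                shrinker = idr_find(&shrinker_idr, i);
                if (shrinker)
                        freed += do_shrink_slab(&sc, shrinker, priority);
        }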
    
    [ktkhai@virtuozzo.com: v9]
    Link: http://lkml.kernel.org/r/153112549031.4097.3576147070498769979.stgit@localhost.localdomain
    [ktkhai@virtuozzo.com: add comment to mem_cgroup_css_online()]
    Link: http://lkml.kernel.org/r/521f9e5f-c436-b388-fe83-4dc870bfb489@virtuozzo.com
    Link: http://lkml.kernel.org/r/153063056619.1818.12550500883688681076.stgit@localhost.localdomain
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    
    Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
    Tested-by: Shakeel Butt <shakeelb@google.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Cc: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guenter Roeck <linux@roeck-us.net>
    Cc: "Huang, Ying" <ying.huang@intel.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Josef Bacik <jbacik@fb.com>
    Cc: Li RongQing <lirongqing@baidu.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Matthias Kaehlcke <mka@chromium.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Philippe Ombredanne <pombredanne@nexb.com>
    Cc: Roman Gushchin <guro@fb.com>
    Cc: Sahitya Tummala <stummala@codeaurora.org>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Waiman Long <longman@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    
    =====================
    Patchset description:
    
    Port "Improve shrink_slab() scalability" patchset
    
    https://jira.sw.ru/browse/PSBM-88027
    
    This is a backport of the patchset that improves the performance
    of overcommitted containers with many memcgs and mounts.
    The original series is in Linus' tree and landed in 4.19-rc1.
    
    Kirill Tkhai (12):
          mm: assign id to every memcg-aware shrinker
          mm/memcontrol.c: move up for_each_mem_cgroup{, _tree} defines
          mm, memcg: assign memcg-aware shrinkers bitmap to memcg
          fs: propagate shrinker::id to list_lru
          mm/list_lru.c: add memcg argument to list_lru_from_kmem()
          mm/list_lru: pass dst_memcg argument to memcg_drain_list_lru_node()
          mm/list_lru.c: pass lru argument to memcg_drain_list_lru_node()
          mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance
          mm/memcontrol.c: export mem_cgroup_is_root()
          mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()
          mm: add SHRINK_EMPTY shrinker methods return value
          mm/vmscan.c: clear shrinker bit if there are no objects related to memcg
    
    Vladimir Davydov (1):
          mm/vmscan.c: generalize shrink_slab() calls in shrink_node()
---
 include/linux/memcontrol.h |  10 ++++
 mm/memcontrol.c            | 131 +++++++++++++++++++++++++++++++++++++++++++++
 mm/vmscan.c                |   8 ++-
 3 files changed, 148 insertions(+), 1 deletion(-)


diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 995b9166c904..4d881adf12b5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -45,6 +45,15 @@ struct mem_cgroup_reclaim_cookie {
 	unsigned int generation;
 };
 
+/*
+ * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
+ * which have elements charged to this memcg.
+ */
+struct memcg_shrinker_map {
+	struct rcu_head rcu;
+	unsigned long map[0];
+};
+
 /*
  * Reclaim flags for mem_cgroup_hierarchical_reclaim
  */
@@ -623,6 +632,7 @@ static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
 		return NULL;
 	return __mem_cgroup_from_kmem(ptr);
 }
+extern int memcg_expand_shrinker_maps(int new_id);
 #else
 #define for_each_memcg_cache_index(_idx)	\
 	for (; NULL; )
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b07df9b59fd2..6c8b664cef06 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -213,6 +213,9 @@ struct mem_cgroup_per_zone {
 
 struct mem_cgroup_per_node {
 	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
+#ifdef CONFIG_MEMCG_KMEM
+	struct memcg_shrinker_map __rcu	*shrinker_map;
+#endif
 };
 
 struct mem_cgroup_lru_info {
@@ -421,6 +424,12 @@ enum {
 	KMEM_ACCOUNTED_DEAD, /* dead memcg with pending kmem charges */
 };
 
+static struct mem_cgroup_per_node *
+mem_cgroup_nodeinfo(struct mem_cgroup *memcg, int nid)
+{
+	return memcg->info.nodeinfo[nid];
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 bool memcg_kmem_is_active(struct mem_cgroup *memcg)
 {
@@ -707,10 +716,123 @@ static void disarm_kmem_keys(struct mem_cgroup *memcg)
 	 */
 	WARN_ON(page_counter_read(&memcg->kmem));
 }
+
+static int memcg_shrinker_map_size;
+static DEFINE_MUTEX(memcg_shrinker_map_mutex);
+
+static void memcg_free_shrinker_map_rcu(struct rcu_head *head)
+{
+	kvfree(container_of(head, struct memcg_shrinker_map, rcu));
+}
+
+static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
+					 int size, int old_size)
+{
+	struct memcg_shrinker_map *new, *old;
+	int nid;
+
+	lockdep_assert_held(&memcg_shrinker_map_mutex);
+
+	for_each_node(nid) {
+		old = rcu_dereference_protected(
+			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+		/* Not yet online memcg */
+		if (!old)
+			return 0;
+
+		new = kvmalloc(sizeof(*new) + size, GFP_KERNEL);
+		if (!new)
+			return -ENOMEM;
+
+		/* Set all old bits, clear all new bits */
+		memset(new->map, (int)0xff, old_size);
+		memset((void *)new->map + old_size, 0, size - old_size);
+
+		rcu_assign_pointer(memcg->info.nodeinfo[nid]->shrinker_map, new);
+		call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
+	}
+
+	return 0;
+}
+
+static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_per_node *pn;
+	struct memcg_shrinker_map *map;
+	int nid;
+
+	if (mem_cgroup_is_root(memcg))
+		return;
+
+	for_each_node(nid) {
+		pn = mem_cgroup_nodeinfo(memcg, nid);
+		map = rcu_dereference_protected(pn->shrinker_map, true);
+		if (map)
+			kvfree(map);
+		rcu_assign_pointer(pn->shrinker_map, NULL);
+	}
+}
+
+static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+	struct memcg_shrinker_map *map;
+	int nid, size, ret = 0;
+
+	if (mem_cgroup_is_root(memcg))
+		return 0;
+
+	mutex_lock(&memcg_shrinker_map_mutex);
+	size = memcg_shrinker_map_size;
+	for_each_node(nid) {
+		map = kvzalloc(sizeof(*map) + size, GFP_KERNEL);
+		if (!map) {
+			memcg_free_shrinker_maps(memcg);
+			ret = -ENOMEM;
+			break;
+		}
+		rcu_assign_pointer(memcg->info.nodeinfo[nid]->shrinker_map, map);
+	}
+	mutex_unlock(&memcg_shrinker_map_mutex);
+
+	return ret;
+}
+
+int memcg_expand_shrinker_maps(int new_id)
+{
+	int size, old_size, ret = 0;
+	struct mem_cgroup *memcg;
+
+	size = DIV_ROUND_UP(new_id + 1, BITS_PER_LONG) * sizeof(unsigned long);
+	old_size = memcg_shrinker_map_size;
+	if (size <= old_size)
+		return 0;
+
+	mutex_lock(&memcg_shrinker_map_mutex);
+	if (!root_mem_cgroup)
+		goto unlock;
+
+	for_each_mem_cgroup(memcg) {
+		if (mem_cgroup_is_root(memcg))
+			continue;
+		ret = memcg_expand_one_shrinker_map(memcg, size, old_size);
+		if (ret)
+			goto unlock;
+	}
+unlock:
+	if (!ret)
+		memcg_shrinker_map_size = size;
+	mutex_unlock(&memcg_shrinker_map_mutex);
+	return ret;
+}
 #else
 static void disarm_kmem_keys(struct mem_cgroup *memcg)
 {
 }
+static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
+{
+	return 0;
+}
+static void memcg_free_shrinker_maps(struct mem_cgroup *memcg) { }
 #endif /* CONFIG_MEMCG_KMEM */
 
 static void disarm_static_keys(struct mem_cgroup *memcg)
@@ -6159,6 +6281,14 @@ mem_cgroup_css_online(struct cgroup *cont)
 	}
 	mutex_unlock(&memcg_create_mutex);
 
+	/*
+	 * A memcg must be visible for memcg_expand_shrinker_maps()
+	 * by the time the maps are allocated. So, we allocate maps
+	 * here, when for_each_mem_cgroup() can't skip it.
+	 */
+	if (memcg_alloc_shrinker_maps(memcg))
+		return -ENOMEM;
+
 	return memcg_init_kmem(memcg, &mem_cgroup_subsys);
 }
 
@@ -6278,6 +6408,7 @@ static void mem_cgroup_css_free(struct cgroup *cont)
 	mem_cgroup_reparent_charges(memcg);
 
 	memcg_destroy_kmem(memcg);
+	memcg_free_shrinker_maps(memcg);
 	__mem_cgroup_free(memcg);
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 418154200a76..a0ed282035b9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -170,9 +170,15 @@ static int register_memcg_shrinker(struct shrinker *shrinker)
 	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
 	if (id < 0)
 		goto unlock;
+	if (id >= shrinker_nr_max) {
+		if (memcg_expand_shrinker_maps(id)) {
+			idr_remove(&shrinker_idr, id);
+			goto unlock;
+		}
 
-	if (id >= shrinker_nr_max)
 		shrinker_nr_max = id + 1;
+	}
+
 	shrinker->id = id;
 	ret = 0;
 unlock:
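
A note on map sizing: memcg_expand_shrinker_maps() rounds the map up to
whole unsigned longs, so on a 64-bit kernel:

	/* new_id =  0..63: DIV_ROUND_UP(64, 64) * 8 =  8 bytes per node */
	/* new_id = 64    : DIV_ROUND_UP(65, 64) * 8 = 16 bytes per node */

The maps therefore grow only when a new shrinker id crosses a
BITS_PER_LONG boundary, and memcg_shrinker_map_size is only updated under
memcg_shrinker_map_mutex, so concurrent registrations see a consistent
size.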