[RHEL7,COMMIT] ms/mm/vmscan.c: generalize shrink_slab() calls in shrink_node()

Submitted by Konstantin Khorenko on Sept. 5, 2018, 9:37 a.m.


Message ID 201809050937.w859bEbX010449@finist_ce7.work
State: New
Series "Port "Improve shrink_slab() scalability" patchset"

Commit Message

Konstantin Khorenko Sept. 5, 2018, 9:37 a.m.
The commit is pushed to "branch-rh7-3.10.0-862.11.6.vz7.71.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-862.11.6.vz7.71.8
commit 4f28932735fbc476a6f5b1e667da7690ad8dec44
Author: Vladimir Davydov <vdavydov.dev@gmail.com>
Date:   Wed Sep 5 12:37:14 2018 +0300

    ms/mm/vmscan.c: generalize shrink_slab() calls in shrink_node()
    ms commit aeed1d325d42
    The patch makes shrink_slab() be called for root_mem_cgroup in the same
    way as it is called for the rest of the cgroups.  This simplifies the
    logic and improves readability.
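    [Illustration only, not part of the commit or the patch: a minimal
    userspace C sketch of the control-flow change, with memcg_iter() and
    shrink_slab_sketch() as hypothetical stand-ins for mem_cgroup_iter()
    and shrink_slab().  Before the patch, the root cgroup was handled by a
    separate shrink_slab(..., NULL, ...) call; after it, the iteration
    simply starts at the root and every cgroup goes through the same call.]

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical stand-in for the memcg tree: three cgroups, root first. */
    struct mem_cgroup { const char *name; };

    static struct mem_cgroup cgroups[] = {
        { "root_mem_cgroup" }, { "memcg_A" }, { "memcg_B" },
    };

    /* Rough analogue of mem_cgroup_iter(NULL, prev, NULL): NULL starts at root. */
    static struct mem_cgroup *memcg_iter(struct mem_cgroup *prev)
    {
        size_t n = sizeof(cgroups) / sizeof(cgroups[0]);

        if (!prev)
            return &cgroups[0];
        if ((size_t)(prev - cgroups) + 1 < n)
            return prev + 1;
        return NULL;
    }

    /* Rough analogue of shrink_slab(gfp, nid, memcg, priority, ...). */
    static void shrink_slab_sketch(struct mem_cgroup *memcg)
    {
        printf("shrink_slab() for %s\n", memcg->name);
    }

    int main(void)
    {
        /* After the patch: one uniform loop, no special case for the root memcg. */
        struct mem_cgroup *memcg = memcg_iter(NULL);

        do {
            shrink_slab_sketch(memcg);
        } while ((memcg = memcg_iter(memcg)) != NULL);

        return 0;
    }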
    [ktkhai@virtuozzo.com: wrote changelog]
    Link: http://lkml.kernel.org/r/153063068338.1818.11496084754797453962.stgit@localhost.localdomain
    Signed-off-by: Vladimir Davydov <vdavydov.dev@gmail.com>
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    Tested-by: Shakeel Butt <shakeelb@google.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Cc: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guenter Roeck <linux@roeck-us.net>
    Cc: "Huang, Ying" <ying.huang@intel.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Josef Bacik <jbacik@fb.com>
    Cc: Li RongQing <lirongqing@baidu.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Matthias Kaehlcke <mka@chromium.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Philippe Ombredanne <pombredanne@nexb.com>
    Cc: Roman Gushchin <guro@fb.com>
    Cc: Sahitya Tummala <stummala@codeaurora.org>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Waiman Long <longman@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    Patchset description:
    Port "Improve shrink_slab() scalability" patchset
    This is a backport of the patchset improving the performance of
    overcommitted containers with many memcgs and mounts (a rough sketch of
    the core idea follows the patch list below).
    The original set is in Linus' tree and went into 4.19-rc1.
    Kirill Tkhai (12):
          mm: assign id to every memcg-aware shrinker
          mm/memcontrol.c: move up for_each_mem_cgroup{, _tree} defines
          mm, memcg: assign memcg-aware shrinkers bitmap to memcg
          fs: propagate shrinker::id to list_lru
          mm/list_lru.c: add memcg argument to list_lru_from_kmem()
          mm/list_lru: pass dst_memcg argument to memcg_drain_list_lru_node()
          mm/list_lru.c: pass lru argument to memcg_drain_list_lru_node()
          mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance
          mm/memcontrol.c: export mem_cgroup_is_root()
          mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()
          mm: add SHRINK_EMPTY shrinker methods return value
          mm/vmscan.c: clear shrinker bit if there are no objects related to memcg
    Vladimir Davydov (1):
          mm/vmscan.c: generalize shrink_slab() calls in shrink_node()
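    [Illustration only, not part of the patchset: a minimal userspace C
    sketch of the core idea behind the series, with NR_SHRINKERS,
    mark_shrinker_charged() and shrink_slab_memcg_sketch() as hypothetical
    names.  Each memcg keeps a bitmap indexed by shrinker id; a bit is set
    when the corresponding list_lru gets its first object for that memcg,
    and memcg slab reclaim then skips shrinkers whose bit is clear instead
    of calling every registered shrinker.]

    #include <stdio.h>
    #include <stdint.h>

    #define NR_SHRINKERS 64  /* hypothetical small fixed pool of shrinker ids */

    struct memcg_sketch {
        uint64_t shrinker_map;  /* bit i set => shrinker i may have objects here */
    };

    /* Called when a list_lru gets its first item charged to this memcg. */
    static void mark_shrinker_charged(struct memcg_sketch *memcg, int shrinker_id)
    {
        memcg->shrinker_map |= UINT64_C(1) << shrinker_id;
    }

    /* Walk only the shrinkers whose bit is set, skipping the empty ones. */
    static void shrink_slab_memcg_sketch(struct memcg_sketch *memcg)
    {
        int id;

        for (id = 0; id < NR_SHRINKERS; id++) {
            if (!(memcg->shrinker_map & (UINT64_C(1) << id)))
                continue;  /* no charged objects: do not call this shrinker */
            printf("scan shrinker %d\n", id);
        }
    }

    int main(void)
    {
        struct memcg_sketch memcg = { 0 };

        mark_shrinker_charged(&memcg, 3);
        mark_shrinker_charged(&memcg, 42);
        shrink_slab_memcg_sketch(&memcg);  /* visits only ids 3 and 42 */
        return 0;
    }

    [With many memcgs and mounts, this turns each memcg slab-reclaim pass
    from "all registered shrinkers" into "only shrinkers that actually hold
    objects for this memcg", which is where the scalability gain comes from.]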
 mm/vmscan.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)


diff --git a/mm/vmscan.c b/mm/vmscan.c
index df792f5444f7..bd2d62dabdd9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -517,7 +517,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	if (unlikely(test_tsk_thread_flag(current, TIF_MEMDIE)))
 		return 0;
 
-	if (memcg && !mem_cgroup_is_root(memcg))
+	if (!mem_cgroup_is_root(memcg))
 		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
 
 	if (!down_read_trylock(&shrinker_rwsem)) {
@@ -539,9 +539,6 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 			.for_drop_caches = for_drop_caches,
 		};
 
-		if (!!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
-			continue;
-
 		if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
 			sc.nid = 0;
 
@@ -559,9 +556,10 @@ void drop_slab_node(int nid)
 	unsigned long freed;
 
 	do {
-		struct mem_cgroup *memcg = NULL;
 
 		freed = 0;
+		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
 			freed += shrink_slab(GFP_KERNEL, nid, memcg,
 					     0, true);
@@ -2517,7 +2515,7 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc,
 				zone_lru_pages += lru_pages;
 
-			if (memcg && is_classzone) {
+			if (is_classzone) {
 				shrink_slab(slab_gfp, zone_to_nid(zone),
 					    memcg, sc->priority, false);
 				if (reclaim_state) {
@@ -2545,10 +2543,6 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc,
 		} while ((memcg = mem_cgroup_iter(root, memcg, &reclaim)));
 
-		if (global_reclaim(sc) && is_classzone)
-			shrink_slab(slab_gfp, zone_to_nid(zone), NULL,
-				    sc->priority, false);
-
 		if (global_reclaim(sc)) {
 			/*
 			 * If reclaim is isolating dirty pages under writeback, it implies
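
[Illustration only, not part of the patch: a minimal userspace C sketch of
how the shrink_slab() entry point dispatches after the first hunk above,
with the is_root flag and the *_sketch() helpers as hypothetical stand-ins.
Every caller now passes a real cgroup, root included; non-root cgroups take
the memcg-aware path, and only the root cgroup walks the global shrinker
list.]

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct mem_cgroup. */
struct mem_cgroup {
    const char *name;
    bool is_root;
};

/* Rough analogue of mem_cgroup_is_root(). */
static bool mem_cgroup_is_root_sketch(const struct mem_cgroup *memcg)
{
    return memcg->is_root;
}

static unsigned long shrink_slab_memcg_sketch(const struct mem_cgroup *memcg)
{
    printf("memcg-aware path for %s\n", memcg->name);
    return 0;
}

static unsigned long shrink_slab_global_sketch(const struct mem_cgroup *memcg)
{
    printf("global shrinker list walk for %s\n", memcg->name);
    return 0;
}

/* After the patch: no NULL check, the root cgroup is just another argument. */
static unsigned long shrink_slab_sketch(const struct mem_cgroup *memcg)
{
    if (!mem_cgroup_is_root_sketch(memcg))
        return shrink_slab_memcg_sketch(memcg);
    return shrink_slab_global_sketch(memcg);
}

int main(void)
{
    struct mem_cgroup root = { "root_mem_cgroup", true };
    struct mem_cgroup child = { "memcg_A", false };

    shrink_slab_sketch(&root);   /* global path */
    shrink_slab_sketch(&child);  /* memcg-aware path */
    return 0;
}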