[rh7] mm: Move memcg_uncharge_kmem() to bottom of __memcg_uncharge_slab()

Submitted by Kirill Tkhai on May 7, 2020, 8:34 a.m.

Details

Message ID 158884042901.16077.13513680326491499063.stgit@localhost.localdomain
State New
Series "mm: Move memcg_uncharge_kmem() to bottom of __memcg_uncharge_slab()"

Commit Message

Kirill Tkhai May 7, 2020, 8:34 a.m.
memcg_uncharge_kmem() may potentially put the last css reference. After
that we can't touch the memcg's memory anymore, so call it only after the
per-cpu stat counters have been updated.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/memcontrol.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b4484567ce82..797cb8e6df6d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3574,7 +3574,6 @@  void __memcg_uncharge_slab(struct kmem_cache *s, unsigned int nr_pages)
 	VM_BUG_ON(is_root_cache(s));
 	memcg = s->memcg_params.memcg;
 
-	memcg_uncharge_kmem(memcg, nr_pages);
 	if (s->flags & SLAB_RECLAIM_ACCOUNT) {
 		page_counter_uncharge(&memcg->dcache, nr_pages);
 		idx = MEM_CGROUP_STAT_SLAB_RECLAIMABLE;
@@ -3583,6 +3582,7 @@  void __memcg_uncharge_slab(struct kmem_cache *s, unsigned int nr_pages)
 		idx = MEM_CGROUP_STAT_SLAB_UNRECLAIMABLE;
 		this_cpu_sub(memcg->stat->count[idx], nr_pages);
 	}
+	memcg_uncharge_kmem(memcg, nr_pages);
 }
 
 /*
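
For context, here is a minimal userspace C sketch (not kernel code) of the bug
class this reordering avoids: once the call that may drop the last reference
has run, the object may already be freed, so every access to its fields has to
happen before it. The names used below (obj, obj_put, obj_uncharge,
uncharge_slab_fixed) are illustrative stand-ins for the memcg, css_put(),
memcg_uncharge_kmem() and __memcg_uncharge_slab() roles, not the actual kernel
implementation.

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcount;
	long stat;		/* stands in for the memcg stat counters */
};

/* Drop a reference; may free the object, after which it must not be touched. */
static void obj_put(struct obj *o)
{
	if (--o->refcount == 0) {
		printf("last reference dropped, freeing object\n");
		free(o);
	}
}

/* Stands in for memcg_uncharge_kmem(): may put the last reference. */
static void obj_uncharge(struct obj *o, long nr)
{
	(void)nr;		/* charge bookkeeping omitted in this sketch */
	obj_put(o);
}

static void uncharge_slab_fixed(struct obj *o, long nr)
{
	/* Touch the object's memory first ... */
	o->stat -= nr;
	/* ... and only then make the call that may drop the last reference,
	 * mirroring the ordering the patch establishes. */
	obj_uncharge(o, nr);
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	o->refcount = 1;
	o->stat = 10;
	uncharge_slab_fixed(o, 10);	/* safe ordering, as in the patch */
	return 0;
}

Doing the two steps in the opposite order, as the code did before the patch,
would read o->stat after the object may already have been freed.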