[rh7,v5,7/9] mm/mem_cgroup_iter: Don't bother saving/checking 'dead_count' anymore

Submitted by Konstantin Khorenko on Feb. 26, 2021, 2:26 p.m.

Details

Message ID: 20210226142605.7789-8-khorenko@virtuozzo.com
State: New
Series: "mm/mem_cgroup_iter: Reduce the number of iterator restarts upon cgroup removals"

Commit Message

As mem_cgroup_iter_invalidate() has been enhanced to NULL-ify
'last_visited' if it stored a dying cgroup, we can be sure
iter->last_visited always contains either a valid memcg pointer
or NULL.

So just drop the extra check of iter->last_dead_count vs
root->dead_count; it is not needed anymore.
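
To make the protocol change concrete, below is a minimal user-space
C11 model of it (a sketch only: iter_state, load_position_old/new and
invalidate are invented names, and C11 atomics stand in for the
kernel's rcu_dereference()/rcu_assign_pointer() and smp_rmb()/smp_wmb()
primitives):

#include <stdatomic.h>
#include <stddef.h>

struct memcg;				/* stand-in for struct mem_cgroup */

struct iter_state {
	_Atomic(struct memcg *) last_visited;
	atomic_int last_dead_count;	/* used by the old scheme only */
};

static atomic_int root_dead_count;	/* bumped on each cgroup removal */

/*
 * Old reader (what this patch deletes): last_visited might point to a
 * dead memcg, so it is trusted only if no removal happened since it
 * was stamped.  The acquire load stands in for smp_rmb().
 */
static struct memcg *load_position_old(struct iter_state *it)
{
	int seq = atomic_load_explicit(&root_dead_count,
				       memory_order_relaxed);

	if (atomic_load_explicit(&it->last_dead_count,
				 memory_order_acquire) != seq)
		return NULL;
	return atomic_load_explicit(&it->last_visited,
				    memory_order_relaxed);
}

/*
 * New invalidation (added earlier in this series): when a memcg dies,
 * every iterator caching it has its pointer cleared, so a non-NULL
 * last_visited is valid by construction.
 */
static void invalidate(struct iter_state *it)
{
	atomic_store_explicit(&it->last_visited, NULL,
			      memory_order_release);
}

/*
 * New reader: no generation check, just load the pointer
 * (rcu_dereference() in the real code).
 */
static struct memcg *load_position_new(struct iter_state *it)
{
	return atomic_load_explicit(&it->last_visited,
				    memory_order_acquire);
}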

Note: the patch is kept as small as possible for review simplicity.
A cleanup patch will follow.

https://jira.sw.ru/browse/PSBM-123655

Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/memcontrol.c | 6 ------
 1 file changed, 6 deletions(-)


diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c07fd18e814..3affd9931d77 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1653,9 +1653,6 @@ mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
 	 * offlining.  The RCU lock ensures the object won't be
 	 * released, tryget will fail if we lost the race.
 	 */
-	*sequence = atomic_read(&root->dead_count);
-	if (iter->last_dead_count == *sequence) {
-		smp_rmb();
 		position = rcu_dereference(iter->last_visited);
 
 		/*
@@ -1668,7 +1665,6 @@ mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
 		if (position && position != root &&
 				!css_tryget(&position->css))
 			position = NULL;
-	}
 	return position;
 }
 
 
@@ -1684,8 +1680,6 @@ static void mem_cgroup_iter_update(struct mem_cgroup_reclaim_iter *iter,
 	 * 'last_visited' is NULLed.
 	 */
 	rcu_assign_pointer(iter->last_visited, new_position);
-	smp_wmb();
-	iter->last_dead_count = sequence;
 
 	/* root reference counting symmetric to mem_cgroup_iter_load */
 	if (last_visited && last_visited != root)
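
After this patch, and once the announced cleanup re-indents the
surviving lines, the load path reduces to roughly the following (a
sketch, assuming the cleanup also drops the now-unused 'sequence'
parameter; the actual follow-up patch may differ):

static struct mem_cgroup *
mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
		     struct mem_cgroup *root)
{
	/* Caller holds rcu_read_lock(). */
	struct mem_cgroup *position;

	position = rcu_dereference(iter->last_visited);
	/*
	 * We cannot take a reference to root because we might race
	 * with root removal; returning NULL would then loop forever
	 * on the iterator user level.
	 */
	if (position && position != root &&
			!css_tryget(&position->css))
		position = NULL;
	return position;
}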