[rh7,v5,1/9] Revert "mm/memcg: fix css_tryget(), css_put() imbalance"

Submitted by Konstantin Khorenko on Feb. 26, 2021, 2:25 p.m.


Message ID 20210226142605.7789-2-khorenko@virtuozzo.com
State New
Series "mm/mem_cgroup_iter: Reduce the number of iterator restarts upon cgroup removals"

Commit Message

This reverts commit 5f351790d598bbf014441a86e7081972086de61b.

We are going to get rid of the seqlock 'iter->last_visited_lock',
so revert this patch first.

https://jira.sw.ru/browse/PSBM-123655
Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>

---
 mm/memcontrol.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)


diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e0a430908138..e5c5f64d6bb6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1581,7 +1581,7 @@  mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
 		     struct mem_cgroup *root,
 		     int *sequence)
 {
-	struct mem_cgroup *position;
+	struct mem_cgroup *position = NULL;
 	unsigned seq;
 
 	/*
@@ -1594,7 +1594,6 @@  mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
 	 */
 	*sequence = atomic_read(&root->dead_count);
 retry:
-	position = NULL;
 	seq = read_seqbegin(&iter->last_visited_lock);
 	if (iter->last_dead_count == *sequence) {
 		position = READ_ONCE(iter->last_visited);