[RH7,07/11] net: Move mutex_unlock() in cleanup_net() up

Submitted by Kirill Tkhai on May 27, 2020, 3:54 p.m.


Message ID 159059485350.408928.7638845700943725834.stgit@localhost.localdomain
State New
Series "Parallel per-net init/exit"

Commit Message

ms commit bcab1ddd9b2b

net_sem protects against changes to pernet_list, while
ops_free_list() only performs a plain kfree(), which
cannot race with other pernet_operations callbacks.

So we may release net_mutex earlier than before.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Andrei Vagin <avagin@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/core/net_namespace.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index e8aa434dbc16..15bfd1306141 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -531,11 +531,12 @@  static void cleanup_net(struct work_struct *work)
 	list_for_each_entry_reverse(ops, &pernet_list, list)
 		ops_exit_list(ops, &net_exit_list);
 
+	mutex_unlock(&net_mutex);
+
 	/* Free the net generic variables */
 	list_for_each_entry_reverse(ops, &pernet_list, list)
 		ops_free_list(ops, &net_exit_list);
 
-	mutex_unlock(&net_mutex);
 	up_read(&net_sem);
 
 	list_for_each_entry(net, &net_kill_list, cleanup_list) {