[rh7] mm: Allocate shrinker_map on appropriate NUMA node

Submitted by Kirill Tkhai on Feb. 11, 2020, 8:19 a.m.


Message ID 158140918665.816448.2358597088750064439.stgit@localhost.localdomain
State New
Series "mm: Allocate shrinker_map on appropriate NUMA node"

Commit Message

Although shrinker_map may be touched from any CPU
(e.g., a bit there may be set by a task running
anywhere), kswapd is always bound to a specific
node. So, allocate shrinker_map from the related
NUMA node to respect its NUMA locality.
This also follows the generic way we use to
allocate memcg's per-node data.

This goes to ms: https://patchwork.kernel.org/patch/11360241/

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/memcontrol.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0514f9b2b230..6e5b914bb53f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -747,7 +747,7 @@  static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 		if (!old)
 			return 0;
 
-		new = kvmalloc(sizeof(*new) + size, GFP_KERNEL);
+		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
 		if (!new)
 			return -ENOMEM;
 
@@ -792,7 +792,7 @@  static int memcg_alloc_shrinker_maps(struct mem_cgroup *memcg)
 	mutex_lock(&memcg_shrinker_map_mutex);
 	size = memcg_shrinker_map_size;
 	for_each_node(nid) {
-		map = kvzalloc(sizeof(*map) + size, GFP_KERNEL);
+		map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
 		if (!map) {
 			memcg_free_shrinker_maps(memcg);
 			ret = -ENOMEM;