* [RFC] mm/slub: Reduce memory consumption in extreme scenarios
@ 2023-03-07  8:28 Chen Jun
  2023-03-07 14:20 ` Hyeonggon Yoo
  0 siblings, 1 reply; 5+ messages in thread
From: Chen Jun @ 2023-03-07  8:28 UTC (permalink / raw)
  To: linux-kernel, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim,
	akpm, vbabka
  Cc: xuqiang36, chenjun102

If kmalloc_node() is called without __GFP_THISNODE and the requested
node A has no memory, SLUB allocates a slab page that does not belong
to node A and puts it on the partial list of
kmem_cache_node[page_to_nid(page)]. That page cannot be reused by the
next allocation for node A, because get_partial() returns NULL for it.
As a result, kmalloc_node() consumes more memory than necessary.
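
For context, the relevant pre-patch logic looks roughly like the sketch
below (simplified from mm/slub.c, with comments added for illustration;
not the exact code):

static void *get_partial(struct kmem_cache *s, int node,
			 struct partial_context *pc)
{
	void *object;
	int searchnode = node;

	if (node == NUMA_NO_NODE)
		searchnode = numa_mem_id();

	/* Only the requested node's partial list is searched here. */
	object = get_partial_node(s, get_node(s, searchnode), pc);

	/*
	 * When a specific node was requested, give up even without
	 * __GFP_THISNODE, so partial slabs that earlier fallback
	 * allocations left on other nodes are never reused.
	 */
	if (object || node != NUMA_NO_NODE)
		return object;

	return get_any_partial(s, pc);
}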

On qemu with 4 NUMA nodes, each with 1G of memory, a test module calls
kmalloc_node(196, 0xd20c0, 3) 5 * 1024 * 1024 times.

cat /proc/slabinfo shows:
kmalloc-256       4302317 15151808    256   32    2 : tunables..

The number of total objects is much larger than the number of active
objects.

After this patch, cat /proc/slabinfo shows:
kmalloc-256       5244950 5245088    256   32    2 : tunables..
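
A reproducer along these lines could look like the following sketch
(hypothetical test module; GFP_KERNEL is used as a stand-in for the
exact 0xd20c0 mask above, and node 3 is assumed to be memoryless):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static int __init kmalloc_node_test_init(void)
{
	unsigned long i;
	void *p;

	for (i = 0; i < 5UL * 1024 * 1024; i++) {
		/* Request memory from node 3 without __GFP_THISNODE. */
		p = kmalloc_node(196, GFP_KERNEL, 3);
		if (!p)
			break;
		/* Objects are deliberately kept so slabinfo reflects them. */
	}

	return 0;
}
module_init(kmalloc_node_test_init);

MODULE_LICENSE("GPL");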

Signed-off-by: Chen Jun <chenjun102@huawei.com>
---
 mm/slub.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 39327e98fce3..c0090a5de54e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2384,7 +2384,7 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context
 		searchnode = numa_mem_id();
 
 	object = get_partial_node(s, get_node(s, searchnode), pc);
-	if (object || node != NUMA_NO_NODE)
+	if (object || (node != NUMA_NO_NODE && (pc->flags & __GFP_THISNODE)))
 		return object;
 
 	return get_any_partial(s, pc);
@@ -3069,6 +3069,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	struct slab *slab;
 	unsigned long flags;
 	struct partial_context pc;
+	int try_thisnode = 1;
 
 	stat(s, ALLOC_SLOWPATH);
 
@@ -3181,8 +3182,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	}
 
 new_objects:
-
 	pc.flags = gfpflags;
+
+	/* Try to get page from specific node even if __GFP_THISNODE is not set */
+	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
+		pc.flags |= __GFP_THISNODE;
+
 	pc.slab = &slab;
 	pc.orig_size = orig_size;
 	freelist = get_partial(s, node, &pc);
@@ -3190,10 +3195,16 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto check_new_slab;
 
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab = new_slab(s, gfpflags, node);
+	slab = new_slab(s, pc.flags, node);
 	c = slub_get_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!slab)) {
+		/* Retry without forcing __GFP_THISNODE so any node can be used */
+		if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
+			try_thisnode = 0;
+			goto new_objects;
+		}
+
 		slab_out_of_memory(s, gfpflags, node);
 		return NULL;
 	}
-- 
2.17.1




Thread overview: 5+ messages
2023-03-07  8:28 [RFC] mm/slub: Reduce memory consumption in extreme scenarios Chen Jun
2023-03-07 14:20 ` Hyeonggon Yoo
2023-03-08  7:16   ` chenjun (AM)
2023-03-08 13:37     ` Hyeonggon Yoo
2023-03-09  2:15       ` chenjun (AM)
