linux-mm.kvack.org archive mirror
From: Hao Li <hao.li@linux.dev>
To: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Cc: Ming Lei <ming.lei@redhat.com>, Harry Yoo <harry.yoo@oracle.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@gentwo.org>,
	 David Rientjes <rientjes@google.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] slab: create barns for online memoryless nodes
Date: Thu, 19 Mar 2026 15:01:02 +0800	[thread overview]
Message-ID: <jwlqiygxlch5gq6g3vkwynrpndjwcoxwice2hmvim3tvdfubp4@je5ldr7pa5f6> (raw)
In-Reply-To: <4659c675-6949-4295-b385-1ab26921a975@kernel.org>

On Wed, Mar 18, 2026 at 01:11:58PM +0100, Vlastimil Babka (SUSE) wrote:
> On 3/18/26 10:27, Hao Li wrote:
> > On Wed, Mar 11, 2026 at 09:25:56AM +0100, Vlastimil Babka (SUSE) wrote:
> >> Ming Lei has reported [1] a performance regression due to replacing cpu
> >> (partial) slabs with sheaves. With slub stats enabled, a large amount of
> >> slowpath allocations were observed. The affected system has 8 online
> >> NUMA nodes but only 2 have memory.
> >> 
> >> For sheaves to work effectively on given cpu, its NUMA node has to have
> >> struct node_barn allocated. Those are currently only allocated on nodes
> >> with memory (N_MEMORY) where kmem_cache_node also exist as the goal is
> >> to cache only node-local objects. But in order to have good performance
> >> on a memoryless node, we need its barn to exist and use sheaves to cache
> >> non-local objects (as no local objects can exist anyway).
> >> 
> >> Therefore change the implementation to allocate barns on all online
> >> nodes, tracked in a new nodemask slab_barn_nodes. Also add a cpu hotplug
> >> callback as that's when a memoryless node can become online.
> >> 
> >> Change rcu_sheaf->node assignment to numa_node_id() so it's returned to
> >> the barn of the local cpu's (potentially memoryless) node, and not to
> >> the nearest node with memory anymore.
> >> 
> >> Reported-by: Ming Lei <ming.lei@redhat.com>
> >> Link: https://lore.kernel.org/all/aZ0SbIqaIkwoW2mB@fedora/ [1]
> >> Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> >> ---
> >>  mm/slub.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
> >>  1 file changed, 59 insertions(+), 4 deletions(-)
> >> 
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index 609a183f8533..d8496b37e364 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> > [...]
> >>  
> >>  	/*
> >> @@ -7597,7 +7648,7 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
> >>  	if (slab_state == DOWN || !cache_has_sheaves(s))
> >>  		return 1;
> >>  
> >> -	for_each_node_mask(node, slab_nodes) {
> >> +	for_each_node_mask(node, slab_barn_nodes) {
> >>  		struct node_barn *barn;
> >>  
> >>  		barn = kmalloc_node(sizeof(*barn), GFP_KERNEL, node);
> >> @@ -8250,6 +8301,7 @@ static int slab_mem_going_online_callback(int nid)
> >>  	 * and barn initialized for the new node.
> >>  	 */
> >>  	node_set(nid, slab_nodes);
> >> +	node_set(nid, slab_barn_nodes);
> > 
> > I had a somewhat related question here.
> > 
> > During memory hotplug, we call node_set() on slab_nodes when memory is brought
> > online, but we do not seem to call node_clear() when memory is taken offline. I
> > was wondering what the reasoning behind this is.
> 
> Probably nobody took up the task to implement the necessary teardown.
> 
> > That also made me wonder about a related case. If I am understanding this
> > correctly, even if all memory of a node has been offlined, slab_nodes would
> > still make it appear that the node has memory, even though in reality it no
> > longer does. If so, then in patch 3, the condition
> > "if (unlikely(!node_isset(numa_node, slab_nodes)))" in can_free_to_pcs() seems
> > like it would cause the object free path to skip sheaves.
> 
> Maybe the condition should be looking at N_MEMORY then?

Yes, that's what I was thinking too.
I feel that, at least for the current patchset, this is probably a reasonable
approach.

> 
> Also ideally we should be using N_NORMAL_MEMORY everywhere for slab_nodes.
> Oh we actually did, but gave that up in commit 1bf47d4195e45.

Thanks, I hadn't realized that node_clear had actually existed before.

> 
> Note in practice full memory offline of a node can only be achieved if it
> was all ZONE_MOVABLE and thus no slab allocations ever happened on it. But
> if it has only movable memory, it's practically memoryless for slab
> purposes.

That's a good point! I just realized that too.

> Maybe the condition should be looking at N_NORMAL_MEMORY then.
> That would cover the case when it became offline and also the case when it's
> online but with only movable memory?

Exactly; conceptually, N_NORMAL_MEMORY seems more precise than N_MEMORY. I took
a quick look through the code, though, and it seems that N_NORMAL_MEMORY hasn't
been fully handled in the hotplug code.

Given that, I think it makes sense to use N_MEMORY for now, and then switch to
N_NORMAL_MEMORY later once the handling there is improved.

> 
> I don't know if with CONFIG_HAVE_MEMORYLESS_NODES it's possible that
> numa_mem_id() (the closest node with memory) would be ZONE_MOVABLE only.
> Maybe let's hope not, and not adjust that part?
> 

I think that, in the CONFIG_HAVE_MEMORYLESS_NODES=y case, numa_mem_id() ends up
calling local_memory_node(), and the NUMA node it returns should be one that
can allocate slab memory. So the slab_node == numa_node check seems reasonable
to me.

So it seems that the issue being discussed here may only be specific to the
CONFIG_HAVE_MEMORYLESS_NODES=n case.

-- 
Thanks,
Hao



Thread overview: 22+ messages
2026-03-11  8:25 [PATCH 0/3] slab: support memoryless nodes with sheaves Vlastimil Babka (SUSE)
2026-03-11  8:25 ` [PATCH 1/3] slab: decouple pointer to barn from kmem_cache_node Vlastimil Babka (SUSE)
2026-03-13  9:27   ` Harry Yoo
2026-03-13  9:46     ` Vlastimil Babka (SUSE)
2026-03-13 11:48       ` Harry Yoo
2026-03-16 13:19         ` Vlastimil Babka (SUSE)
2026-03-11  8:25 ` [PATCH 2/3] slab: create barns for online memoryless nodes Vlastimil Babka (SUSE)
2026-03-16  3:25   ` Harry Yoo
2026-03-18  9:27   ` Hao Li
2026-03-18 12:11     ` Vlastimil Babka (SUSE)
2026-03-19  7:01       ` Hao Li [this message]
2026-03-19  9:56         ` Vlastimil Babka (SUSE)
2026-03-19 11:27           ` Hao Li
2026-03-19 12:25             ` Vlastimil Babka (SUSE)
2026-03-11  8:25 ` [PATCH 3/3] slab: free remote objects to sheaves on " Vlastimil Babka (SUSE)
2026-03-16  3:48   ` Harry Yoo
2026-03-11  9:49 ` [PATCH 0/3] slab: support memoryless nodes with sheaves Ming Lei
2026-03-11 17:22   ` Vlastimil Babka (SUSE)
2026-04-08 13:04     ` Jon Hunter
2026-04-08 14:06       ` Hao Li
2026-04-08 14:31       ` Harry Yoo (Oracle)
2026-03-16 13:33 ` Vlastimil Babka (SUSE)
