From: "David Hildenbrand (Red Hat)" <david@kernel.org>
To: Swaraj Gaikwad <swarajgaikwad1925@gmail.com>,
	Muchun Song <muchun.song@linux.dev>,
	Oscar Salvador <osalvador@suse.de>,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"open list:HUGETLB SUBSYSTEM" <linux-mm@kvack.org>,
	open list <linux-kernel@vger.kernel.org>
Cc: skhan@linuxfoundation.org, david.hunter.linux@gmail.com
Subject: Re: [RFC] hugetlb: add memory-hotplug notifier to only allocate for online nodes
Date: Thu, 6 Nov 2025 11:01:19 +0100	[thread overview]
Message-ID: <5c7c607e-079e-4650-be8d-6a1210730b57@kernel.org> (raw)
In-Reply-To: <20251106085645.13607-1-swarajgaikwad1925@gmail.com>

On 06.11.25 09:56, Swaraj Gaikwad wrote:
> This patch is an RFC proposing a change to the hugetlb cgroup subsystem’s
> css allocation function.
> 
> The existing hugetlb_cgroup_css_alloc() uses for_each_node() to allocate
> nodeinfo for all nodes, including those which are not online yet
> (or never will be). This can waste considerable memory on large-node systems.
> The documentation already lists this as a TODO.

We're talking about the

kzalloc_node(sizeof(struct hugetlb_cgroup_per_node), GFP_KERNEL, node_to_alloc);
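
For context, that call sits in a loop in hugetlb_cgroup_css_alloc() that
looks roughly like this (from memory, so details may differ); note that
offline nodes already fall back to NUMA_NO_NODE for the allocation placement:

for_each_node(node) {
        /* Set node_to_alloc to NUMA_NO_NODE for offline nodes. */
        int node_to_alloc =
                node_state(node, N_NORMAL_MEMORY) ? node : NUMA_NO_NODE;
        h_cgroup->nodeinfo[node] =
                kzalloc_node(sizeof(struct hugetlb_cgroup_per_node),
                             GFP_KERNEL, node_to_alloc);
        if (!h_cgroup->nodeinfo[node])
                goto fail;
}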

$ pahole mm/hugetlb_cgroup.o

struct hugetlb_cgroup_per_node {
         long unsigned int          usage[2];             /*     0    16 */

         /* size: 16, cachelines: 1, members: 1 */
         /* last cacheline: 16 bytes */
};

16 bytes on x86_64. So nobody should care here.

Of course, it depends on HUGE_MAX_HSTATE.

IIRC only powerpc goes crazy on that, with effectively 15 entries.

15 * 8 = 120, so ~128 bytes per node.

So with 1024 nodes we would be allocating 128 KiB.


And given that this is allocated for each cgroup (right?), I assume it can add up.

> 
> Proposed Change:
>      Introduce a memory hotplug notifier that listens for MEM_ONLINE
>      events. When a node becomes online, we call the same allocation function
>      but instead of for_each_node(), using for_each_online_node(). This means
>      memory is only allocated for nodes which are online, thus reducing waste.

We have a NODE_ADDING_FIRST_MEMORY event now; I'd assume that is more suitable?
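
Completely untested sketch of what I have in mind, assuming the node
notifier API (register_node_notifier() + struct node_notify) from the
recent node notifier series; hugetlb_cgroup_alloc_nodeinfo() is a
made-up helper:

static int hugetlb_cgroup_node_callback(struct notifier_block *self,
                                        unsigned long action, void *arg)
{
        struct node_notify *nn = arg;

        switch (action) {
        case NODE_ADDING_FIRST_MEMORY:
                /*
                 * Allocate the per-node data for the node that is about
                 * to get its first memory (made-up helper).
                 */
                if (hugetlb_cgroup_alloc_nodeinfo(nn->nid))
                        return notifier_from_errno(-ENOMEM);
                break;
        }
        return NOTIFY_OK;
}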

> 
> Feedback Requested:
>      - Where in the codebase (which file or section) is it most appropriate to
>        implement and register the memory hotplug notifier for this subsystem?

I'd assume you would have to register the notifier in hugetlb_cgroup_css_alloc()
and unregister it in hugetlb_cgroup_css_free().
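
For the lifecycle, something like the following (sketch only; you would
presumably want to register a single notifier once, e.g. when the root
cgroup is allocated, rather than one per cgroup):

static struct notifier_block hugetlb_cgroup_node_nb = {
        .notifier_call = hugetlb_cgroup_node_callback,
};

/* in hugetlb_cgroup_css_alloc(), for the root cgroup: */
register_node_notifier(&hugetlb_cgroup_node_nb);

/* in hugetlb_cgroup_css_free(): */
unregister_node_notifier(&hugetlb_cgroup_node_nb);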

>      - Are there best practices or patterns for handling the notifier lifecycle,
>        especially for unregistering during cgroup or subsystem teardown?

None that I can think of :)

>      - What are the standard methods or tools to test memory hotplug scenarios
>        for cgroups? Are there ways to reliably trigger node online/offline events
>        in a development environment?

You can use QEMU to hotplug memory (a pc-dimm device) to a CPU- and memory-less
node and then remove it again. If you disable automatic memory onlining, you
should be able to trigger this multiple times without any issues.
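
Untested sketch of such a setup (exact options may need tweaking):

# Node 1 starts out without CPUs and without memory; leave slots
# free for DIMM hotplug.
qemu-system-x86_64 ... -smp 2 -m 4G,slots=2,maxmem=8G \
        -object memory-backend-ram,id=m0,size=4G \
        -numa node,nodeid=0,cpus=0-1,memdev=m0 \
        -numa node,nodeid=1

# In the guest, disable automatic onlining of hotplugged memory:
echo offline > /sys/devices/system/memory/auto_online_blocks

# In the QEMU monitor, plug a DIMM into node 1 ...
(qemu) object_add memory-backend-ram,id=m1,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=m1,node=1

# ... and remove it again.
(qemu) device_del dimm1
(qemu) object_del m1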

>      - Are there existing test cases or utilities in the kernel tree that would help
>        to verify correct behavior of this change?

Don't think so.

>      - Any suggestions for implementation improvements or cleaner API usage?

I'd assume you'd want to look into NODE_ADDING_FIRST_MEMORY.

-- 
Cheers

David


