From: Mateusz Guzik <mjguzik@gmail.com>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: Re: [PATCH v2 0/5] fork: Page operation cleanups in the fork code
Date: Fri, 9 May 2025 13:16:40 +0200
Message-ID: <CAGudoHFvVs4AsoHmR6DwNxLYu=C-bQ=ph=5zYQ=tB32fySEDFA@mail.gmail.com>
In-Reply-To: <CACRpkdYGOat_N5gU2g--cdJJFsHE3JuL0HpJqmFkvHbG9dcJwQ@mail.gmail.com>
On Fri, May 9, 2025 at 8:58 AM Linus Walleij <linus.walleij@linaro.org> wrote:
> And when the allocated stacks are reused, as you say the NUMA node
> is completely ignored.
>
> The rationale is in commit ac496bf48d97f2503eaa353996a4dd5e4383eaf0.
>
> I wonder what would be the best way to make this better? Improve
> the cache or assume that the MM/VMM itself should be able to sort this
> out in an optimized way by now and put it to the test by simply
> deleting the cache and see what happens?
>
The cache can be trivially touched up to reduce the problem: take a
scope_guard(preempt) around allocation from and free to the cache, and
check that the requested domain (including no domain) matches the one
backing the cached stack. There is already existing code on the free
path to check which domains back the pages.
The real problem, expanding beyond just stack allocation, is that at
some point you are going to get a process initialized on one domain
but only ever running on another.
I was speculating this could be mostly fixed by asking the scheduler
which domain it thinks the process will run in, and then doing all the
allocations with that domain as the requested target. Sorting this out
is way beyond the scope of the immediate problem though.
That aside, the current caching mechanism is quite toy-ish, but I
don't have time right now to put together an RFC with something better.
--
Mateusz Guzik <mjguzik gmail.com>
Thread overview: 16+ messages
2025-05-07 12:46 Linus Walleij
2025-05-07 12:46 ` [PATCH v2 1/5] fork: Clean-up ifdef logic around stack allocation Linus Walleij
2025-05-13 6:29 ` Mike Rapoport
2025-05-07 12:46 ` [PATCH v2 2/5] fork: Clean-up naming of vm_stack/vm_struct variables in vmap stacks code Linus Walleij
2025-05-13 6:33 ` Mike Rapoport
2025-05-07 12:46 ` [PATCH v2 3/5] fork: Remove assumption that vm_area->nr_pages equals to THREAD_SIZE Linus Walleij
2025-05-07 16:56 ` Mateusz Guzik
2025-05-09 5:44 ` Linus Walleij
2025-05-07 12:46 ` [PATCH v2 4/5] fork: check charging success before zeroing stack Linus Walleij
2025-05-13 7:38 ` Mike Rapoport
2025-05-07 12:46 ` [PATCH v2 5/5] fork: zero vmap stack using clear_page() instead of memset() Linus Walleij
2025-05-07 16:51 ` Mateusz Guzik
2025-05-09 5:49 ` Linus Walleij
2025-05-07 17:03 ` [PATCH v2 0/5] fork: Page operation cleanups in the fork code Mateusz Guzik
2025-05-09 6:57 ` Linus Walleij
2025-05-09 11:16 ` Mateusz Guzik [this message]