From: "David Hildenbrand (Red Hat)" <david@kernel.org>
To: Jinjiang Tu <tujinjiang@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: akpm@linuxfoundation.org, ziy@nvidia.com,
matthew.brost@intel.com, joshua.hahnjy@gmail.com,
rakie.kim@sk.com, byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
mgorman@suse.de, linux-mm@kvack.org, wangkefeng.wang@huawei.com
Subject: Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
Date: Sun, 18 Jan 2026 19:45:11 +0100
Message-ID: <87e0523c-1fc6-42aa-8159-150fd94d5b62@kernel.org>
In-Reply-To: <7471b637-537c-40db-ade0-ad373d7085f7@huawei.com>
On 1/17/26 02:00, Jinjiang Tu wrote:
>
>> On 2026/1/16 18:58, David Hildenbrand (Red Hat) wrote:
>> On 1/16/26 07:43, Jinjiang Tu wrote:
>>>
>>> On 2026/1/16 2:12, Andrew Morton wrote:
>>>> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)"
>>>> <david@kernel.org> wrote:
>>>>
>>>>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>>>>> commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>>>>> bound nodes") adds the new flag MPOL_F_NUMA_BALANCING to enable NUMA
>>>>>> balancing for the MPOL_BIND memory policy.
>>>>>>
>>>>>> When the cpuset of a task changes, the task's mempolicy is rebound by
>>>>>> mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
>>>>>> MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
>>>>>> same whether MPOL_F_NUMA_BALANCING is set or not. So, when an
>>>>>> application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
>>>>>> both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
>>>>>> mempolicy.w.cpuset_mems_allowed should be set to the
>>>>>> cpuset_current_mems_allowed nodemask. However, in the current
>>>>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>>>>> causing mempolicy->w.user_nodemask to be incorrectly set to the
>>>>>> user-specified nodemask. Later, when the cpuset of the application
>>>>>> changes, mpol_rebind_nodemask() ends up rebinding based on the
>>>>>> user-specified nodemask rather than the cpuset_mems_allowed nodemask
>>>>>> as intended.
>>>>>>
>>>>>> To fix this, only set mempolicy->w.user_nodemask to the user-specified
>>>>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
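>>>>>>
>>>>>> For illustration, the check then ends up looking something like this
>>>>>> (a sketch of the idea, not the literal hunk; the helper lives in
>>>>>> mm/mempolicy.c):
>>>>>>
>>>>>> static bool mpol_store_user_nodemask(const struct mempolicy *pol)
>>>>>> {
>>>>>> 	/*
>>>>>> 	 * Only static/relative policies need the user-specified
>>>>>> 	 * nodemask remembered; MPOL_F_NUMA_BALANCING alone must not
>>>>>> 	 * cause it to be stored, since w.user_nodemask and
>>>>>> 	 * w.cpuset_mems_allowed share the same union.
>>>>>> 	 */
>>>>>> 	return pol->flags & (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES);
>>>>>> }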
>>>>>>
>>>>> ...
>>>>>
>>>>> I glanced over it and I think this is the right fix, thanks!
>>>>>
>>>>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>>>> Cool. I decided this was "not for backporting", but the description of
>>>> the userspace-visible runtime effects isn't very clear. Jinjiang, can
>>>> you please advise?
>>>
>>> I agree we shouldn't backport this patch. Users can only observe a
>>> task being bound to the wrong NUMA node after its cpuset changes.
>>>
>>> Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is
>>> in the root cpuset. Move the task to a cpuset whose
>>> cpuset.mems.effective is 0-1. The task should still be bound to NUMA
>>> node 1, but is wrongly bound to NUMA node 0.
>>
>> Do you think it's easy to write a reproducer to be run in a simple
>> QEMU VM with 4 nodes?
>
> I can reproduce with the following steps:
>
> 1. echo '+cpuset' > /sys/fs/cgroup/cgroup.subtree_control
> 2. mkdir /sys/fs/cgroup/test
> 3. ./reproducer &
> 4. cat /proc/$pid/numa_maps ($pid is the reproducer's PID); the task is
>    bound to NUMA node 1
> 5. echo $pid > /sys/fs/cgroup/test/cgroup.procs
> 6. cat /proc/$pid/numa_maps; the task is now wrongly bound to NUMA node 0.
>
> The reproducer code:
>
> /* Build with: gcc reproducer.c -o reproducer -lnuma */
> #include <numa.h>
> #include <numaif.h>
> #include <stdio.h>
> #include <stdlib.h>
>
> #ifndef MPOL_F_NUMA_BALANCING
> #define MPOL_F_NUMA_BALANCING (1 << 13) /* from <linux/mempolicy.h> */
> #endif
>
> int main(void)
> {
> 	struct bitmask *bmp;
> 	int ret;
>
> 	/* Bind this task's memory to NUMA node 1, with NUMA balancing
> 	 * enabled within the constraints of the binding. */
> 	bmp = numa_parse_nodestring("1");
> 	ret = set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
> 			    bmp->maskp, bmp->size + 1);
> 	if (ret < 0) {
> 		perror("Failed to call set_mempolicy");
> 		exit(-1);
> 	}
>
> 	/* Stay alive so the cpuset can be changed from another shell. */
> 	while (1)
> 		;
> 	return 0;
> }
>
> If I call set_mempolicy() without MPOL_F_NUMA_BALANCING, then after step 5 the task is still bound to NUMA node 1.
>
Great, can you incorporate that into an updated patch description?
And it might make sense to point to commit bda420b98505 ("numa
balancing: migrate on fault among multiple bound nodes"), where we document:

"we add MPOL_F_NUMA_BALANCING mode flag to set_mempolicy() when mode is
MPOL_BIND. With the flag specified, NUMA balancing will be enabled within
the thread to optimize the page placement within the constrains of the
specified memory binding policy."

The "within the constrains" part is the crucial bit here.
--
Cheers
David