* [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
@ 2025-12-23 11:05 Jinjiang Tu
2026-01-13 1:52 ` Jinjiang Tu
2026-01-15 17:10 ` David Hildenbrand (Red Hat)
0 siblings, 2 replies; 11+ messages in thread
From: Jinjiang Tu @ 2025-12-23 11:05 UTC (permalink / raw)
To: akpm, david, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm
Cc: wangkefeng.wang, tujinjiang
Commit bda420b98505 ("numa balancing: migrate on fault among multiple
bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
balancing for the MPOL_BIND memory policy.

When a task's cpuset changes, its mempolicy is rebound by
mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
mempolicy.w.cpuset_mems_allowed should be set to the
cpuset_current_mems_allowed nodemask. However, in the current
implementation, mpol_store_user_nodemask() wrongly returns true,
causing mempolicy->w.user_nodemask to be incorrectly set to the
user-specified nodemask. Later, when the cpuset of the application
changes, mpol_rebind_nodemask() ends up rebinding based on the
user-specified nodemask rather than the cpuset_mems_allowed nodemask
as intended.

To fix this, only set mempolicy->w.user_nodemask to the user-specified
nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
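In code terms, the two paths involved look roughly like this (a
condensed sketch of the mm/mempolicy.c logic around this patch, not a
literal excerpt; note that 'w' is a union, so storing the wrong member
changes what the rebind path later reads):

	/* at set_mempolicy() time, in mpol_set_nodemask(): */
	if (mpol_store_user_nodemask(pol))	/* pre-fix: true for any mode flag */
		pol->w.user_nodemask = *nodes;
	else
		pol->w.cpuset_mems_allowed = cpuset_current_mems_allowed;

	/* at cpuset-change time, in mpol_rebind_nodemask(): */
	if (pol->flags & MPOL_F_STATIC_NODES)
		nodes_and(tmp, pol->w.user_nodemask, *nodes);
	else if (pol->flags & MPOL_F_RELATIVE_NODES)
		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
	else
		/*
		 * Expects w.cpuset_mems_allowed, but with only
		 * MPOL_F_NUMA_BALANCING set the union actually holds
		 * the user-specified nodemask.
		 */
		nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
			    *nodes);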
Fixes: bda420b98505 ("numa balancing: migrate on fault among multiple bound nodes")
Reviewed-by: Gregory Price <gourry@gourry.net>
Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
Changes in v3:
* update changelog
* collect RB from Huang Ying
include/uapi/linux/mempolicy.h | 3 +++
mm/mempolicy.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 8fbbe613611a..6c962d866e86 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -39,6 +39,9 @@ enum {
#define MPOL_MODE_FLAGS \
(MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES | MPOL_F_NUMA_BALANCING)
+/* Whether the nodemask is specified by users */
+#define MPOL_USER_NODEMASK_FLAGS (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES)
+
/* Flags for get_mempolicy */
#define MPOL_F_NODE (1<<0) /* return next IL mode instead of node mask */
#define MPOL_F_ADDR (1<<1) /* look up vma using address */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 68a98ba57882..76da50425712 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -365,7 +365,7 @@ static const struct mempolicy_operations {
static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
{
- return pol->flags & MPOL_MODE_FLAGS;
+ return pol->flags & MPOL_USER_NODEMASK_FLAGS;
}
static void mpol_relative_nodemask(nodemask_t *ret, const nodemask_t *orig,
--
2.43.0
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2025-12-23 11:05 [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING Jinjiang Tu
@ 2026-01-13 1:52 ` Jinjiang Tu
2026-01-14 0:37 ` Andrew Morton
2026-01-15 17:10 ` David Hildenbrand (Red Hat)
1 sibling, 1 reply; 11+ messages in thread
From: Jinjiang Tu @ 2026-01-13 1:52 UTC (permalink / raw)
To: akpm, david, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm
Cc: wangkefeng.wang
On 2025/12/23 19:05, Jinjiang Tu wrote:
> Commit bda420b98505 ("numa balancing: migrate on fault among multiple
> bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
> balancing for the MPOL_BIND memory policy.
>
> When a task's cpuset changes, its mempolicy is rebound by
> mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
> MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
> same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
> application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
> both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
> mempolicy.w.cpuset_mems_allowed should be set to the
> cpuset_current_mems_allowed nodemask. However, in the current
> implementation, mpol_store_user_nodemask() wrongly returns true,
> causing mempolicy->w.user_nodemask to be incorrectly set to the
> user-specified nodemask. Later, when the cpuset of the application
> changes, mpol_rebind_nodemask() ends up rebinding based on the
> user-specified nodemask rather than the cpuset_mems_allowed nodemask
> as intended.
>
> To fix this, only set mempolicy->w.user_nodemask to the user-specified
> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>
> Fixes: bda420b98505 ("numa balancing: migrate on fault among multiple bound nodes")
> Reviewed-by: Gregory Price <gourry@gourry.net>
> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> ---
> Changes in v3:
> * update changelog
> * collect RB from Huang Ying
Hi Andrew,

This patch has been reviewed; could you queue it into the mm branch?
> include/uapi/linux/mempolicy.h | 3 +++
> mm/mempolicy.c | 2 +-
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
> index 8fbbe613611a..6c962d866e86 100644
> --- a/include/uapi/linux/mempolicy.h
> +++ b/include/uapi/linux/mempolicy.h
> @@ -39,6 +39,9 @@ enum {
> #define MPOL_MODE_FLAGS \
> (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES | MPOL_F_NUMA_BALANCING)
>
> +/* Whether the nodemask is specified by users */
> +#define MPOL_USER_NODEMASK_FLAGS (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES)
> +
> /* Flags for get_mempolicy */
> #define MPOL_F_NODE (1<<0) /* return next IL mode instead of node mask */
> #define MPOL_F_ADDR (1<<1) /* look up vma using address */
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 68a98ba57882..76da50425712 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -365,7 +365,7 @@ static const struct mempolicy_operations {
>
> static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
> {
> - return pol->flags & MPOL_MODE_FLAGS;
> + return pol->flags & MPOL_USER_NODEMASK_FLAGS;
> }
>
> static void mpol_relative_nodemask(nodemask_t *ret, const nodemask_t *orig,
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-13 1:52 ` Jinjiang Tu
@ 2026-01-14 0:37 ` Andrew Morton
2026-01-14 1:23 ` Jinjiang Tu
0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2026-01-14 0:37 UTC (permalink / raw)
To: Jinjiang Tu
Cc: akpm, david, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm,
wangkefeng.wang
On Tue, 13 Jan 2026 09:52:42 +0800 Jinjiang Tu <tujinjiang@huawei.com> wrote:
>
> On 2025/12/23 19:05, Jinjiang Tu wrote:
> > Commit bda420b98505 ("numa balancing: migrate on fault among multiple
> > bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
> > balancing for the MPOL_BIND memory policy.
> >
> > When a task's cpuset changes, its mempolicy is rebound by
> > mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
> > MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
> > same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
> > application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
> > both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
> > mempolicy.w.cpuset_mems_allowed should be set to the
> > cpuset_current_mems_allowed nodemask. However, in the current
> > implementation, mpol_store_user_nodemask() wrongly returns true,
> > causing mempolicy->w.user_nodemask to be incorrectly set to the
> > user-specified nodemask. Later, when the cpuset of the application
> > changes, mpol_rebind_nodemask() ends up rebinding based on the
> > user-specified nodemask rather than the cpuset_mems_allowed nodemask
> > as intended.
> >
> > To fix this, only set mempolicy->w.user_nodemask to the user-specified
> > nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
> >
> > Fixes: bda420b98505 ("numa balancing: migrate on fault among multiple bound nodes")
> > Reviewed-by: Gregory Price <gourry@gourry.net>
> > Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
> > Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> > ---
> > Changes in v3:
> > * update changelog
> > * collect RB from Huang Ying
>
> Hi Andrew,
>
> This patch has been reviewed; could you queue it into the mm branch?
It has been in mm.git since Dec 23 ;)
The changelog led me to believe that earlier (-stable) kernels don't
need this fix. Maybe that was wrong?
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-14 0:37 ` Andrew Morton
@ 2026-01-14 1:23 ` Jinjiang Tu
0 siblings, 0 replies; 11+ messages in thread
From: Jinjiang Tu @ 2026-01-14 1:23 UTC (permalink / raw)
To: Andrew Morton
Cc: akpm, david, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm,
wangkefeng.wang
On 2026/1/14 8:37, Andrew Morton wrote:
> On Tue, 13 Jan 2026 09:52:42 +0800 Jinjiang Tu <tujinjiang@huawei.com> wrote:
>
>> On 2025/12/23 19:05, Jinjiang Tu wrote:
>>> Commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>> bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
>>> balancing for the MPOL_BIND memory policy.
>>>
>>> When a task's cpuset changes, its mempolicy is rebound by
>>> mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
>>> MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
>>> same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
>>> application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
>>> both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
>>> mempolicy.w.cpuset_mems_allowed should be set to the
>>> cpuset_current_mems_allowed nodemask. However, in the current
>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>> causing mempolicy->w.user_nodemask to be incorrectly set to the
>>> user-specified nodemask. Later, when the cpuset of the application
>>> changes, mpol_rebind_nodemask() ends up rebinding based on the
>>> user-specified nodemask rather than the cpuset_mems_allowed nodemask
>>> as intended.
>>>
>>> To fix this, only set mempolicy->w.user_nodemask to the user-specified
>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>>>
>>> Fixes: bda420b98505 ("numa balancing: migrate on fault among multiple bound nodes")
>>> Reviewed-by: Gregory Price <gourry@gourry.net>
>>> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
>>> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
>>> ---
>>> Changes in v3:
>>> * update changelog
>>> * collect RB from Huang Ying
>> Hi Andrew,
>>
>> This patch has been reviewed; could you queue it into the mm branch?
> It has been in mm.git since Dec 23 ;)
Indeed, I missed the email. Thanks.
>
> The changelog led me to believe that earlier (-stable) kernels don't
> need this fix. Maybe that was wrong?
Yes. This only fixes a minor issue.
>
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2025-12-23 11:05 [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING Jinjiang Tu
2026-01-13 1:52 ` Jinjiang Tu
@ 2026-01-15 17:10 ` David Hildenbrand (Red Hat)
2026-01-15 18:12 ` Andrew Morton
1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-15 17:10 UTC (permalink / raw)
To: Jinjiang Tu, akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm
Cc: wangkefeng.wang
On 12/23/25 12:05, Jinjiang Tu wrote:
> Commit bda420b98505 ("numa balancing: migrate on fault among multiple
> bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
> balancing for the MPOL_BIND memory policy.
>
> When a task's cpuset changes, its mempolicy is rebound by
> mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
> MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
> same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
> application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
> both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
> mempolicy.w.cpuset_mems_allowed should be set to the
> cpuset_current_mems_allowed nodemask. However, in the current
> implementation, mpol_store_user_nodemask() wrongly returns true,
> causing mempolicy->w.user_nodemask to be incorrectly set to the
> user-specified nodemask. Later, when the cpuset of the application
> changes, mpol_rebind_nodemask() ends up rebinding based on the
> user-specified nodemask rather than the cpuset_mems_allowed nodemask
> as intended.
>
> To fix this, only set mempolicy->w.user_nodemask to the user-specified
> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>
> Fixes: bda420b98505 ("numa balancing: migrate on fault among multiple bound nodes")
> Reviewed-by: Gregory Price <gourry@gourry.net>
> Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> ---
> Changes in v3:
> * update changelog
> * collect RB from Huang Ying
>
> include/uapi/linux/mempolicy.h | 3 +++
> mm/mempolicy.c | 2 +-
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
> index 8fbbe613611a..6c962d866e86 100644
> --- a/include/uapi/linux/mempolicy.h
> +++ b/include/uapi/linux/mempolicy.h
> @@ -39,6 +39,9 @@ enum {
> #define MPOL_MODE_FLAGS \
> (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES | MPOL_F_NUMA_BALANCING)
>
> +/* Whether the nodemask is specified by users */
> +#define MPOL_USER_NODEMASK_FLAGS (MPOL_F_STATIC_NODES | MPOL_F_RELATIVE_NODES)
> +
> /* Flags for get_mempolicy */
> #define MPOL_F_NODE (1<<0) /* return next IL mode instead of node mask */
> #define MPOL_F_ADDR (1<<1) /* look up vma using address */
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 68a98ba57882..76da50425712 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -365,7 +365,7 @@ static const struct mempolicy_operations {
>
> static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
> {
> - return pol->flags & MPOL_MODE_FLAGS;
> + return pol->flags & MPOL_USER_NODEMASK_FLAGS;
> }
>
> static void mpol_relative_nodemask(nodemask_t *ret, const nodemask_t *orig,
I glanced over it and I think this is the right fix, thanks!
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-15 17:10 ` David Hildenbrand (Red Hat)
@ 2026-01-15 18:12 ` Andrew Morton
2026-01-16 6:43 ` Jinjiang Tu
0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2026-01-15 18:12 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Jinjiang Tu, akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, ying.huang, apopple, mgorman, linux-mm,
wangkefeng.wang
On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> On 12/23/25 12:05, Jinjiang Tu wrote:
> > Commit bda420b98505 ("numa balancing: migrate on fault among multiple
> > bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
> > balancing for the MPOL_BIND memory policy.
> >
> > When a task's cpuset changes, its mempolicy is rebound by
> > mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
> > MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
> > same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
> > application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
> > both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
> > mempolicy.w.cpuset_mems_allowed should be set to the
> > cpuset_current_mems_allowed nodemask. However, in the current
> > implementation, mpol_store_user_nodemask() wrongly returns true,
> > causing mempolicy->w.user_nodemask to be incorrectly set to the
> > user-specified nodemask. Later, when the cpuset of the application
> > changes, mpol_rebind_nodemask() ends up rebinding based on the
> > user-specified nodemask rather than the cpuset_mems_allowed nodemask
> > as intended.
> >
> > To fix this, only set mempolicy->w.user_nodemask to the user-specified
> > nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
> >
>
> ...
>
> I glanced over it and I think this is the right fix, thanks!
>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Cool. I decided this was "not for backporting", but the description of
the userspace-visible runtime effects isn't very clear. Jinjiang, can
you please advise?
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-15 18:12 ` Andrew Morton
@ 2026-01-16 6:43 ` Jinjiang Tu
2026-01-16 10:58 ` David Hildenbrand (Red Hat)
0 siblings, 1 reply; 11+ messages in thread
From: Jinjiang Tu @ 2026-01-16 6:43 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand (Red Hat)
Cc: akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, ying.huang, apopple, mgorman, linux-mm, wangkefeng.wang
On 2026/1/16 2:12, Andrew Morton wrote:
> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
>
>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>> Commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>> bound nodes") added a new flag, MPOL_F_NUMA_BALANCING, to enable NUMA
>>> balancing for the MPOL_BIND memory policy.
>>>
>>> When a task's cpuset changes, its mempolicy is rebound by
>>> mpol_rebind_nodemask(). When neither MPOL_F_STATIC_NODES nor
>>> MPOL_F_RELATIVE_NODES is set, the rebinding behaviour should be the
>>> same whether or not MPOL_F_NUMA_BALANCING is set. So, when an
>>> application calls set_mempolicy() with MPOL_F_NUMA_BALANCING set but
>>> both MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES cleared,
>>> mempolicy.w.cpuset_mems_allowed should be set to the
>>> cpuset_current_mems_allowed nodemask. However, in the current
>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>> causing mempolicy->w.user_nodemask to be incorrectly set to the
>>> user-specified nodemask. Later, when the cpuset of the application
>>> changes, mpol_rebind_nodemask() ends up rebinding based on the
>>> user-specified nodemask rather than the cpuset_mems_allowed nodemask
>>> as intended.
>>>
>>> To fix this, only set mempolicy->w.user_nodemask to the user-specified
>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>>>
>> ...
>>
>> I glanced over it and I think this is the right fix, thanks!
>>
>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Cool. I decided this was "not for backporting", but the description of
> the userspace-visible runtime effects isn't very clear. Jinjiang, can
> you please advise?
I agree this patch shouldn't be backported. The only user-visible effect
is that a task may be bound to the wrong NUMA node after its cpuset
changes.

Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is in
the root cpuset. Move the task to a cpuset whose cpuset.mems.effective
is 0-1. The task should still be bound to node 1, but is wrongly rebound
to node 0.
>
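To make the remapping arithmetic concrete, below is a minimal userspace
model (an illustration only: nodemasks are reduced to plain bitmasks,
and node_remap() is modelled as mapping the n-th set bit of the old mask
to the n-th set bit of the new one; the kernel's bitmap_remap()
additionally identity-maps bits outside the old mask and wraps when the
new mask is smaller):

	#include <stdio.h>

	/* map the n-th set bit of 'oldm' to the n-th set bit of 'newm' */
	static unsigned long remap(unsigned long mask, unsigned long oldm,
				   unsigned long newm)
	{
		unsigned long out = 0;

		for (int b = 0; b < 64; b++) {
			if (!(mask & (1UL << b)) || !(oldm & (1UL << b)))
				continue;
			/* index of bit b among the set bits of 'oldm' */
			int n = __builtin_popcountl(oldm & ((1UL << b) - 1));

			for (int nb = 0; nb < 64; nb++) {
				if ((newm & (1UL << nb)) && n-- == 0) {
					out |= 1UL << nb;
					break;
				}
			}
		}
		return out;
	}

	int main(void)
	{
		unsigned long bound  = 1UL << 1; /* policy nodes:  {1}   */
		unsigned long root   = 0xf;      /* old cpuset:    {0-3} */
		unsigned long user   = 1UL << 1; /* user nodemask: {1}   */
		unsigned long new_cs = 0x3;      /* new cpuset:    {0,1} */

		/* intended: remap against the old cpuset -> stays {1} (0x2) */
		printf("vs cpuset: %#lx\n", remap(bound, root, new_cs));
		/* buggy: the union holds the user mask -> lands on {0} (0x1) */
		printf("vs user:   %#lx\n", remap(bound, user, new_cs));
		return 0;
	}

Node 1 is the second set bit of {0-3}, so it maps back to node 1 of
{0,1}; but it is the first set bit of the user mask {1}, so the buggy
path maps it to node 0, matching the behaviour described above.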
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-16 6:43 ` Jinjiang Tu
@ 2026-01-16 10:58 ` David Hildenbrand (Red Hat)
2026-01-17 1:00 ` Jinjiang Tu
0 siblings, 1 reply; 11+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-16 10:58 UTC (permalink / raw)
To: Jinjiang Tu, Andrew Morton
Cc: akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, ying.huang, apopple, mgorman, linux-mm, wangkefeng.wang
On 1/16/26 07:43, Jinjiang Tu wrote:
>
> On 2026/1/16 2:12, Andrew Morton wrote:
>> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
>>
>>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>>> commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>>> bound nodes") adds new flag MPOL_F_NUMA_BALANCING to enable NUMA balancing
>>>> for MPOL_BIND memory policy.
>>>>
>>>> When the cpuset of tasks changes, the mempolicy of the task is rebound by
>>>> mpol_rebind_nodemask(). When MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES
>>>> are both not set, the behaviour of rebinding should be same whenever
>>>> MPOL_F_NUMA_BALANCING is set or not. So, when an application calls
>>>> set_mempolicy() with MPOL_F_NUMA_BALANCING set but both MPOL_F_STATIC_NODES
>>>> and MPOL_F_RELATIVE_NODES cleared, mempolicy.w.cpuset_mems_allowed should
>>>> be set to cpuset_current_mems_allowed nodemask. However, in current
>>>> implementation, mpol_store_user_nodemask() wrongly returns true, causing
>>>> mempolicy->w.user_nodemask to be incorrectly set to the user-specified
>>>> nodemask. Later, when the cpuset of the application changes,
>>>> mpol_rebind_nodemask() ends up rebinding based on the user-specified
>>>> nodemask rather than the cpuset_mems_allowed nodemask as intended.
>>>>
>>>> To fix this, only set mempolicy->w.user_nodemask to the user-specified
>>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>>>>
>>> ...
>>>
>>> I glanced over it and I think this is the right fix, thanks!
>>>
>>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> Cool. I decided this was "not for backporting", but the description of
>> the userspace-visible runtime effects isn't very clear. Jinjiang, can
>> you please advise?
>
> I agree this patch shouldn't be backported. The only user-visible
> effect is that a task may be bound to the wrong NUMA node after its
> cpuset changes.
>
> Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is
> in the root cpuset. Move the task to a cpuset whose
> cpuset.mems.effective is 0-1. The task should still be bound to node 1,
> but is wrongly rebound to node 0.
Do you think it's easy to write a reproducer to be run in a simple QEMU
VM with 4 nodes?
--
Cheers
David
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-16 10:58 ` David Hildenbrand (Red Hat)
@ 2026-01-17 1:00 ` Jinjiang Tu
2026-01-18 18:45 ` David Hildenbrand (Red Hat)
0 siblings, 1 reply; 11+ messages in thread
From: Jinjiang Tu @ 2026-01-17 1:00 UTC (permalink / raw)
To: David Hildenbrand (Red Hat), Andrew Morton
Cc: akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, ying.huang, apopple, mgorman, linux-mm, wangkefeng.wang
On 2026/1/16 18:58, David Hildenbrand (Red Hat) wrote:
> On 1/16/26 07:43, Jinjiang Tu wrote:
>>
>> On 2026/1/16 2:12, Andrew Morton wrote:
>>> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)"
>>> <david@kernel.org> wrote:
>>>
>>>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>>>> commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>>>> bound nodes") adds new flag MPOL_F_NUMA_BALANCING to enable NUMA
>>>>> balancing
>>>>> for MPOL_BIND memory policy.
>>>>>
>>>>> When the cpuset of tasks changes, the mempolicy of the task is
>>>>> rebound by
>>>>> mpol_rebind_nodemask(). When MPOL_F_STATIC_NODES and
>>>>> MPOL_F_RELATIVE_NODES
>>>>> are both not set, the behaviour of rebinding should be same whenever
>>>>> MPOL_F_NUMA_BALANCING is set or not. So, when an application calls
>>>>> set_mempolicy() with MPOL_F_NUMA_BALANCING set but both
>>>>> MPOL_F_STATIC_NODES
>>>>> and MPOL_F_RELATIVE_NODES cleared, mempolicy.w.cpuset_mems_allowed
>>>>> should
>>>>> be set to cpuset_current_mems_allowed nodemask. However, in current
>>>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>>>> causing
>>>>> mempolicy->w.user_nodemask to be incorrectly set to the
>>>>> user-specified
>>>>> nodemask. Later, when the cpuset of the application changes,
>>>>> mpol_rebind_nodemask() ends up rebinding based on the user-specified
>>>>> nodemask rather than the cpuset_mems_allowed nodemask as intended.
>>>>>
>>>>> To fix this, only set mempolicy->w.user_nodemask to the
>>>>> user-specified
>>>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>>>>>
>>>> ...
>>>>
>>>> I glanced over it and I think this is the right fix, thanks!
>>>>
>>>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>>> Cool. I decided this was "not for backporting", but the description of
>>> the userspace-visible runtime effects isn't very clear. Jinjiang, can
>>> you please advise?
>>
>> I agree this patch shouldn't be backported. The only user-visible
>> effect is that a task may be bound to the wrong NUMA node after its
>> cpuset changes.
>>
>> Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is
>> in the root cpuset. Move the task to a cpuset whose
>> cpuset.mems.effective is 0-1. The task should still be bound to node 1,
>> but is wrongly rebound to node 0.
>
> Do you think it's easy to write a reproducer to be run in a simple
> QEMU VM with 4 nodes?
I can reproduce with the following steps:
1. echo '+cpuset' > /sys/fs/cgroup/cgroup.subtree_control
2. mkdir /sys/fs/cgroup/test
3. ./reproducer &
4. cat /proc/$pid/numa_maps, the task is bound to NUMA 1
5. echo $pid > /sys/fs/cgroup/test/cgroup.procs
6. cat /proc/$pid/numa_maps, the task is bound to NUMA 0 now.
The reproducer code:
#include <stdio.h>
#include <stdlib.h>
#include <numa.h>
#include <numaif.h>

/* build: gcc reproducer.c -o reproducer -lnuma */
int main(void)
{
	struct bitmask *bmp;
	int ret;

	/* bind this task's memory to node 1, with NUMA balancing enabled */
	bmp = numa_parse_nodestring("1");
	ret = set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
			    bmp->maskp, bmp->size + 1);
	if (ret < 0) {
		perror("Failed to call set_mempolicy");
		exit(-1);
	}

	/* park so the policy can be inspected via /proc/$pid/numa_maps */
	while (1);
	return 0;
}
If I call set_mempolicy() without MPOL_F_NUMA_BALANCING, the task is
still bound to NUMA node 1 after step 5.
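The same check can also be made from inside the reproducer rather than
via /proc/$pid/numa_maps — a small sketch (assuming fewer than 64 nodes,
so one word of nodemask suffices):

	#include <stdio.h>
	#include <numaif.h>

	/* print the calling thread's policy mode and bind nodemask */
	static void show_policy(void)
	{
		unsigned long nmask = 0;
		int mode;

		if (get_mempolicy(&mode, &nmask, 64, NULL, 0))
			perror("get_mempolicy");
		else
			printf("mode=%d nodes=%#lx\n", mode, nmask);
	}

Calling this in a loop with a sleep, in place of the bare "while (1);",
makes the nodemask flip visible when the task is moved between cpusets.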
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-17 1:00 ` Jinjiang Tu
@ 2026-01-18 18:45 ` David Hildenbrand (Red Hat)
2026-01-19 11:46 ` Jinjiang Tu
0 siblings, 1 reply; 11+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-18 18:45 UTC (permalink / raw)
To: Jinjiang Tu, Andrew Morton
Cc: akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, ying.huang, apopple, mgorman, linux-mm, wangkefeng.wang
On 1/17/26 02:00, Jinjiang Tu wrote:
>
> On 2026/1/16 18:58, David Hildenbrand (Red Hat) wrote:
>> On 1/16/26 07:43, Jinjiang Tu wrote:
>>>
>>> On 2026/1/16 2:12, Andrew Morton wrote:
>>>> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)"
>>>> <david@kernel.org> wrote:
>>>>
>>>>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>>>>> commit bda420b98505 ("numa balancing: migrate on fault among multiple
>>>>>> bound nodes") adds new flag MPOL_F_NUMA_BALANCING to enable NUMA
>>>>>> balancing
>>>>>> for MPOL_BIND memory policy.
>>>>>>
>>>>>> When the cpuset of tasks changes, the mempolicy of the task is
>>>>>> rebound by
>>>>>> mpol_rebind_nodemask(). When MPOL_F_STATIC_NODES and
>>>>>> MPOL_F_RELATIVE_NODES
>>>>>> are both not set, the behaviour of rebinding should be same whenever
>>>>>> MPOL_F_NUMA_BALANCING is set or not. So, when an application calls
>>>>>> set_mempolicy() with MPOL_F_NUMA_BALANCING set but both
>>>>>> MPOL_F_STATIC_NODES
>>>>>> and MPOL_F_RELATIVE_NODES cleared, mempolicy.w.cpuset_mems_allowed
>>>>>> should
>>>>>> be set to cpuset_current_mems_allowed nodemask. However, in current
>>>>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>>>>> causing
>>>>>> mempolicy->w.user_nodemask to be incorrectly set to the
>>>>>> user-specified
>>>>>> nodemask. Later, when the cpuset of the application changes,
>>>>>> mpol_rebind_nodemask() ends up rebinding based on the user-specified
>>>>>> nodemask rather than the cpuset_mems_allowed nodemask as intended.
>>>>>>
>>>>>> To fix this, only set mempolicy->w.user_nodemask to the
>>>>>> user-specified
>>>>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is present.
>>>>>>
>>>>> ...
>>>>>
>>>>> I glanced over it and I think this is the right fix, thanks!
>>>>>
>>>>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>>>> Cool. I decided this was "not for backporting", but the description of
>>>> the userspace-visible runtime effects isn't very clear. Jinjiang, can
>>>> you please advise?
>>>
>>> I agree this patch shouldn't be backported. The only user-visible
>>> effect is that a task may be bound to the wrong NUMA node after its
>>> cpuset changes.
>>>
>>> Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is
>>> in the root cpuset. Move the task to a cpuset whose
>>> cpuset.mems.effective is 0-1. The task should still be bound to node 1,
>>> but is wrongly rebound to node 0.
>>
>> Do you think it's easy to write a reproducer to be run in a simple
>> QEMU VM with 4 nodes?
>
> I can reproduce with the following steps:
>
> 1. echo '+cpuset' > /sys/fs/cgroup/cgroup.subtree_control
> 2. mkdir /sys/fs/cgroup/test
> 3. ./reproducer &
> 4. cat /proc/$pid/numa_maps, the task is bound to NUMA 1
> 5. echo $pid > /sys/fs/cgroup/test/cgroup.procs
> 6. cat /proc/$pid/numa_maps, the task is bound to NUMA 0 now.
>
> The reproducer code:
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <numa.h>
> #include <numaif.h>
>
> /* build: gcc reproducer.c -o reproducer -lnuma */
> int main(void)
> {
> 	struct bitmask *bmp;
> 	int ret;
>
> 	/* bind this task's memory to node 1, with NUMA balancing enabled */
> 	bmp = numa_parse_nodestring("1");
> 	ret = set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
> 			    bmp->maskp, bmp->size + 1);
> 	if (ret < 0) {
> 		perror("Failed to call set_mempolicy");
> 		exit(-1);
> 	}
>
> 	/* park so the policy can be inspected via /proc/$pid/numa_maps */
> 	while (1);
> 	return 0;
> }
>
> If I call set_mempolicy() without MPOL_F_NUMA_BALANCING, the task is
> still bound to NUMA node 1 after step 5.
>
Great, can you incorporate that into an updated patch description?
And it might make sense to point at commit bda420b98505 ("numa
balancing: migrate on fault among multiple bound nodes") where we document
"
we add MPOL_F_NUMA_BALANCING mode flag to
set_mempolicy() when mode is MPOL_BIND. With the flag specified, NUMA
balancing will be enabled within the thread to optimize the page
placement within the constrains of the specified memory binding policy. "
The "within the constrains" is the crucial bit here.
--
Cheers
David
* Re: [PATCH v3] mm/mempolicy: fix mpol_rebind_nodemask() for MPOL_F_NUMA_BALANCING
2026-01-18 18:45 ` David Hildenbrand (Red Hat)
@ 2026-01-19 11:46 ` Jinjiang Tu
0 siblings, 0 replies; 11+ messages in thread
From: Jinjiang Tu @ 2026-01-19 11:46 UTC (permalink / raw)
To: David Hildenbrand (Red Hat), Andrew Morton
Cc: akpm, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, ying.huang, apopple, mgorman, linux-mm, wangkefeng.wang
On 2026/1/19 2:45, David Hildenbrand (Red Hat) wrote:
> On 1/17/26 02:00, Jinjiang Tu wrote:
>>
> On 2026/1/16 18:58, David Hildenbrand (Red Hat) wrote:
>>> On 1/16/26 07:43, Jinjiang Tu wrote:
>>>>
>>> On 2026/1/16 2:12, Andrew Morton wrote:
>>>>> On Thu, 15 Jan 2026 18:10:51 +0100 "David Hildenbrand (Red Hat)"
>>>>> <david@kernel.org> wrote:
>>>>>
>>>>>> On 12/23/25 12:05, Jinjiang Tu wrote:
>>>>>>> commit bda420b98505 ("numa balancing: migrate on fault among
>>>>>>> multiple
>>>>>>> bound nodes") adds new flag MPOL_F_NUMA_BALANCING to enable NUMA
>>>>>>> balancing
>>>>>>> for MPOL_BIND memory policy.
>>>>>>>
>>>>>>> When the cpuset of tasks changes, the mempolicy of the task is
>>>>>>> rebound by
>>>>>>> mpol_rebind_nodemask(). When MPOL_F_STATIC_NODES and
>>>>>>> MPOL_F_RELATIVE_NODES
>>>>>>> are both not set, the behaviour of rebinding should be same
>>>>>>> whenever
>>>>>>> MPOL_F_NUMA_BALANCING is set or not. So, when an application calls
>>>>>>> set_mempolicy() with MPOL_F_NUMA_BALANCING set but both
>>>>>>> MPOL_F_STATIC_NODES
>>>>>>> and MPOL_F_RELATIVE_NODES cleared, mempolicy.w.cpuset_mems_allowed
>>>>>>> should
>>>>>>> be set to cpuset_current_mems_allowed nodemask. However, in current
>>>>>>> implementation, mpol_store_user_nodemask() wrongly returns true,
>>>>>>> causing
>>>>>>> mempolicy->w.user_nodemask to be incorrectly set to the
>>>>>>> user-specified
>>>>>>> nodemask. Later, when the cpuset of the application changes,
>>>>>>> mpol_rebind_nodemask() ends up rebinding based on the
>>>>>>> user-specified
>>>>>>> nodemask rather than the cpuset_mems_allowed nodemask as intended.
>>>>>>>
>>>>>>> To fix this, only set mempolicy->w.user_nodemask to the
>>>>>>> user-specified
>>>>>>> nodemask if MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES is
>>>>>>> present.
>>>>>>>
>>>>>> ...
>>>>>>
>>>>>> I glanced over it and I think this is the right fix, thanks!
>>>>>>
>>>>>> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>>>>> Cool. I decided this was "not for backporting", but the
>>>>> description of
>>>>> the userspace-visible runtime effects isn't very clear. Jinjiang, can
>>>>> you please advise?
>>>>
>>>> I agree this patch shouldn't be backported. The only user-visible
>>>> effect is that a task may be bound to the wrong NUMA node after its
>>>> cpuset changes.
>>>>
>>>> Assume there are 4 NUMA nodes. A task is bound to NUMA node 1 and is
>>>> in the root cpuset. Move the task to a cpuset whose
>>>> cpuset.mems.effective is 0-1. The task should still be bound to node 1,
>>>> but is wrongly rebound to node 0.
>>>
>>> Do you think it's easy to write a reproducer to be run in a simple
>>> QEMU VM with 4 nodes?
>>
>> I can reproduce with the following steps:
>>
>> 1. echo '+cpuset' > /sys/fs/cgroup/cgroup.subtree_control
>> 2. mkdir /sys/fs/cgroup/test
>> 3. ./reproducer &
>> 4. cat /proc/$pid/numa_maps, the task is bound to NUMA 1
>> 5. echo $pid > /sys/fs/cgroup/test/cgroup.procs
>> 6. cat /proc/$pid/numa_maps, the task is bound to NUMA 0 now.
>>
>> The reproducer code:
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <numa.h>
>> #include <numaif.h>
>>
>> /* build: gcc reproducer.c -o reproducer -lnuma */
>> int main(void)
>> {
>> 	struct bitmask *bmp;
>> 	int ret;
>>
>> 	/* bind this task's memory to node 1, with NUMA balancing enabled */
>> 	bmp = numa_parse_nodestring("1");
>> 	ret = set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
>> 			    bmp->maskp, bmp->size + 1);
>> 	if (ret < 0) {
>> 		perror("Failed to call set_mempolicy");
>> 		exit(-1);
>> 	}
>>
>> 	/* park so the policy can be inspected via /proc/$pid/numa_maps */
>> 	while (1);
>> 	return 0;
>> }
>>
>> If I call set_mempolicy() without MPOL_F_NUMA_BALANCING, the task is
>> still bound to NUMA node 1 after step 5.
>>
>
> Great, can you incorporate that into an updated patch description?
No problem, I will update it.
>
> And it might make sense to point at commit bda420b98505 ("numa
> balancing: migrate on fault among multiple bound nodes") where we
> document
>
> "
> we add MPOL_F_NUMA_BALANCING mode flag to
> set_mempolicy() when mode is MPOL_BIND. With the flag specified, NUMA
> balancing will be enabled within the thread to optimize the page
> placement within the constrains of the specified memory binding
> policy. "
>
> The "within the constrains" is the crucial bit here.
>