* [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
From: Zhongkun He @ 2024-08-21 13:59 UTC
To: mhocko, akpm, mgorman, hannes
Cc: linux-mm, linux-kernel, lizefan.x, Zhongkun He
I found a problem on my test machine: should_reclaim_retry() does not
get the right node when I set cpuset.mems.
1. Test steps and machine info:
------------
root@vm:/sys/fs/cgroup/test# numactl -H | grep size
node 0 size: 9477 MB
node 1 size: 10079 MB
node 2 size: 10079 MB
node 3 size: 10078 MB
root@vm:/sys/fs/cgroup/test# cat cpuset.mems
2
root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
stress: WARN: [33430] (427) now reaping child worker processes
stress: FAIL: [33430] (461) failed run completed in 2s
2. reclaim_retry_zone info:
We can only allocate pages from node=2, but reclaim_retry_zone reports
node=0 and returns true. Node 0 still shows plenty of available memory
(available=1772019 pages vs. min_wmark=5962), so the watermark check
passes and the allocator keeps retrying on a node the task is not
allowed to use.
root@vm:/sys/kernel/debug/tracing# cat trace
stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
stress-33431 [001] ..... 13223.617682: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=2 wmark_check=1
stress-33431 [001] ..... 13223.618103: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=3 wmark_check=1
stress-33431 [001] ..... 13223.618454: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=4 wmark_check=1
stress-33431 [001] ..... 13223.618770: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=5 wmark_check=1
stress-33431 [001] ..... 13223.619150: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=6 wmark_check=1
stress-33431 [001] ..... 13223.619510: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=7 wmark_check=1
stress-33431 [001] ..... 13223.619850: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=8 wmark_check=1
stress-33431 [001] ..... 13223.620171: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=9 wmark_check=1
stress-33431 [001] ..... 13223.620533: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=10 wmark_check=1
stress-33431 [001] ..... 13223.620894: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=11 wmark_check=1
stress-33431 [001] ..... 13223.621224: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=12 wmark_check=1
stress-33431 [001] ..... 13223.621551: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=13 wmark_check=1
stress-33431 [001] ..... 13223.621847: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=14 wmark_check=1
stress-33431 [001] ..... 13223.622200: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=15 wmark_check=1
stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
3. Root cause:
The nodemask usually comes from the memory policy via policy_nodemask(),
and it is NULL unless the policy is MPOL_BIND or MPOL_PREFERRED_MANY.
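For illustration, a simplified, hypothetical sketch of that selection
logic (the helper name and shape are mine, not the exact code in
mm/mempolicy.c):

    /*
     * Hypothetical, simplified sketch of policy_nodemask(): only
     * MPOL_BIND and MPOL_PREFERRED_MANY carry an explicit nodemask;
     * every other policy leaves the allocator with nodemask == NULL.
     */
    static nodemask_t *policy_nodemask_sketch(struct mempolicy *pol)
    {
            switch (pol->mode) {
            case MPOL_BIND:
            case MPOL_PREFERRED_MANY:
                    return &pol->nodes;     /* explicit per-policy mask */
            default:
                    return NULL;            /* allocator sees no mask */
            }
    }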
Allocation flow with nodemask == NULL:

nodemask = NULL
__alloc_pages_noprof()
    prepare_alloc_pages()
        ac->nodemask = &cpuset_current_mems_allowed;
    get_page_from_freelist()
    ac.nodemask = nodemask;             /* reset back to NULL */
    __alloc_pages_slowpath()
        if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
            ac->nodemask = NULL;
            ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                            ac->highest_zoneidx, ac->nodemask);
            /* so ac->nodemask = NULL */
        }
According to the function flow above, ac->nodemask ends up NULL in the
slowpath, so the allocation is not limited to cpuset.mems; we need to
restore that limit.
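The fix below mirrors what the fast path already does. For reference,
the corresponding cpuset handling in prepare_alloc_pages(), lightly
trimmed from recent mm/page_alloc.c:

    if (cpusets_enabled()) {
            *alloc_gfp |= __GFP_HARDWALL;
            /*
             * In interrupt context the cpuset of the current task
             * is irrelevant, so any node is acceptable.
             */
            if (in_task() && !ac->nodemask)
                    ac->nodemask = &cpuset_current_mems_allowed;
            else
                    *alloc_flags |= ALLOC_CPUSET;
    }

The patch applies the same in_task() && !ac->nodemask rule at the point
where the slowpath drops the nodemask.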
Test result:
Tried 3 times with different cpuset.mems settings, each time allocating
more memory than the selected NUMA node's size.
echo 1 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0
---------------
echo 2 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0
---------------
echo 3 > cpuset.mems
stress --vm 1 --vm-bytes 12g --vm-hang 0
The retry traces look like:
stress-2139 [003] ..... 666.934104: reclaim_retry_zone: node=1 zone=Normal order=0 reclaimable=7 available=7355 min_wmark=8598 no_progress_loops=1 wmark_check=0
stress-2204 [010] ..... 695.447393: reclaim_retry_zone: node=2 zone=Normal order=0 reclaimable=2 available=6916 min_wmark=8598 no_progress_loops=1 wmark_check=0
stress-2271 [008] ..... 725.683058: reclaim_retry_zone: node=3 zone=Normal order=0 reclaimable=17 available=8079 min_wmark=8597 no_progress_loops=1 wmark_check=0
With this patch, should_reclaim_retry() checks the right node and we get
fewer retries in __alloc_pages_slowpath(), because there is nothing left
to reclaim there (wmark_check=0 on the first loop).
Signed-off-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29608ca294cf..5ea63bb8f8ff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4338,6 +4338,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
                 ac->nodemask = NULL;
                 ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                                         ac->highest_zoneidx, ac->nodemask);
+        } else if (in_task() && !ac->nodemask) {
+                /* Set the nodemask if the request comes from user space. */
+                ac->nodemask = &cpuset_current_mems_allowed;
         }
 
         /* Attempt with potentially adjusted zonelist and alloc_flags */
--
2.20.1
* Re: [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
From: Michal Hocko @ 2024-08-21 15:00 UTC
To: Zhongkun He; +Cc: akpm, mgorman, hannes, linux-mm, linux-kernel, lizefan.x
On Wed 21-08-24 21:59:00, Zhongkun He wrote:
> I found a problem on my test machine: should_reclaim_retry() does not
> get the right node when I set cpuset.mems.
>
> 1. Test steps and machine info:
> ------------
> root@vm:/sys/fs/cgroup/test# numactl -H | grep size
> node 0 size: 9477 MB
> node 1 size: 10079 MB
> node 2 size: 10079 MB
> node 3 size: 10078 MB
>
> root@vm:/sys/fs/cgroup/test# cat cpuset.mems
> 2
>
> root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
> stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
> stress: WARN: [33430] (427) now reaping child worker processes
> stress: FAIL: [33430] (461) failed run completed in 2s
OK, so the test gets killed as expected.
> 2. reclaim_retry_zone info:
>
> We can only allocate pages from node=2, but reclaim_retry_zone reports
> node=0 and returns true.
>
> root@vm:/sys/kernel/debug/tracing# cat trace
> stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
> [... 14 similar lines trimmed ...]
> stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
Are you suggesting that the problem is that should_reclaim_retry is
iterating over nodes which are not allowed by cpusets, and that this
makes the retry loop run more often than necessary?
Is there any reason why you haven't done the same thing the page
allocator does in this case?
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 28f80daf5c04..cbf09c0e3b8a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4098,6 +4098,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
                 unsigned long min_wmark = min_wmark_pages(zone);
                 bool wmark;
 
+                if (cpusets_enabled() &&
+                    (alloc_flags & ALLOC_CPUSET) &&
+                    !__cpuset_zone_allowed(zone, gfp_mask))
+                        continue;
+
                 available = reclaimable = zone_reclaimable_pages(zone);
                 available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
--
Michal Hocko
SUSE Labs
* Re: [External] Re: [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
From: Zhongkun He @ 2024-08-22 3:15 UTC
To: Michal Hocko; +Cc: akpm, mgorman, hannes, linux-mm, linux-kernel, lizefan.x
On Wed, Aug 21, 2024 at 11:00 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 21-08-24 21:59:00, Zhongkun He wrote:
> > I found a problem on my test machine: should_reclaim_retry() does not
> > get the right node when I set cpuset.mems.
> >
> > 1. Test steps and machine info:
> > ------------
> > root@vm:/sys/fs/cgroup/test# numactl -H | grep size
> > node 0 size: 9477 MB
> > node 1 size: 10079 MB
> > node 2 size: 10079 MB
> > node 3 size: 10078 MB
> >
> > root@vm:/sys/fs/cgroup/test# cat cpuset.mems
> > 2
> >
> > root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
> > stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> > stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
> > stress: WARN: [33430] (427) now reaping child worker processes
> > stress: FAIL: [33430] (461) failed run completed in 2s
>
> OK, so the test gets killed as expected.
>
> > 2. reclaim_retry_zone info:
> >
> > We can only allocate pages from node=2, but reclaim_retry_zone reports
> > node=0 and returns true.
> >
> > root@vm:/sys/kernel/debug/tracing# cat trace
> > stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
> > [... 14 similar lines trimmed ...]
> > stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
>
> Are you suggesting that the problem is that should_reclaim_retry is
> iterating over nodes which are not allowed by cpusets, and that this
> makes the retry loop run more often than necessary?
Yes, exactly.
>
> Is there any reason why you haven't done the same thing the page
> allocator does in this case?
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 28f80daf5c04..cbf09c0e3b8a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4098,6 +4098,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>                  unsigned long min_wmark = min_wmark_pages(zone);
>                  bool wmark;
> 
> +                if (cpusets_enabled() &&
> +                    (alloc_flags & ALLOC_CPUSET) &&
> +                    !__cpuset_zone_allowed(zone, gfp_mask))
> +                        continue;
> +
>                  available = reclaimable = zone_reclaimable_pages(zone);
>                  available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
>
That was my original version, but I found that the problem exists in
other places.
Please see the function flow below.
__alloc_pages_slowpath:

    get_page_from_freelist()
        __cpuset_zone_allowed()             /* checks the node */

    __alloc_pages_direct_reclaim()
        shrink_zones()
            cpuset_zone_allowed()           /* checks the node */

    __alloc_pages_direct_compact()
        try_to_compact_pages()
            /* does not check cpuset_zone_allowed() */

    should_reclaim_retry()
        /* does not check cpuset_zone_allowed() */

    should_compact_retry()
        compaction_zonelist_suitable()
            /* does not check cpuset_zone_allowed() */
Should we add __cpuset_zone_allowed() checks in the three functions
listed above,
or should we set the nodemask in __alloc_pages_slowpath() if it is empty
and the request comes from user space?
Adding the checks separately in the three functions might be safer and
easier to review.
Any suggestions would be appreciated.
Thanks.
> --
> Michal Hocko
> SUSE Labs
* Re: [External] Re: [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
From: Michal Hocko @ 2024-08-22 6:24 UTC
To: Zhongkun He; +Cc: akpm, mgorman, hannes, linux-mm, linux-kernel, lizefan.x
On Thu 22-08-24 11:15:34, Zhongkun He wrote:
> On Wed, Aug 21, 2024 at 11:00 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Wed 21-08-24 21:59:00, Zhongkun He wrote:
> > > I found a problem on my test machine: should_reclaim_retry() does not
> > > get the right node when I set cpuset.mems.
> > >
> > > 1. Test steps and machine info:
> > > ------------
> > > root@vm:/sys/fs/cgroup/test# numactl -H | grep size
> > > node 0 size: 9477 MB
> > > node 1 size: 10079 MB
> > > node 2 size: 10079 MB
> > > node 3 size: 10078 MB
> > >
> > > root@vm:/sys/fs/cgroup/test# cat cpuset.mems
> > > 2
> > >
> > > root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
> > > stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> > > stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
> > > stress: WARN: [33430] (427) now reaping child worker processes
> > > stress: FAIL: [33430] (461) failed run completed in 2s
> >
> > OK, so the test gets killed as expected.
> >
> > > 2. reclaim_retry_zone info:
> > >
> > > We can only allocate pages from node=2, but reclaim_retry_zone reports
> > > node=0 and returns true.
> > >
> > > root@vm:/sys/kernel/debug/tracing# cat trace
> > > stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
> > > [... 14 similar lines trimmed ...]
> > > stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
> >
> > Are you suggesting that the problem is that should_reclaim_retry is
> > iterating over nodes which are not allowed by cpusets, and that this
> > makes the retry loop run more often than necessary?
>
> Yes, exactly.
>
> >
> > Is there any reason why you haven't done the same thing the page
> > allocator does in this case?
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 28f80daf5c04..cbf09c0e3b8a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4098,6 +4098,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
> >                  unsigned long min_wmark = min_wmark_pages(zone);
> >                  bool wmark;
> > 
> > +                if (cpusets_enabled() &&
> > +                    (alloc_flags & ALLOC_CPUSET) &&
> > +                    !__cpuset_zone_allowed(zone, gfp_mask))
> > +                        continue;
> > +
> >                  available = reclaimable = zone_reclaimable_pages(zone);
> >                  available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> >
>
> That was my original version, but I found that the problem exists in
> other places.
> Please see the function flow below.
>
> __alloc_pages_slowpath:
>
>     get_page_from_freelist()
>         __cpuset_zone_allowed()             /* checks the node */
>
>     __alloc_pages_direct_reclaim()
>         shrink_zones()
>             cpuset_zone_allowed()           /* checks the node */
>
>     __alloc_pages_direct_compact()
>         try_to_compact_pages()
>             /* does not check cpuset_zone_allowed() */
>
>     should_reclaim_retry()
>         /* does not check cpuset_zone_allowed() */
>
>     should_compact_retry()
>         compaction_zonelist_suitable()
>             /* does not check cpuset_zone_allowed() */
>
> Should we add __cpuset_zone_allowed() checks in the three functions
> listed above,
> or should we set the nodemask in __alloc_pages_slowpath() if it is empty
> and the request comes from user space?
cpuset integration into the page allocator is rather complex (check the
ALLOC_CPUSET use). Reviewing your change to make sure all of the
subtlety is preserved is not an easy task. Therefore I would suggest
addressing the specific issue you have found.
--
Michal Hocko
SUSE Labs
* Re: [External] Re: [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
From: Zhongkun He @ 2024-08-22 6:39 UTC
To: Michal Hocko; +Cc: akpm, mgorman, hannes, linux-mm, linux-kernel, lizefan.x
> > > Are you suggesting that the problem is that should_reclaim_retry is
> > > iterating over nodes which are not allowed by cpusets, and that this
> > > makes the retry loop run more often than necessary?
> >
> > Yes, exactly.
> >
> > >
> > > Is there any reason why you haven't done the same thing the page
> > > allocator does in this case?
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 28f80daf5c04..cbf09c0e3b8a 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4098,6 +4098,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
> > >                  unsigned long min_wmark = min_wmark_pages(zone);
> > >                  bool wmark;
> > > 
> > > +                if (cpusets_enabled() &&
> > > +                    (alloc_flags & ALLOC_CPUSET) &&
> > > +                    !__cpuset_zone_allowed(zone, gfp_mask))
> > > +                        continue;
> > > +
> > >                  available = reclaimable = zone_reclaimable_pages(zone);
> > >                  available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> > >
> >
> > That was my original version, but I found that the problem exists in
> > other places.
> > Please see the function flow below.
> >
> > __alloc_pages_slowpath:
> >
> >     get_page_from_freelist()
> >         __cpuset_zone_allowed()             /* checks the node */
> >
> >     __alloc_pages_direct_reclaim()
> >         shrink_zones()
> >             cpuset_zone_allowed()           /* checks the node */
> >
> >     __alloc_pages_direct_compact()
> >         try_to_compact_pages()
> >             /* does not check cpuset_zone_allowed() */
> >
> >     should_reclaim_retry()
> >         /* does not check cpuset_zone_allowed() */
> >
> >     should_compact_retry()
> >         compaction_zonelist_suitable()
> >             /* does not check cpuset_zone_allowed() */
> >
> > Should we add __cpuset_zone_allowed() checks in the three functions
> > listed above,
> > or should we set the nodemask in __alloc_pages_slowpath() if it is empty
> > and the request comes from user space?
>
> cpuset integration into the page allocator is rather complex (check the
> ALLOC_CPUSET use). Reviewing your change to make sure all of the
> subtlety is preserved is not an easy task. Therefore I would suggest
> addressing the specific issue you have found.
>
Got it, thanks for your suggestion. I will send the next version soon.
> --
> Michal Hocko
> SUSE Labs