linux-mm.kvack.org archive mirror
From: Suren Baghdasaryan <surenb@google.com>
To: Tianyang Zhang <zhangtianyang@loongson.cn>
Cc: Harry Yoo <harry.yoo@oracle.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org, Vlastimil Babka <vbabka@suse.cz>,
	 Michal Hocko <mhocko@suse.com>,
	Brendan Jackman <jackmanb@google.com>,
	 Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH] mm/page_alloc.c: Avoid infinite retries caused by cpuset race
Date: Wed, 23 Apr 2025 08:35:11 -0700	[thread overview]
Message-ID: <CAJuCfpHBB+0HG_2ZJ4h683TYJEz__c+L3Z6RZUbzX+7R1_VSNg@mail.gmail.com> (raw)
In-Reply-To: <f7f831bc-8887-4974-a869-f5f473d3040c@loongson.cn>

On Tue, Apr 22, 2025 at 7:39 PM Tianyang Zhang
<zhangtianyang@loongson.cn> wrote:
>
> Hi, Suren
>
> 在 2025/4/22 上午4:28, Suren Baghdasaryan 写道:
> > On Mon, Apr 21, 2025 at 3:00 AM Harry Yoo <harry.yoo@oracle.com> wrote:
> >> On Wed, Apr 16, 2025 at 04:24:05PM +0800, Tianyang Zhang wrote:
> >>> __alloc_pages_slowpath() has no change detection for ac->nodemask
> >>> in the retry path, while cpuset can modify it in parallel. For
> >>> processes whose mempolicy is MPOL_BIND, this results in
> >>> ac->nodemask changing underneath them: should_reclaim_retry() then
> >>> judges based on the latest nodemask and jumps to retry, while
> >>> get_page_from_freelist() only traverses the zonelist from
> >>> ac->preferred_zoneref, which was selected from an expired nodemask.
> >>> This can cause infinite retries in some cases.
> >>>
> >>> cpu 64:
> >>> __alloc_pages_slowpath {
> >>>          /* ..... */
> >>> retry:
> >>>          /* ac->nodemask = 0x1, ac->preferred->zone->nid = 1 */
> >>>          if (alloc_flags & ALLOC_KSWAPD)
> >>>                  wake_all_kswapds(order, gfp_mask, ac);
> >>>          /* cpu 1:
> >>>          cpuset_write_resmask
> >>>              update_nodemask
> >>>                  update_nodemasks_hier
> >>>                      update_tasks_nodemask
> >>>                          mpol_rebind_task
> >>>                           mpol_rebind_policy
> >>>                            mpol_rebind_nodemask
> >>>                // mempolicy->nodes, which ac->nodemask
> >>>                // points to, has been modified
> >>>
> >>>          */
> >>>          /* ac->nodemask = 0x3, ac->preferred->zone->nid = 1 */
> >>>          if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
> >>>                                   did_some_progress > 0, &no_progress_loops))
> >>>                  goto retry;
> >>> }
> >>>
> >>> Starting multiple cpuset01 instances from LTP simultaneously can
> >>> quickly reproduce this issue on a multi-node server once maximum
> >>> memory pressure is reached, provided swap is enabled.
> >>>
> >>> Signed-off-by: Tianyang Zhang <zhangtianyang@loongson.cn>
> >>> ---
> >> What commit does it fix and should it be backported to -stable?
> > I think it fixes 902b62810a57 ("mm, page_alloc: fix more premature OOM
> > due to race with cpuset update").
>
> I think this issue is unlikely to have been introduced by commit
> 902b62810a57, as the infinite-retries section from
>
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4568
> to
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4628
>
> where the cpuset race condition occurs is left unmodified by commit
> 902b62810a57.

Yeah, you are right. After looking into it some more, 902b62810a57 is
the wrong patch to blame for this infinite loop.

>
> >> There's a new 'MEMORY MANAGEMENT - PAGE ALLOCATOR' entry (only in
> >> Andrew's mm.git repository now).
> >>
> >> Let's Cc the page allocator folks here!
> >>
> >> --
> >> Cheers,
> >> Harry / Hyeonggon
> >>
> >>>   mm/page_alloc.c | 8 ++++++++
> >>>   1 file changed, 8 insertions(+)
> >>>
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index fd6b865cb1ab..1e82f5214a42 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -4530,6 +4530,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >>>        }
> >>>
> >>>   retry:
> >>> +     /*
> >>> +      * Deal with possible cpuset update races or zonelist updates to avoid
> >>> +      * infinite retries.
> >>> +      */
> >>> +     if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
> >>> +         check_retry_zonelist(zonelist_iter_cookie))
> >>> +             goto restart;
> >>> +
> > We have this check later in this block:
> > https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652,
> > so IIUC you effectively are moving it to be called before
> > should_reclaim_retry(). If so, I think you should remove the old one
> > (the one I linked earlier) as it seems to be unnecessary duplication
> > at this point.
> In my understanding, the code at
>
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
>
> was introduced to prevent unnecessary OOM (out-of-memory) conditions
> in __alloc_pages_may_oom().
>
> If the old code is removed, the newly added check (on retry loop
> entry) cannot guarantee that the cpuset is still valid when the flow
> reaches __alloc_pages_may_oom(), especially if scheduling occurs in
> between.

Well, rescheduling can happen even between
https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
and https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4657
but I see your point. Also should_reclaim_retry() does not include
zonelist change detection, so keeping the checks at
https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
sounds like a good idea.

>
> Therefore, I think retaining the original code logic is necessary to
> ensure correctness under concurrency.
>
> >
> >
> >>>        /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
> >>>        if (alloc_flags & ALLOC_KSWAPD)
> >>>                wake_all_kswapds(order, gfp_mask, ac);
> >>> --
> >>> 2.20.1
> >>>
> >>>
> Thanks
>


Thread overview: 16+ messages
2025-04-16  8:24 Tianyang Zhang
2025-04-21 10:00 ` Harry Yoo
2025-04-21 20:28   ` Suren Baghdasaryan
2025-04-23  2:38     ` Tianyang Zhang
2025-04-23 15:35       ` Suren Baghdasaryan [this message]
2025-05-14  7:15         ` Vlastimil Babka
2025-04-22 12:10   ` Tianyang Zhang
2025-04-23  0:11     ` Andrew Morton
2025-04-23  0:22       ` Suren Baghdasaryan
2025-05-11  3:07         ` Andrew Morton
2025-05-13 16:26           ` Suren Baghdasaryan
2025-05-13 19:16             ` Andrew Morton
2025-05-13 19:33               ` Suren Baghdasaryan
2025-05-14  7:34               ` Vlastimil Babka
2025-05-14 22:42                 ` Andrew Morton
2025-05-15  3:19                 ` Tianyang Zhang
