From: mawupeng <mawupeng1@huawei.com>
To: <hannes@cmpxchg.org>
Cc: <mawupeng1@huawei.com>, <akpm@linux-foundation.org>,
<vbabka@suse.cz>, <surenb@google.com>, <jackmanb@google.com>,
<ziy@nvidia.com>, <wangkefeng.wang@huawei.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] mm: Drain PCP during direct reclaim
Date: Tue, 10 Jun 2025 17:18:10 +0800
Message-ID: <7691fac7-f569-4fc2-9d0e-f7dad4139261@huawei.com>
In-Reply-To: <20250606111953.GB1118@cmpxchg.org>
On 2025/6/6 19:19, Johannes Weiner wrote:
> On Fri, Jun 06, 2025 at 02:59:30PM +0800, Wupeng Ma wrote:
>> Memory retained in Per-CPU Pages (PCP) caches can prevent hugepage
>> allocations from succeeding despite sufficient free system memory. This
>> occurs because:
>> 1. Hugepage allocations don't actively trigger PCP draining
>> 2. Direct reclaim path fails to trigger drain_all_pages() when:
>> a) All zone pages are free/hugetlb (!did_some_progress)
>> b) Compaction skips due to costly order watermarks (COMPACT_SKIPPED)
>
> This doesn't sound quite right. Direct reclaim skips when compaction
> is suitable. Compaction says COMPACT_SKIPPED when it *isn't* suitable.
>
> So if direct reclaim didn't drain, presumably compaction ran but
> returned COMPLETE or PARTIAL_SKIPPED because the freelist checks in
> __compact_finished() never succeed due to the pcp?
Yes, compaction does run; however, since all pages in this movable node
are either free or sitting on the PCP lists, there is no way for
compaction to reclaim a page.
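To make that concrete, here is a simplified sketch (not the real
__compact_finished(), and the helper name is made up) of the kind of
buddy freelist check compaction uses to declare success; pages sitting
on the PCP lists are not on those freelists, so the check can never pass:

#include <linux/mmzone.h>

/*
 * Simplified illustration only: compaction declares success by scanning
 * the zone's buddy freelists.  Pages parked on the per-cpu (PCP) lists
 * are invisible here, so a node that is entirely "free but PCP-cached"
 * (or hugetlb) never satisfies the check.
 */
static bool buddy_has_suitable_block(struct zone *zone, unsigned int order)
{
	unsigned int o;

	for (o = order; o < NR_PAGE_ORDERS; o++) {
		struct free_area *area = &zone->free_area[o];

		/* The real code also checks other migratetypes and fallbacks. */
		if (!free_area_empty(area, MIGRATE_MOVABLE))
			return true;	/* a large-enough buddy block exists */
	}

	return false;	/* PCP-cached pages never show up on these lists */
}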
>
>> @@ -4137,28 +4137,22 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>> {
>> struct page *page = NULL;
>> unsigned long pflags;
>> - bool drained = false;
>>
>> psi_memstall_enter(&pflags);
>> *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
>> - if (unlikely(!(*did_some_progress)))
>> - goto out;
>> -
>> -retry:
>> - page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
>> + if (likely(*did_some_progress))
>> + page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
>>
>> /*
>> * If an allocation failed after direct reclaim, it could be because
>> * pages are pinned on the per-cpu lists or in high alloc reserves.
>> * Shrink them and try again
>> */
>> - if (!page && !drained) {
>> + if (!page) {
>> unreserve_highatomic_pageblock(ac, false);
>> drain_all_pages(NULL);
>> - drained = true;
>> - goto retry;
>> + page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
>
> This seems like the wrong place to fix the issue.
>
> Kcompactd has a drain_all_pages() call. Move that to compact_zone(),
> so that it also applies to the try_to_compact_pages() path?
Since no pages are isolated during isolate_migratepages(), isn't it
strange to drain the PCP there?
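If I understand the suggestion correctly, it would look roughly like the
following (sketch only, untested, and the helper name is made up):

#include <linux/compaction.h>
#include <linux/gfp.h>

/*
 * Sketch of the suggestion, not a tested patch: a helper compact_zone()
 * could call when a run finishes without success, mirroring the existing
 * drain_all_pages() in kcompactd_do_work(), so that direct compaction via
 * try_to_compact_pages() drains the PCP as well.
 */
static void compact_drain_pcp_on_failure(struct zone *zone, unsigned int order,
					 enum compact_result ret)
{
	if (order > 0 &&
	    (ret == COMPACT_COMPLETE || ret == COMPACT_PARTIAL_SKIPPED))
		drain_all_pages(zone);	/* push per-cpu pages back to the buddy lists */
}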