linux-mm.kvack.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: Wupeng Ma <mawupeng1@huawei.com>
Cc: akpm@linux-foundation.org, vbabka@suse.cz, surenb@google.com,
	jackmanb@google.com, ziy@nvidia.com, wangkefeng.wang@huawei.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: Drain PCP during direct reclaim
Date: Fri, 6 Jun 2025 13:19:53 +0200	[thread overview]
Message-ID: <20250606111953.GB1118@cmpxchg.org> (raw)
In-Reply-To: <20250606065930.3535912-1-mawupeng1@huawei.com>

On Fri, Jun 06, 2025 at 02:59:30PM +0800, Wupeng Ma wrote:
> Memory retained in Per-CPU Pages (PCP) caches can prevent hugepage
> allocations from succeeding despite sufficient free system memory. This
> occurs because:
> 1. Hugepage allocations don't actively trigger PCP draining
> 2. Direct reclaim path fails to trigger drain_all_pages() when:
>    a) All zone pages are free/hugetlb (!did_some_progress)
>    b) Compaction skips due to costly order watermarks (COMPACT_SKIPPED)

This doesn't sound quite right. Direct reclaim skips when compaction
is suitable. Compaction says COMPACT_SKIPPED when it *isn't* suitable.

So if direct reclaim didn't drain, presumably compaction ran but
returned COMPLETE or PARTIAL_SKIPPED because the freelist checks in
__compact_finished() never succeed due to the pcp?

> @@ -4137,28 +4137,22 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  {
>  	struct page *page = NULL;
>  	unsigned long pflags;
> -	bool drained = false;
>  
>  	psi_memstall_enter(&pflags);
>  	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> -	if (unlikely(!(*did_some_progress)))
> -		goto out;
> -
> -retry:
> -	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> +	if (likely(*did_some_progress))
> +		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
>  
>  	/*
>  	 * If an allocation failed after direct reclaim, it could be because
>  	 * pages are pinned on the per-cpu lists or in high alloc reserves.
>  	 * Shrink them and try again
>  	 */
> -	if (!page && !drained) {
> +	if (!page) {
>  		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
> -		drained = true;
> -		goto retry;
> +		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

This seems like the wrong place to fix the issue.

Kcompactd has a drain_all_pages() call. Move that to compact_zone(),
so that it also applies to the try_to_compact_pages() path?
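
For readers less familiar with mm/compaction.c, the suggestion amounts to
roughly the following pseudocode sketch (the placement, loop shape, and use of
cc->zone are my guesses for illustration, not a tested patch):

```
/* pseudocode sketch of the suggested move, not a real patch */
static enum compact_result compact_zone(struct compact_control *cc, ...)
{
	...
	while ((ret = __compact_finished(cc)) == COMPACT_CONTINUE) {
		... isolate and migrate pages ...
	}

	/*
	 * Flush per-CPU pages before returning, so pages freed during
	 * compaction are visible to the caller's freelist checks.
	 * Doing it here covers both the kcompactd path (which drains
	 * today) and the try_to_compact_pages() path (which doesn't).
	 */
	drain_all_pages(cc->zone);
	...
	return ret;
}
```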


Thread overview: 4+ messages
2025-06-06  6:59 Wupeng Ma
2025-06-06 11:19 ` Johannes Weiner [this message]
2025-06-10  9:18   ` mawupeng
2025-06-11  7:55 ` Raghavendra K T
