linux-mm.kvack.org archive mirror
From: Raghavendra K T <raghavendra.kt@amd.com>
To: Wupeng Ma <mawupeng1@huawei.com>,
	akpm@linux-foundation.org, vbabka@suse.cz
Cc: surenb@google.com, jackmanb@google.com, hannes@cmpxchg.org,
	ziy@nvidia.com, wangkefeng.wang@huawei.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Nikhil Dhama <nikhil.dhama@amd.com>,
	"Huang, Ying" <ying.huang@linux.alibaba.com>,
	"Rao, Bharata Bhasker" <bharata@amd.com>
Subject: Re: [RFC PATCH] mm: Drain PCP during direct reclaim
Date: Wed, 11 Jun 2025 13:25:55 +0530	[thread overview]
Message-ID: <ba26fba0-f623-4857-8a2c-6c7d0967287e@amd.com>
In-Reply-To: <20250606065930.3535912-1-mawupeng1@huawei.com>

++
On 6/6/2025 12:29 PM, Wupeng Ma wrote:
> Memory retained in Per-CPU Pages (PCP) caches can prevent hugepage
> allocations from succeeding despite sufficient free system memory. This
> occurs because:
> 1. Hugepage allocations don't actively trigger PCP draining
> 2. The direct reclaim path fails to trigger drain_all_pages() when:
>     a) all pages in the zone are free or hugetlb pages (!did_some_progress)
>     b) compaction is skipped due to costly-order watermark checks (COMPACT_SKIPPED)
> 
> Reproduction:
>    - Allocate a page and free it via put_page() so it is released to the PCP
>    - Observe hugepage reservation failure
> 
> Solution:
>    Actively drain the PCP lists during direct reclaim for memory
>    allocations. This increases the page allocation success rate by
>    making stranded pages available to allocations of any order.
> 
> Verification:
>    This issue can be reproduced easily in ZONE_MOVABLE with the
>    following steps:
> 
> w/o this patch
>    # numactl -m 2 dd if=/dev/urandom of=/dev/shm/testfile bs=4k count=64
>    # rm -f /dev/shm/testfile
>    # sync
>    # echo 3 > /proc/sys/vm/drop_caches
>    # echo 2048 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>    # cat /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>      2029
> 
> w/ this patch
>    # numactl -m 2 dd if=/dev/urandom of=/dev/shm/testfile bs=4k count=64
>    # rm -f /dev/shm/testfile
>    # sync
>    # echo 3 > /proc/sys/vm/drop_caches
>    # echo 2048 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>    # cat /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>      2047
> 
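
The change being discussed amounts to draining the per-CPU free lists
on the direct reclaim path even when reclaim itself reports no
progress, so that order-0 pages stranded in PCP caches are returned to
the buddy freelists before a high-order allocation gives up. Below is
a simplified sketch of that idea against mm/page_alloc.c; the
placement and the omitted details (psi accounting, highatomic
unreserve, the early bail-out on !did_some_progress) are illustrative
rather than the actual RFC diff, although drain_all_pages(),
__perform_reclaim() and get_page_from_freelist() are the existing
symbols there:

  /*
   * Sketch only: mainline drains the PCP lists solely when the
   * post-reclaim freelist attempt fails, and bails out early when
   * reclaim made no progress, so pages stranded on remote PCP lists
   * may never be drained in that case.
   */
  static inline struct page *
  __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
                               unsigned int alloc_flags,
                               const struct alloc_context *ac,
                               unsigned long *did_some_progress)
  {
          struct page *page = NULL;
          bool drained = false;

          *did_some_progress = __perform_reclaim(gfp_mask, order, ac);

  retry:
          page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

          /*
           * Drain per-CPU free lists even when reclaim made no
           * progress: the "missing" memory may be sitting on other
           * CPUs' PCP caches rather than on the buddy freelists.
           */
          if (!page && !drained) {
                  drain_all_pages(NULL);
                  drained = true;
                  goto retry;
          }

          return page;
  }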

Hello Wupeng Ma,

Could you also post iperf/netperf results for this patch in the future?

Thanks and Regards
- Raghu



Thread overview: 4+ messages
2025-06-06  6:59 Wupeng Ma
2025-06-06 11:19 ` Johannes Weiner
2025-06-10  9:18   ` mawupeng
2025-06-11  7:55 ` Raghavendra K T [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=ba26fba0-f623-4857-8a2c-6c7d0967287e@amd.com \
    --to=raghavendra.kt@amd.com \
    --cc=akpm@linux-foundation.org \
    --cc=bharata@amd.com \
    --cc=hannes@cmpxchg.org \
    --cc=jackmanb@google.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mawupeng1@huawei.com \
    --cc=nikhil.dhama@amd.com \
    --cc=surenb@google.com \
    --cc=vbabka@suse.cz \
    --cc=wangkefeng.wang@huawei.com \
    --cc=ying.huang@linux.alibaba.com \
    --cc=ziy@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.