linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: mawupeng <mawupeng1@huawei.com>
Cc: <mhocko@suse.com>,  <akpm@linux-foundation.org>,
	<mgorman@techsingularity.net>,  <dmaluka@chromium.org>,
	<liushixin2@huawei.com>,  <wangkefeng.wang@huawei.com>,
	<linux-mm@kvack.org>,  <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm, proc: collect percpu free pages into the free pages
Date: Wed, 11 Sep 2024 13:37:21 +0800	[thread overview]
Message-ID: <87h6amwy26.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <26e53efe-7a54-499a-8d3f-345d29d90348@huawei.com> (mawupeng's message of "Tue, 10 Sep 2024 20:11:36 +0800")

mawupeng <mawupeng1@huawei.com> writes:

> On 2024/9/4 15:28, Michal Hocko wrote:
>> On Wed 04-09-24 14:49:20, mawupeng wrote:
>>>
>>>
>>> On 2024/9/3 16:09, Michal Hocko wrote:
>>>> On Tue 03-09-24 09:50:48, mawupeng wrote:
>>>>>> Draining the remote PCP may not be that expensive now after commit
>>>>>> 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a spinlock").
>>>>>> No IPI is needed to drain the remote PCP.
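>>>>>>
>>>>>> Roughly what drain_pages_zone() does since that commit (a simplified
>>>>>> sketch; details vary by kernel version): any CPU can now take the
>>>>>> pcp->lock spinlock and free a remote CPU's cached pages directly,
>>>>>> where the old scheme had to IPI the owning CPU:
>>>>>>
>>>>>> 	static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>>>>>> 	{
>>>>>> 		struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
>>>>>>
>>>>>> 		spin_lock(&pcp->lock);		/* remote access, no IPI */
>>>>>> 		if (pcp->count)
>>>>>> 			free_pcppages_bulk(zone, pcp->count, pcp, 0);
>>>>>> 		spin_unlock(&pcp->lock);
>>>>>> 	}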
>>>>>
>>>>> This looks really great; we can think of a way to drain the PCP
>>>>> before going into the slow path and swapping.
>>>>
>>>> We currently drain after the first unsuccessful direct reclaim run.
>>>> Is that insufficient?
>>>
>>> The reason I said the draining of the PCP is insufficient or expensive
>>> is based on your comment [1] :-). Since IPIs have not been required
>>> since commit 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a
>>> spinlock"), this could be much better.
>>>
>>> [1]: https://lore.kernel.org/linux-mm/ZWRYZmulV0B-Jv3k@tiehlicka/
>> 
>> there are other reasons I have mentioned in that reply which play role
>> as well.
>> 
>>>> Should we do a less aggressive draining sooner? Ideally
>>>> restricted to CPUs on the same NUMA node, maybe? Do you have any
>>>> specific workloads that would benefit from this?
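>>>>
>>>> A hypothetical sketch of such a NUMA-local drain (nothing like this
>>>> is in the tree; drain_pages() is the existing helper that drains all
>>>> zones for one CPU):
>>>>
>>>> 	static void drain_node_pages(int nid)
>>>> 	{
>>>> 		unsigned int cpu;
>>>>
>>>> 		for_each_cpu(cpu, cpumask_of_node(nid))
>>>> 			drain_pages(cpu);
>>>> 	}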
>>>
>>> Currently the problem is the amount of memory held in the PCP lists,
>>> which can grow to 4.6% (24644M) of the total 512G of memory.
>> 
>> Why is that a problem? 
>
> MemAvailable
>               An estimate of how much memory is available for starting new
>               applications, without swapping. Calculated from MemFree,
>               SReclaimable, the size of the file LRU lists, and the low
>               watermarks in each zone.
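>
> Simplified from how the kernel computes this (si_mem_available() in
> mm/page_alloc.c; the exact details vary by kernel version):
>
> 	available  = free_pages - totalreserve_pages;
> 	pagecache  = active_file + inactive_file;
> 	available += pagecache - min(pagecache / 2, wmark_low);
> 	available += reclaimable - min(reclaimable / 2, wmark_low);
>
> where wmark_low is the sum of all zones' low watermarks and reclaimable
> is roughly KReclaimable (reclaimable slab plus other reclaimable kernel
> memory). Note that the PCP pages appear in none of these terms.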
>
> The PCP memory is essentially available memory and will be reclaimed before
> OOM. In essence, it is not fundamentally different from reclaiming file
> pages, as both are reclaimed within __alloc_pages_direct_reclaim().
> Therefore, why shouldn't it be included in MemAvailable, to avoid confusion?
>
> __alloc_pages_direct_reclaim()
>   __perform_reclaim()
>   page = get_page_from_freelist()
>   if (!page && !drained) {
>     drain_all_pages(NULL);  /* flush every CPU's PCP lists */
>     drained = true;
>     goto retry;             /* retry get_page_from_freelist() */
>   }
>
>
>> Just because some tools are miscalculating memory
>> pressure because they are based on MemAvailable? Or does this lead to
>> performance regressions on the kernel side? In other words would the
>> same workload behaved better if the amount of pcp-cache was reduced
>> without any userspace intervention?

Back to the original PCP cache issue.  I want to make sure whether PCP
auto-tuning works properly on your system.  If it does, the total number
of PCP pages should be less than the sum of the zones' low watermarks.
Can you verify that first?
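
In kernel terms, the invariant to check is roughly the following (just a
sketch; in practice the per-CPU "count" values and the "low" watermarks
can be read per zone from /proc/zoneinfo):

	unsigned long total_pcp = 0, total_low = 0;
	struct zone *zone;
	unsigned int cpu;

	for_each_populated_zone(zone) {
		for_each_online_cpu(cpu)
			total_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
		total_low += low_wmark_pages(zone);
	}
	/* with auto-tuning working as intended: total_pcp < total_low */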

--
Best Regards,
Huang, Ying


Thread overview: 11+ messages
2024-08-30  1:44 Wupeng Ma
2024-08-30  7:53 ` Huang, Ying
2024-09-02  1:11   ` mawupeng
2024-09-02  1:29     ` Huang, Ying
2024-09-03  1:50       ` mawupeng
2024-09-03  8:09         ` Michal Hocko
2024-09-04  6:49           ` mawupeng
2024-09-04  7:28             ` Michal Hocko
2024-09-10 12:11               ` mawupeng
2024-09-10 13:11                 ` Michal Hocko
2024-09-11  5:37                 ` Huang, Ying [this message]
