From: Nikhil Dhama <nikdhama@amd.com>
To: Raghavendra K T <raghavendra.kt@amd.com>,
	Nikhil Dhama <nikhil.dhama@amd.com>,
	akpm@linux-foundation.org, ying.huang@linux.alibaba.com
Cc: Ying Huang <huang.ying.caritas@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Bharata B Rao <bharata@amd.com>,
	Raghavendra <raghavendra.kodsarathimmappa@amd.com>
Subject: Re: [PATCH -V2] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation
Date: Tue, 25 Mar 2025 22:53:16 +0530	[thread overview]
Message-ID: <21d55e78-de3e-4a95-acef-5fdc144f3a9a@amd.com> (raw)
In-Reply-To: <4c40bf22-292c-4a3a-bd32-4461c2d4f7d9@amd.com>


On 3/25/2025 1:30 PM, Raghavendra K T wrote:
> On 3/19/2025 1:44 PM, Nikhil Dhama wrote:
> [...]
>>> And, do you run network-related workloads on one machine?  If so,
>>> please try to run them on two machines instead, with clients and
>>> servers run on different machines.  At least, please use different
>>> sockets for clients and servers, because a larger pcp->free_count will
>>> make it easier to trigger the free_high heuristics.  If that is the
>>> case, please try to optimize the free_high heuristics directly too.
>>
>> I agree with Ying Huang that the above change is not the best possible
>> fix for the issue. On further analysis I found that the root cause is
>> the frequent pcp high-order flushes. During a 20-second iperf3 run I
>> observed on average 5 pcp high-order flushes in kernel v6.6, whereas in
>> v6.7 I observed about 170.
>> Tracing pcp->free_count, I found that with patch v1 (the patch I
>> suggested earlier) free_count goes negative, which reduces the number
>> of times the free_high heuristic is triggered and hence reduces the
>> high-order flushes.
>>
>> As Ying Huang suggested, increasing the batch size for the free_high
>> heuristic helps performance. I tried different scaling factors to find
>> the most suitable batch value for the free_high heuristic:
>>
>>                         score    # free_high
>>     ---------------     -----    -----------
>>     v6.6 (base)           100              4
>>     v6.12 (batch*1)        69            170
>>     batch*2                69            150
>>     batch*4                74            101
>>     batch*5               100             53
>>     batch*6               100             36
>>     batch*8               100              3
>>
>> Scaling the batch for the free_high heuristic by a factor of 5 restores
>> the performance.
>
> Hello Nikhil,
>
> Thanks for looking further into this. But from a design standpoint, it
> is not clear how scaling the batch size by 5 helps here (Andrew's
> original question).
>
> In any case, can you post the patch set in a new email so that the
> patch below is not lost in the discussion thread?

Hi Raghavendra,

Thanks, I have posted the patch set in a new email:
https://lore.kernel.org/linux-mm/20250325171915.14384-1-nikhil.dhama@amd.com/
with a better explanation of how scaling the batch helps here.
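
In short, the idea is that the free_high heuristic fires once
pcp->free_count crosses a threshold derived from the pcp batch, so
raising that threshold makes high-order flushes rarer. A minimal sketch
of where such a scale factor would apply is below; the helper name and
the FREE_HIGH_SCALE constant are only illustrative (the real condition
lives in free_unref_page_commit() in mm/page_alloc.c, and the posted
patch is the authoritative version):

	/* Hypothetical scale factor; 5 restored the iperf3 score above. */
	#define FREE_HIGH_SCALE	5

	/*
	 * Illustrative helper: decide whether a high-order free should
	 * trigger a free_high flush of the pcp lists.
	 */
	static bool should_trigger_free_high(struct per_cpu_pages *pcp,
					     int batch, unsigned int order)
	{
		/* Only non-zero orders up to the costly order are considered. */
		if (!order || order > PAGE_ALLOC_COSTLY_ORDER)
			return false;

		/* Fire only after enough recent frees have accumulated. */
		return pcp->free_count >= batch * FREE_HIGH_SCALE;
	}

With a factor of 5 the threshold is large enough that the bursty frees
in the iperf3 run no longer trip the heuristic on every burst, which
matches the drop in the "# free_high" column above.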

Thanks,
Nikhil




Thread overview: 12+ messages
2025-01-07  9:17 [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation Nikhil Dhama
2025-01-08  5:05 ` Andrew Morton
2025-01-09 11:42   ` Nikhil Dhama
2025-01-15 11:06     ` Huang, Ying
2025-01-15 11:19   ` [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation, Huang, Ying
2025-01-29  4:31     ` Andrew Morton
2025-02-12  5:04       ` [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation Nikhil Dhama
2025-02-12  8:40         ` Huang, Ying
2025-02-12 10:06           ` Nikhil Dhama
2025-03-19  8:14           ` [PATCH -V2] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation Nikhil Dhama
2025-03-25  8:00             ` Raghavendra K T
2025-03-25 17:23               ` Nikhil Dhama [this message]
