linux-mm.kvack.org archive mirror
From: Nanyong Sun <sunnanyong@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: <hughd@google.com>, <akpm@linux-foundation.org>,
	<david@redhat.com>, <ryan.roberts@arm.com>, <baohua@kernel.org>,
	<baolin.wang@linux.alibaba.com>, <ioworker0@gmail.com>,
	<peterx@redhat.com>, <ziy@nvidia.com>,
	<wangkefeng.wang@huawei.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] mm: control mthp per process/cgroup
Date: Mon, 19 Aug 2024 13:58:15 +0800	[thread overview]
Message-ID: <3ac1e404-a531-a380-7a2f-6adae4640da6@huawei.com> (raw)
In-Reply-To: <Zr-XVn1ExJ7_LSLS@casper.infradead.org>

On 2024/8/17 2:15, Matthew Wilcox wrote:

> On Fri, Aug 16, 2024 at 05:13:27PM +0800, Nanyong Sun wrote:
>> The large folio control interfaces are currently system-wide and tend to
>> default to on: file systems use large folios by default if supported, and
>> mTHP tends to be enabled by default at boot [1].
>> With large folios enabled, some workloads see a performance benefit, but
>> others may not, and side effects can appear: memory usage may increase,
>> and direct reclaim may run more often because of more large-order
>> allocations, which in turn increases CPU usage. We observed this in a
>> production environment running nginx: the pgscan_direct count increased
>> a lot compared to before, reaching up to 3000 times per second, and
>> disabling file large folios fixed this.
> Can you share any details of your nginx workload that shows a regression?
> The heuristics for allocating large folios are completely untuned, so
> having data for a workload which performs better with small folios is
> very valuable.
>
> .
The RPS (requests per second), which is the performance metric of the nginx
workload, showed no regression (and also no improvement); we just observed
that the pgscan_direct rate is much higher with large folios.
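(For reference, the pgscan_direct counter comes from /proc/vmstat; a minimal
C sketch of how such a rate can be sampled, with the 1-second interval chosen
purely as an example:)

/* Sample the pgscan_direct rate (events/second) from /proc/vmstat. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long long read_pgscan_direct(void)
{
	char name[64];
	unsigned long long val = 0, ret = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "pgscan_direct")) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	unsigned long long before = read_pgscan_direct();

	sleep(1);
	printf("pgscan_direct: %llu/s\n", read_pgscan_direct() - before);
	return 0;
}
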
So far, we have run benchmarks for several workloads; some showed no
performance improvement, but none showed a regression either.
In a production environment, different workloads may be deployed on the same
machine. Should we therefore add a process/cgroup-level control so that
workloads which do not benefit from mTHP can be prevented from using it?
That way, the memory overhead and direct reclaim caused by mTHP could be
avoided for those processes/cgroups.
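(For comparison only: the closest existing per-process knob is
prctl(PR_SET_THP_DISABLE), which opts a whole process out of THP rather than
controlling individual mTHP sizes. A minimal sketch of that existing
interface, not of the control proposed here:)

/*
 * Disable THP for the calling process via the existing
 * PR_SET_THP_DISABLE prctl; the proposal above would instead allow
 * finer-grained (per-size mTHP) control per process/cgroup.
 */
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0)) {
		perror("prctl(PR_SET_THP_DISABLE)");
		return 1;
	}
	printf("THP disabled for this process: %d\n",
	       prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0));
	return 0;
}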

Thread overview: 6+ messages
2024-08-16  9:13 Nanyong Sun
2024-08-16 18:15 ` Matthew Wilcox
2024-08-19  5:58   ` Nanyong Sun [this message]
2024-08-26  2:26     ` Nanyong Sun
2024-09-02  9:36     ` Baolin Wang
2024-09-02 13:33       ` David Hildenbrand

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=3ac1e404-a531-a380-7a2f-6adae4640da6@huawei.com \
    --to=sunnanyong@huawei.com \
    --cc=akpm@linux-foundation.org \
    --cc=baohua@kernel.org \
    --cc=baolin.wang@linux.alibaba.com \
    --cc=david@redhat.com \
    --cc=hughd@google.com \
    --cc=ioworker0@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=peterx@redhat.com \
    --cc=ryan.roberts@arm.com \
    --cc=wangkefeng.wang@huawei.com \
    --cc=willy@infradead.org \
    --cc=ziy@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
