From: Mel Gorman <mgorman@techsingularity.net>
To: Chen Wandun <chenwandun@huawei.com>
Cc: akpm@linux-foundation.org, vbabka@suse.cz, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, wangkefeng.wang@huawei.com
Subject: Re: [PATCH] mm: fix pcp count beyond pcp high in pcplist allocation
Date: Tue, 1 Nov 2022 10:40:40 +0000
Message-ID: <20221101104040.o6gqtyyd5d4pkhle@techsingularity.net>
In-Reply-To: <316bc0a2-34d9-e485-11d2-f3dffd0fdea4@huawei.com>
On Mon, Oct 31, 2022 at 11:37:35AM +0800, Chen Wandun wrote:
> > > > As is, the patch could result in a batch request of 0 and
> > > I forgot this, the patch needs some improvement, thanks.
> > >
> > > > fall through to allocating from the zone list anyway, defeating the
> > > > purpose of the PCP allocator and probably regressing performance in some
> > > > cases.
> > > Same as I understand. How about setting high/batch for each order in
> > > the pcplist?
> > Using anything other than (X >> order) consumes storage. Even if storage
> > was to be used, selecting a value per-order would be impossible because
> > the correct value would depend on frequency of requests for each order.
> > That can only be determined at runtime and the cost of determining the
> > value would likely exceed the benefit.
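
For reference, the (X >> order) scaling referred to above works roughly
along these lines (an illustrative sketch, not the exact mainline code):

	/*
	 * Illustrative only: derive the refill batch for a PCP list of a
	 * given order from the base pcp->batch. Scaling by order keeps
	 * high-order refills from pulling too many base pages, while the
	 * lower bound makes sure a refill never requests 0 pages (which
	 * would fall back to allocating from the zone list).
	 */
	static unsigned int pcp_refill_batch(unsigned int base_batch,
					     unsigned int order)
	{
		unsigned int batch = base_batch >> order;

		return batch ? batch : 1;
	}
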
>
> Can we set an experience value for pcp batch for each order during the
> init stage?
I'm not sure what you mean by "experience value" but maybe you meant
experimental value?
> If so, we can control the pcp size accurately. Nowadays, the size of each
> order in the pcp list is full of randomness. I don't know which scheme is
> better for performance.
>
It is something that could be experimented with but the main question is
-- what should those per-order values be? One option would be to enforce
pcp->high for all high-order values except THP if THP is enabled. That would
limit some of the issues with pcp->high being exceeded as even if two THPs
are refilled, one of them is allocated immediately. I wasn't convinced it was
necessary when implementing high-order PCP support but it could be evaluated.
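
To make that concrete, a rough sketch of the idea (illustrative only, not
a tested patch; the helper name, the THP check and where it would be
called from are placeholders):

	/*
	 * Illustrative sketch: cap a high-order refill so pcp->count does
	 * not go beyond pcp->high, but leave THP-sized requests alone as
	 * one of the refilled pages is handed out immediately anyway.
	 */
	static int nr_pcp_refill(struct per_cpu_pages *pcp,
				 unsigned int order, int batch)
	{
		int room;

		if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
		    order == HPAGE_PMD_ORDER)
			return batch;

		room = (READ_ONCE(pcp->high) - pcp->count) >> order;
		return max(min(batch, room), 1);
	}
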
--
Mel Gorman
SUSE Labs